Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $0.05 |
| Output tokens | $0.05 |
Capabilities
VisionMultimodalReasoningFunction CallingTool UseJSON ModeCode Execution
About Prompt Guard 86M
PromptGuard is a classifier model trained on a large corpus of attacks, which is capable of detecting both explicitly malicious prompts (Jailbreaks) as well as prompts that contain injected inputs (Prompt Injections).
Get Started
Model Specs
Released2024-07-23
Parameters279M
Context512
ArchitectureDecoder Only