LLM Reference

Prompt Guard 86M

About

PromptGuard is a classifier model trained on a large corpus of attacks, which is capable of detecting both explicitly malicious prompts (Jailbreaks) as well as prompts that contain injected inputs (Prompt Injections).

Capabilities

MultimodalFunction CallingTool UseJSON Mode

Providers(1)

ProviderInput (per 1M)Output (per 1M)Type
Azure OpenAI
Provisioned

Specifications

Released2024-07-23
Parameters279M
Context512
ArchitectureDecoder Only
Specializationgeneral