LLM Reference

Prompt Guard 86M

About

Prompt Guard is a classifier model trained on a large corpus of attacks. It detects both explicitly malicious prompts (jailbreaks) and prompts that contain injected inputs (prompt injections).
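A minimal sketch of how such a classifier's scores might be consumed downstream. The label names and the commented-out pipeline call are assumptions based on the Hugging Face release of this model, not part of this page; verify them against the model card.

```python
# Hedged sketch: flagging a prompt from classifier scores.
# The label names below ("JAILBREAK", "INJECTION") are assumptions.

def flag_prompt(scores: dict, threshold: float = 0.5) -> bool:
    """Return True if any attack label meets or exceeds the threshold.

    `scores` maps label -> probability, e.g. the output of a
    text-classification pipeline converted to a dict.
    """
    attack_labels = {"JAILBREAK", "INJECTION"}  # assumed label set
    return any(scores.get(label, 0.0) >= threshold for label in attack_labels)


# In practice the scores would come from something like (assumption):
#   from transformers import pipeline
#   clf = pipeline("text-classification", model="meta-llama/Prompt-Guard-86M")
#   raw = clf("Ignore previous instructions", top_k=None)
#   scores = {r["label"]: r["score"] for r in raw}

print(flag_prompt({"BENIGN": 0.90, "JAILBREAK": 0.05, "INJECTION": 0.05}))  # False
print(flag_prompt({"BENIGN": 0.10, "JAILBREAK": 0.85, "INJECTION": 0.05}))  # True
```

The threshold is application-specific: a lower value catches more attacks at the cost of more false positives on benign prompts.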

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (1)

Provider            Input (per 1M)   Output (per 1M)   Type
Microsoft Foundry   $0.05            $0.05             Provisioned


Specifications

Released: 2024-07-23
Parameters: 279M
Context: 512 tokens
Architecture: Decoder-only
Specialization: general
Training: fine-tuning
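With a 512-token context, longer inputs have to be scored in pieces. The sketch below splits text into windows that fit the context and takes the highest attack score across windows; the whitespace tokenizer and injectable `score_fn` are stand-ins (assumptions), since the real model uses its own tokenizer and classifier head.

```python
# Hedged sketch: window-based scoring for inputs longer than the
# 512-token context. Whitespace tokenization is a stand-in only.
from typing import Callable, List


def chunk_tokens(tokens: List[str], window: int = 512) -> List[List[str]]:
    """Split a token list into consecutive windows of at most `window` tokens."""
    return [tokens[i:i + window] for i in range(0, len(tokens), window)]


def max_attack_score(text: str,
                     score_fn: Callable[[str], float],
                     window: int = 512) -> float:
    """Score each window independently and return the worst (highest) score.

    `score_fn` maps a text chunk to an attack probability; in practice it
    would wrap the classifier, here it is injected for illustration.
    """
    tokens = text.split()  # stand-in tokenizer (assumption)
    chunks = chunk_tokens(tokens, window) or [[]]  # score empty input once
    return max(score_fn(" ".join(chunk)) for chunk in chunks)
```

Taking the maximum over windows is a conservative choice: an injected payload buried deep in a long document still raises the overall score.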

Created by

Large-scale open-source AI for social technologies.

Menlo Park, California, United States
Founded 2013
