LLM Reference

Zephyr 7B Alpha

About

Zephyr 7B Alpha is a 7-billion-parameter language model fine-tuned from Mistral-7B-v0.1 and aligned as an AI assistant primarily via Direct Preference Optimization (DPO). It excels at English text generation and conversational tasks, but because it was trained on a mix of public and synthetic datasets, such as UltraChat and UltraFeedback, without the same safety alignment as models like ChatGPT, it carries a higher risk of generating problematic content. The architecture is GPT-like (decoder-only), and several quantized variants, such as GPTQ and GGUF, are available; these trade model size for performance and can reduce accuracy. The model handles languages other than English only to a limited degree, and its behavior varies with the version and quantization method used.
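Since the model is used conversationally, prompts are assembled from role-tagged turns. The sketch below builds the `<|system|>`/`<|user|>`/`<|assistant|>` layout commonly used by the Zephyr family; in practice the tokenizer's built-in chat template should be preferred, and the exact special tokens verified against the model's tokenizer configuration (this is an illustrative assumption, not official API usage):

```python
def build_zephyr_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Zephyr chat layout.

    Each turn is tagged with its role and terminated with </s>;
    the trailing <|assistant|> tag cues the model to respond.
    Sketch only: verify the tokens against the model's tokenizer.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_zephyr_prompt(
    "You are a helpful assistant.",
    "Name one use of a 7B language model.",
)
print(prompt)
```

The trailing `<|assistant|>` tag with no content is deliberate: it leaves the model positioned to generate the assistant's reply.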

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Providers (2)

Provider        Input (per 1M)   Output (per 1M)   Type
Baseten API     —                —                 Serverless
Replicate API   —                —                 Serverless

Specifications

Family: Zephyr
Parameters: 7B
Architecture: Decoder Only
Specialization: General
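Given the 7B parameter count above, the on-disk footprint of the quantized variants mentioned earlier can be ballparked from the bit width. This is a back-of-the-envelope sketch: real GPTQ/GGUF files add metadata and mixed-precision tensors, so actual sizes differ somewhat.

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a model stored at the given bit width.

    Ignores file metadata and mixed-precision layers, so this is
    only a ballpark estimate, not an exact file size.
    """
    bytes_total = n_params * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

N = 7e9  # ~7 billion parameters (Zephyr 7B)

for label, bits in [("FP16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{label:>5}: ~{approx_size_gb(N, bits):.1f} GB")
# FP16 ≈ 14 GB, 8-bit ≈ 7 GB, 4-bit ≈ 3.5 GB
```

This is the size/accuracy trade-off in concrete terms: 4-bit quantization cuts the download and memory footprint roughly fourfold versus FP16, at some cost in output quality.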