LLM Reference

Zephyr 7B Alpha

About

Zephyr 7B Alpha is a 7-billion-parameter language model fine-tuned from the Mistral-7B-v0.1 base model. It is intended to act as a helpful AI assistant and was aligned primarily with Direct Preference Optimization (DPO). It performs well at English text generation and conversational tasks, but because it was trained on a mix of public and synthetic datasets (UltraChat and UltraFeedback) without the safety alignment applied to models such as ChatGPT, it carries a higher risk of generating problematic content. The architecture is GPT-like (decoder only), and several quantized variants (such as GPTQ and GGUF) trade model size for speed at some cost in accuracy. The model has limited capability in languages other than English, and performance varies by version and quantization method.
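As a chat-tuned model, Zephyr expects conversations rendered with role markers (`<|system|>`, `<|user|>`, `<|assistant|>`), each turn terminated by the `</s>` end-of-sequence token. A minimal sketch of that formatting, assuming the standard message-dict convention (in practice the tokenizer's built-in chat template handles this):

```python
def format_zephyr_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into
    Zephyr's chat-template string.

    Each turn is wrapped as <|role|>\n...content...</s>, and a
    trailing <|assistant|> marker cues the model to generate a reply.
    """
    prompt = ""
    for msg in messages:
        prompt += f"<|{msg['role']}|>\n{msg['content']}</s>\n"
    prompt += "<|assistant|>\n"
    return prompt

example = format_zephyr_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is quantization?"},
])
print(example)
```

With the Hugging Face `transformers` library, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` produces this string directly from the model's bundled template.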

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (2)

| Provider | Input (per 1M) | Output (per 1M) | Type |
| --- | --- | --- | --- |
| Baseten API | n/a | n/a | Serverless |
| Replicate API | $0.05 | $0.25 | Serverless |
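Per-million-token pricing makes per-request cost a simple weighted sum of input and output tokens. A quick sketch, with defaults taken from the Replicate API rates above (Baseten's serverless pricing is not listed):

```python
def request_cost_usd(input_tokens, output_tokens,
                     input_price_per_m=0.05, output_price_per_m=0.25):
    """Estimate one request's cost in USD from per-1M-token prices.

    Defaults are the Replicate API rates for Zephyr 7B Alpha shown
    in the providers table.
    """
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# e.g. a 2,000-token prompt with a 500-token completion:
cost = request_cost_usd(2_000, 500)  # 0.0001 + 0.000125 = $0.000225
```

Note that output tokens cost 5x input tokens at these rates, so long completions dominate the bill.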

Specifications

Family: Zephyr
Released: 2023-10-26
Parameters: 7B
Architecture: Decoder Only
Specialization: General
Training: Fine-tuning

Created by

Community-driven open-source AI model hub

New York City, New York, United States
Founded 2016