Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $0.20 |
| Output tokens | $0.20 |
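At the listed rates, per-request cost is straightforward arithmetic. A minimal sketch using the prices from the table above (the token counts in the example are hypothetical):

```python
# Prices from the pricing table above, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.20
OUTPUT_PRICE_PER_M = 0.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion
print(f"${request_cost(2_000, 500):.6f}")
```

Because input and output are priced identically here, cost depends only on the total token count; providers that price them differently require the two terms to be kept separate, as above.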
About Zephyr 7B Beta
Zephyr 7B Beta is a 7-billion-parameter large language model fine-tuned from Mistral-7B-v0.1. It is tailored to serve as a helpful virtual assistant and performs well at generating human-like conversational responses. Training involved Direct Preference Optimization (DPO) on a mix of publicly available and synthetic datasets, yielding strong results on benchmarks such as MT-Bench and AlpacaEval, particularly for conversational tasks. Its performance nonetheless falls short of proprietary models, especially on coding and mathematics. Notable limitations are its incomplete alignment with human safety preferences and the absence of in-the-loop filtering of responses, so it can produce problematic outputs. Zephyr 7B Beta is primarily an English-language model and is released under the MIT license.