
Zephyr 7B Beta on Fireworks AI

Zephyr · Hugging Face H4

Provisioned

Pricing

Type            Price (per 1M tokens)
Input tokens    $0.20
Output tokens   $0.20
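At these rates, request cost is linear in token counts. A minimal sketch of a cost estimator using the $0.20-per-1M rates from the table above (the function name and the example token counts are illustrative, not from Fireworks):

```python
# Per-1M-token rates from the pricing table above (USD).
INPUT_RATE_PER_M = 0.20
OUTPUT_RATE_PER_M = 0.20

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 1,200-token prompt with a 300-token completion.
print(f"${estimate_cost_usd(1_200, 300):.6f}")  # → $0.000300
```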

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Zephyr 7B Beta

Zephyr 7B Beta is a 7-billion-parameter large language model fine-tuned from Mistral-7B-v0.1. It is tailored to serve as a helpful virtual assistant and performs well at generating human-like responses. Training used Direct Preference Optimization (DPO) on a mix of publicly available and synthetic datasets, yielding strong results on benchmarks such as MT-Bench and AlpacaEval, especially for conversational tasks. However, it falls short of proprietary models on more complex tasks, particularly coding and mathematics. Notable limitations are its incomplete alignment to human safety preferences and the absence of in-the-loop filtering to prevent problematic outputs. Zephyr 7B Beta is English-based and is released under the MIT License.

Get Started
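Fireworks AI serves models over an OpenAI-compatible REST API. A minimal stdlib-only sketch, assuming the chat-completions endpoint `https://api.fireworks.ai/inference/v1/chat/completions` and a model id of `accounts/fireworks/models/zephyr-7b-beta` (the id follows Fireworks' usual naming scheme; verify both against the Fireworks docs before relying on them):

```python
import json
import os
import urllib.request

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Assumption: model id follows Fireworks' "accounts/fireworks/models/..."
# scheme; check the model page for the exact identifier.
MODEL_ID = "accounts/fireworks/models/zephyr-7b-beta"

def build_request(prompt: str, max_tokens: int = 256):
    """Return (headers, payload) for an OpenAI-compatible chat completion."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('FIREWORKS_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

def chat(prompt: str) -> str:
    """POST the request and return the assistant's reply text."""
    headers, payload = build_request(prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires FIREWORKS_API_KEY in the environment):
#   print(chat("Explain DPO in one sentence."))
```

The payload shape mirrors the OpenAI chat-completions format, so an OpenAI-compatible client library pointed at the Fireworks base URL would work just as well.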

Model Specs

Released        2023-10-26
Parameters      7B
Architecture    Decoder-only

Related Models on Fireworks AI