LLM Reference

Zephyr 7B Beta

About

Zephyr 7B Beta is a 7-billion-parameter large language model fine-tuned from Mistral-7B-v0.1. It is tailored to serve as a helpful virtual assistant and performs well at generating human-like conversational responses. Training involved Direct Preference Optimization (DPO) on a mix of publicly available and synthetic datasets, yielding strong results on benchmarks such as MT-Bench and AlpacaEval, especially for chat tasks. However, it lags behind proprietary models on more demanding tasks such as coding and mathematics. A notable limitation is its incomplete alignment with human safety preferences and the absence of in-the-loop filtering to prevent problematic outputs. Zephyr 7B Beta is trained primarily on English data and is released under the MIT license.
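As a chat-tuned model, Zephyr 7B Beta expects prompts in its own chat template, with `<|system|>`, `<|user|>`, and `<|assistant|>` role markers and each turn closed by `</s>`. The sketch below builds such a prompt string by hand for illustration; in practice you would normally let the `HuggingFaceH4/zephyr-7b-beta` tokenizer apply the template for you (e.g. via `tokenizer.apply_chat_template` in the `transformers` library), and the helper name here is just an assumption for this example.

```python
def build_zephyr_prompt(messages):
    """Build a Zephyr-7B-Beta-style prompt from a list of chat messages.

    `messages` is a list of dicts with "role" ("system"/"user"/"assistant")
    and "content" keys, the same shape used by common chat APIs.
    This mirrors Zephyr's published template; the function itself is a
    hypothetical helper, not part of any library.
    """
    parts = []
    for msg in messages:
        # Each turn: a role marker line, the content, then the </s> terminator.
        parts.append(f"<|{msg['role']}|>\n{msg['content']}</s>\n")
    # End with the assistant marker so generation continues as the assistant.
    parts.append("<|assistant|>\n")
    return "".join(parts)


prompt = build_zephyr_prompt([
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

The trailing `<|assistant|>` marker is what cues the model to produce the assistant's reply rather than continuing the user's turn.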

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Providers (2)

Provider | Input (per 1M) | Output (per 1M) | Type
Replicate API | – | – | Serverless
Fireworks AI Platform | – | – | Provisioned

Specifications

Family: Zephyr
Parameters: 7B
Architecture: Decoder Only
Specialization: General