LLM Reference

Zephyr 7B Beta

About

Zephyr 7B Beta is a 7-billion-parameter large language model fine-tuned from Mistral-7B-v0.1. It is tailored to serve as an effective virtual assistant and performs well at generating human-like responses. Training involved Direct Preference Optimization (DPO) on a mix of publicly available and synthetic datasets, yielding strong results on benchmarks such as MT-Bench and AlpacaEval, especially for conversational tasks. However, its performance falls short of proprietary models, particularly on coding and mathematics tasks. Notable limitations are its incomplete alignment to human safety preferences and the absence of in-the-loop filtering to prevent problematic outputs. Zephyr 7B Beta is English-based and carries an MIT license.
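Since the model is tuned as a chat assistant, prompts are expected to follow Zephyr's chat layout, where each turn is wrapped in `<|system|>`, `<|user|>`, or `<|assistant|>` markers and terminated with `</s>`. A minimal sketch of that formatting (the helper function and example messages are illustrative, not part of any official API):

```python
def zephyr_prompt(messages):
    """Format {role, content} messages into Zephyr's chat layout.

    Each turn becomes "<|role|>\ncontent</s>\n"; the string ends with
    "<|assistant|>\n" so the model continues as the assistant.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}</s>\n" for m in messages]
    parts.append("<|assistant|>\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
print(zephyr_prompt(messages))
```

In practice, a tokenizer's built-in chat template (where available) should be preferred over hand-rolled formatting, since it stays in sync with the model's training setup.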

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (2)

Provider | Input (per 1M) | Output (per 1M) | Type
Fireworks AI | $0.20 | $0.20 | Provisioned
Replicate API | $0.05 | $0.25 | Serverless
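Provider prices are quoted per one million tokens, with separate input and output rates, so the cost of a request scales token counts down by 1M. A small sketch of that arithmetic (the token counts are a hypothetical request, not measured data):

```python
def request_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost in dollars for one request, given per-1M-token prices."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# Hypothetical request: 2,000 prompt tokens, 500 completion tokens.
replicate = request_cost(2_000, 500, 0.05, 0.25)  # $0.000225
fireworks = request_cost(2_000, 500, 0.20, 0.20)  # $0.0005
```

Note that the cheaper input rate does not always win: for output-heavy workloads, the output price dominates the total.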

Benchmark Scores (4)

Benchmark | Score | Version | Source
Google-Proof Q&A | 47.3 | diamond | research
HellaSwag | 88.1 | 10-shot | research
HumanEval | 67.8 | pass@1 | research
Massive Multitask Language Understanding | 71.4 | 5-shot | research

Specifications

Family: Zephyr
Released: 2023-10-26
Parameters: 7B
Architecture: Decoder Only
Specialization: General
Training: Fine-tuning

Created by

Community-driven open-source AI model hub

New York City, New York, United States
Founded 2016