LLM Reference

Zephyr 7B Gemma

About

Zephyr 7B Gemma is a 7-billion-parameter large language model from Hugging Face's Zephyr series. Fine-tuned from the google/gemma-7b model using Direct Preference Optimization (DPO) on a mix of publicly available and synthetic datasets, it handles text generation, question answering, and conversational tasks, making it well suited for chatbots and virtual assistants. However, it has not undergone safety alignment via reinforcement learning from human feedback (RLHF), so it may produce problematic outputs. The composition of its training data is also unspecified, which suggests potential biases in its responses.
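Since the model is tuned for conversational use, a minimal sketch of chatting with it through the transformers library may help. The model id `HuggingFaceH4/zephyr-7b-gemma-v0.1` is an assumption based on Hugging Face's naming for the Zephyr series; verify it before use.

```python
# Minimal sketch: chatting with Zephyr 7B Gemma via the transformers
# text-generation pipeline. Downloading the 7B weights requires a GPU
# with sufficient memory (or CPU with ample RAM and patience).

def build_chat(system_prompt: str, user_message: str) -> list[dict]:
    """Build an OpenAI-style message list; transformers chat pipelines
    apply the model's own chat template to this format internally."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    import torch
    from transformers import pipeline

    # Model id is an assumption, not confirmed by the page above.
    pipe = pipeline(
        "text-generation",
        model="HuggingFaceH4/zephyr-7b-gemma-v0.1",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    messages = build_chat("You are a helpful assistant.", "What is DPO?")
    out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
    # The pipeline returns the full chat, with the new assistant turn last.
    print(out[0]["generated_text"][-1]["content"])
```

Because the model lacks RLHF safety alignment, output filtering or a moderation layer is advisable in any user-facing deployment.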

Capabilities

Multimodal
Function Calling
Tool Use
JSON Mode

Specifications

Family: Zephyr
Parameters: 7B
Architecture: Decoder Only
Specialization: general