LLM Reference

Dolphin 2.6 Mixtral 8x7B

About

Dolphin 2.6 Mixtral 8x7B is a large language model fine-tuned from the Mixtral-8x7B base, known for strong coding ability and high compliance with user prompts. Although it was not tuned with Direct Preference Optimization, it performs well on coding tasks thanks to extensive training on coding datasets, including MagiCoder. Fine-tuning used a 16k context window (reduced from the base model's 32k) and was carried out with techniques such as qLoRA. The model is uncensored, which raises ethical concerns; deploying it without additional safeguards calls for caution. Quantized versions are available for different hardware budgets, and performance may vary at longer context lengths.
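
As an illustration of running a quantized build locally, the following is a minimal sketch that loads the model in 4-bit with Hugging Face transformers and bitsandbytes. The repo id and generation settings are assumptions; substitute the checkpoint you actually use.

```python
# Minimal sketch: load Dolphin 2.6 Mixtral 8x7B in 4-bit to fit smaller GPUs.
# Assumption: the Hugging Face repo id below; adjust to your checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "cognitivecomputations/dolphin-2.6-mixtral-8x7b"  # assumed repo id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit NF4 quantization
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/quality
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread the experts across available devices
)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```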

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Providers (3)

Provider                Input (per 1M)   Output (per 1M)   Type
deepinfra API           -                -                 Serverless
Lepton AI API           -                -                 Serverless
Fireworks AI Platform   -                -                 Provisioned
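
For hosted access through one of the serverless providers above, the sketch below queries the model through DeepInfra's OpenAI-compatible endpoint. The base URL, environment variable name, and model id are assumptions; verify them against the provider's catalog before use.

```python
# Minimal sketch: call Dolphin 2.6 Mixtral 8x7B via an OpenAI-compatible
# serverless endpoint (DeepInfra shown; Lepton AI works similarly).
# Assumptions: base_url, env var name, and model id below.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed endpoint
    api_key=os.environ["DEEPINFRA_API_KEY"],         # assumed env var name
)

response = client.chat.completions.create(
    model="cognitivecomputations/dolphin-2.6-mixtral-8x7b",  # assumed model id
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful coding assistant."},
        {"role": "user", "content": "Explain qLoRA in two sentences."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```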

Specifications

Family: Dolphin
Parameters: 8x7B
Architecture: Mixture of Experts
Specialization: general
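
Dolphin fine-tunes conventionally use the ChatML prompt format; the sketch below builds such a prompt by hand. Treat the format as an assumption and confirm it against the model card for the exact checkpoint you run.

```python
# Minimal sketch: hand-build a ChatML prompt, the format Dolphin fine-tunes
# conventionally use (an assumption here; verify on the model card).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are Dolphin, a helpful assistant.",
                    "Write a bubble sort in Python."))
```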