LLM Reference

Dolphin 2.5 Mixtral 8x7B

About

Dolphin 2.5 Mixtral 8x7B is a large language model aimed primarily at coding tasks, with proficiency across diverse programming languages including Kotlin. Built on the Mixtral-8x7b architecture, it was fine-tuned on datasets such as Dolphin-Coder and Magicoder using qLoRA and Axolotl. The base model supports a 32k context window; fine-tuning was performed at 16k. The model is uncensored, which lets it handle a wide range of prompts but also raises ethical concerns, so alignment measures should be added before any public deployment. It is distributed in several formats on platforms such as Hugging Face, including GGUF and GPTQ quantizations at various levels.
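
As a rough illustration of local use, the sketch below loads a community GGUF quantization with llama-cpp-python and prompts it in ChatML, the prompt format the Dolphin model cards document. The quant file name, system prompt, and sampling settings are assumptions for the example, not details taken from this page.

```python
# Minimal sketch: run a GGUF quant of Dolphin 2.5 Mixtral 8x7B locally.
# The model_path below is an assumed community quant file, not specified here.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.5-mixtral-8x7b.Q4_K_M.gguf",  # assumed local file
    n_ctx=16384,  # matches the 16k fine-tuning context window
)

# Dolphin models are prompted with the ChatML format.
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful coding assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a Kotlin function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```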

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (1)

Provider      Input (per 1M)   Output (per 1M)   Type
Together AI   $0.60            $0.60             Serverless
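
Together AI serves an OpenAI-compatible API, so a request could look like the sketch below; the exact model identifier is an assumption and should be verified against Together's model list. The trailing arithmetic just applies the listed $0.60-per-1M pricing.

```python
# Hedged sketch of calling the model via Together AI's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",  # placeholder
)

resp = client.chat.completions.create(
    model="cognitivecomputations/dolphin-2.5-mixtral-8x7b",  # assumed model id
    messages=[{"role": "user", "content": "Explain tail recursion in Kotlin."}],
    max_tokens=300,
)
print(resp.choices[0].message.content)

# At $0.60 per 1M tokens each way, 1,000 prompt tokens plus 300
# completion tokens cost roughly:
cost_usd = (1_000 + 300) / 1_000_000 * 0.60  # ≈ $0.00078
```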

Benchmark Scores (4)

Benchmark                                   Score   Version   Source
Google-Proof Q&A                            44.8    diamond   Open LLM Leaderboard
HellaSwag                                   89.0    10-shot   Open LLM Leaderboard
HumanEval                                   67.9    pass@1    Open LLM Leaderboard
Massive Multitask Language Understanding    71.2    5-shot    Open LLM Leaderboard
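
For context on the HumanEval row, pass@1 is an instance of the pass@k metric introduced with HumanEval (Chen et al., 2021): generate n samples per problem, count the c that pass the tests, and estimate the probability that at least one of k drawn samples passes. A minimal version of the standard unbiased estimator, shown here for reference rather than taken from this page:

```python
# Unbiased pass@k estimator (per problem; scores are averaged over problems).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples generated, c = samples that passed, k = draw budget."""
    if n - c < k:
        return 1.0  # too few failures for k draws to miss every success
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples per problem of which 4 pass, pass@1 for that problem:
print(pass_at_k(n=10, c=4, k=1))  # 0.4
```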

Specifications

Family             Dolphin
Released           2023-12-18
Parameters         8x7B
Architecture       Mixture of Experts
Knowledge cutoff   2023-12
Specialization     General
Training           Fine-tuning
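
The 8x7B figure reflects the Mixture of Experts design: each Mixtral FFN layer holds eight expert networks, and a learned router activates only the top two per token, so far fewer parameters run per token than the total suggests. A toy PyTorch sketch of that routing, using made-up dimensions rather than Mixtral's real ones:

```python
# Toy top-2 Mixture-of-Experts routing in the style of Mixtral.
# Dimensions are illustrative assumptions, not the model's real sizes.
import torch
import torch.nn.functional as F

n_experts, top_k, d_model, d_ff = 8, 2, 64, 256

experts = [
    torch.nn.Sequential(
        torch.nn.Linear(d_model, d_ff),
        torch.nn.SiLU(),
        torch.nn.Linear(d_ff, d_model),
    )
    for _ in range(n_experts)
]
router = torch.nn.Linear(d_model, n_experts)

def moe_layer(x: torch.Tensor) -> torch.Tensor:
    """x: (tokens, d_model); each token is routed to its top-2 experts."""
    logits = router(x)                         # (tokens, n_experts)
    weights, idx = logits.topk(top_k, dim=-1)  # choose 2 experts per token
    weights = F.softmax(weights, dim=-1)       # renormalize over the chosen 2
    out = torch.zeros_like(x)
    for t in range(x.size(0)):                 # naive loop; real kernels batch this
        for s in range(top_k):
            out[t] += weights[t, s] * experts[idx[t, s]](x[t])
    return out

print(moe_layer(torch.randn(4, d_model)).shape)  # torch.Size([4, 64])
```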

Created by

Uncensored AI models for open access

Founded: N/A
Website: N/A
