LLM Reference

Mixtral 8x7B

About

Mixtral 8x7B, developed by Mistral AI, uses a sparse Mixture of Experts (MoE) architecture: each layer contains eight feed-forward "experts" at roughly the seven-billion-parameter scale. Because the attention layers are shared rather than duplicated per expert, the model totals 46.7 billion parameters instead of the naive 8 × 7 = 56 billion. A router activates only two experts per token, so roughly 12.9 billion parameters are used for any given token; Mistral AI reports this yields up to 6x faster inference than Llama 2 70B. On many benchmarks the model surpasses Llama 2 70B and is competitive with GPT-3.5. It supports multiple languages, handles contexts up to 32,000 tokens, and is particularly strong at code generation. The weights are released under the permissive Apache 2.0 license, encouraging community use, and deploy readily with common optimization and serving tools; Mistral AI continues to improve the model through performance optimizations and fine-tuning.
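
To make the top-2 routing concrete, here is a minimal sketch of a sparse MoE feed-forward layer in PyTorch: a router scores all eight experts for each token, only the two highest-scoring experts run, and their outputs are combined with renormalized router weights. Dimensions and module names are illustrative assumptions, not Mistral AI's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Minimal top-2 mixture-of-experts feed-forward layer.

    Illustrative only: sizes and names are hypothetical, not
    Mistral AI's actual implementation.
    """
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (n_tokens, d_model)
        logits = self.router(x)                        # score every expert per token
        weights, idx = logits.topk(self.top_k, dim=-1) # keep only the top-2 experts
        weights = F.softmax(weights, dim=-1)           # renormalize over the chosen two
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                          # tokens routed to expert e
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue                               # expert unused for this batch
            out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out

x = torch.randn(4, 512)            # 4 tokens
print(Top2MoELayer()(x).shape)     # torch.Size([4, 512])
```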

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, JSON Mode, Code Execution
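
Many of the providers listed below serve Mixtral 8x7B behind OpenAI-compatible endpoints, so a capability such as JSON mode can be exercised with a standard chat-completions request. The sketch below assumes such an endpoint; the base URL and model identifier are placeholders that vary by provider, and not every provider supports every capability.

```python
from openai import OpenAI

# Placeholder endpoint and credentials: substitute your provider's values.
client = OpenAI(
    base_url="https://example-provider.invalid/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

# JSON mode: ask the model to reply with a JSON object only.
response = client.chat.completions.create(
    model="mixtral-8x7b-instruct",                   # model id differs per provider
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "List three EU capitals with their countries."},
    ],
    response_format={"type": "json_object"},         # JSON mode, where supported
)
print(response.choices[0].message.content)
```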

Providers (17)

Provider | Input (per 1M tokens) | Output (per 1M tokens) | Type
Databricks Foundation Model Serving | $0.50 | $1.00 | Serverless
NVIDIA NIM | $0.00 | $0.00 | Provisioned
GCP Vertex AI | | | Serverless
AWS Bedrock | $0.45 | $0.70 | Serverless
OctoAI API | $0.45 | $0.45 | Serverless
Fireworks AI | $0.50 | $0.50 | Serverless
Mistral AI La Plateforme | $0.45 | $0.70 | Serverless
Baseten API | | | Serverless
Lepton AI API | $0.30 | $0.30 | Serverless
Replicate API | $0.20 | $1.00 | Serverless
Azure OpenAI | $0.27 | $0.27 | Provisioned
Alibaba Cloud PAI-EAS | | | Serverless
Perplexity Labs | $0.60 | $0.60 | Serverless
IBM watsonx | $0.60 | $0.60 | Serverless
Scale AI GenAI Platform | | | Serverless
DeepInfra | $54.00 | $54.00 | Serverless
Bitdeer AI | $0.18 | $0.54 | Serverless
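
All serverless prices above are quoted per million tokens, so the cost of a call is linear in the input and output token counts. A minimal sketch, using the AWS Bedrock rates from the table:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in dollars for one call, given per-1M-token prices."""
    return input_tokens / 1e6 * input_per_m + output_tokens / 1e6 * output_per_m

# AWS Bedrock rates from the table above: $0.45 in / $0.70 out per 1M tokens.
cost = request_cost(input_tokens=30_000, output_tokens=2_000,
                    input_per_m=0.45, output_per_m=0.70)
print(f"${cost:.4f}")  # $0.0149  (30k * 0.45/1M + 2k * 0.70/1M)
```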

Benchmark Scores (4)

Benchmark | Score | Version | Source
Google-Proof Q&A (GPQA) | 54.8 | diamond | Open LLM Leaderboard
HellaSwag | 90.9 | 10-shot | Open LLM Leaderboard
HumanEval | 80.5 | pass@1 | Open LLM Leaderboard
Massive Multitask Language Understanding (MMLU) | 80.2 | 5-shot | Open LLM Leaderboard
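
The HumanEval score above is reported as pass@1, the probability that a single sampled completion passes a problem's unit tests. With n samples per problem, of which c pass, the standard unbiased estimator from the HumanEval paper (Chen et al., 2021) generalizes this to pass@k; a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that
    at least one of k samples drawn from n (with c correct) passes."""
    if n - c < k:
        return 1.0  # fewer than k failures exist, so some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem with 161 passing gives pass@1 = 0.805.
print(pass_at_k(n=200, c=161, k=1))  # 0.805
```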

Rankings