LLM Reference
Perplexity Labs

Mixtral 8x7B on Perplexity Labs

Mixtral · MistralAI

Serverless

Pricing

Type             Price (per 1M tokens)
Input tokens     $0.60
Output tokens    $0.60
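Given the flat per-million-token rates above, the cost of a request is a simple linear function of its token counts. A minimal sketch (the helper name and defaults are illustrative, not part of any official SDK):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = 0.60, output_price: float = 0.60) -> float:
    """Estimate request cost in USD, given per-1M-token prices.

    Defaults reflect the Mixtral 8x7B rates listed above
    ($0.60 per 1M tokens for both input and output).
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000


# Example: a prompt of 10,000 tokens with a 2,000-token completion
# costs (10_000 * 0.60 + 2_000 * 0.60) / 1_000_000 = $0.0072
print(f"${estimate_cost(10_000, 2_000):.4f}")
```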

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Mixtral 8x7B

Mixtral 8x7B, developed by Mistral AI, uses a sparse Mixture of Experts (MoE) architecture with eight expert networks per layer. Because the experts share the model's attention layers, the total parameter count is 46.7 billion rather than a full 8 × 7B. Only two experts are activated per token, so roughly 12.9 billion parameters are used per inference step, which Mistral AI reports yields about 6x faster inference than Llama 2 70B.

The model surpasses Llama 2 70B and matches or exceeds GPT-3.5 on many benchmarks. It supports multiple languages, handles context windows of up to 32,000 tokens, and performs strongly on code generation. It is released under the permissive Apache 2.0 license, and its weights are compatible with a range of optimization and deployment tools, with Mistral AI continuing to improve it through performance optimizations and fine-tuning.
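The top-2 routing described above can be sketched as follows: a learned gating layer scores all eight experts for each token, the two highest-scoring experts are selected, and their softmax weights are renormalized to combine the two expert outputs. This is a simplified illustration of the routing step only, not Mixtral's actual implementation:

```python
import math

def top2_route(gate_logits: list[float]) -> list[tuple[int, float]]:
    """Select the top-2 experts for one token from gating logits.

    Returns (expert_index, weight) pairs whose weights sum to 1.
    Illustrative sketch of MoE top-2 routing; the real model applies
    this per token, per layer, over 8 experts.
    """
    # Numerically stable softmax over the gate logits.
    m = max(gate_logits)
    exps = [math.exp(x - m) for x in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the two highest-probability experts and renormalize.
    top2 = sorted(range(len(probs)), key=lambda i: -probs[i])[:2]
    norm = sum(probs[i] for i in top2)
    return [(i, probs[i] / norm) for i in top2]


# Example: logits for 8 experts; experts 3 and 1 score highest.
routing = top2_route([0.1, 2.0, -1.0, 3.0, 0.0, 0.5, 1.0, -0.5])
print(routing)
```

The token's output is then the weighted sum of the two selected experts' feedforward outputs, which is why only a fraction of the total parameters is active per token.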

Get Started