LLM Reference
GCP Vertex AI

Mixtral 8x7B on GCP Vertex AI

Mixtral · MistralAI

Serverless

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Mixtral 8x7B

Mixtral 8x7B, developed by Mistral AI, uses a sparse Mixture of Experts (MoE) architecture: each layer contains eight experts of roughly seven billion parameters each, for 46.7 billion parameters in total (the experts share the non-expert layers, so the total is less than 8 × 7B). For every token, a gating network routes computation through only two of the eight experts, so only about 12.9 billion parameters are active per token; Mistral AI reports roughly 6x faster inference than Llama 2 70B. On many benchmarks the model matches or exceeds Llama 2 70B and is competitive with GPT-3.5. It supports multiple languages, handles context of up to 32,000 tokens for long-document understanding, and is strong at code generation. The model is released under the permissive Apache 2.0 license, its weights are straightforward to deploy with common optimization tooling, and Mistral AI continues to improve it through performance optimizations and fine-tuning.
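The top-2 routing described above can be sketched in a few lines. This is an illustrative toy, not Mistral's implementation: the function name, the scalar "expert outputs", and the example numbers are all invented for clarity. It shows the core idea that the gate scores every expert, only the two highest-scoring experts run, and their outputs are mixed with softmax weights renormalized over just those two scores.

```python
import math

def top2_route(gate_scores, expert_outputs):
    # Hypothetical sketch of top-2 MoE routing (not Mistral's code).
    # Select the indices of the two highest gate scores.
    top2 = sorted(range(len(gate_scores)),
                  key=gate_scores.__getitem__, reverse=True)[:2]
    # Softmax over only the two selected scores.
    exps = [math.exp(gate_scores[i]) for i in top2]
    z = sum(exps)
    # Weighted sum of the two chosen experts' outputs; the other
    # six experts are never evaluated, which is where the compute
    # savings of sparse MoE come from.
    return sum((e / z) * expert_outputs[i] for e, i in zip(exps, top2))

# Toy example: eight experts' (pretend scalar) outputs for one token,
# and the gate's score for each expert.
outputs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
scores = [0.1, 0.2, 0.1, 0.0, 3.0, 0.1, 3.0, 0.2]
mixed = top2_route(scores, outputs)  # only experts 4 and 6 contribute
```

In the real model the experts are feed-forward networks producing vectors, and routing happens independently at every layer, but the selection-and-mix logic is the same shape.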

Get Started

Model Specs

Released: 2023-12-11
Parameters: 8x7B
Context: 32K
Architecture: Mixture of Experts
Knowledge cutoff: 2023-12

Provider

GCP Vertex AI

Google Cloud Platform (GCP)

All models on GCP Vertex AI