Mixtral 8x22B v0.1 on Microsoft Foundry

Mixtral · MistralAI · Provisioned

Get Started with Mixtral 8x22B v0.1 on Microsoft Foundry

Microsoft Foundry offers access to Mixtral 8x22B v0.1 with a 64K context window. Microsoft Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI: it hosts and deploys LLMs from multiple providers, including OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others, and it integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring, and enterprise-grade governance with RBAC, compliance, and regional deployments. For a broader model catalog, including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.
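The snippet below is a minimal sketch of calling the model through the Azure AI Inference SDK (the `azure-ai-inference` Python package). The endpoint, the environment variable names, and the `Mixtral-8x22B-v0-1` model name are assumptions; substitute the values from your own Foundry deployment.

```python
# pip install azure-ai-inference
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Hypothetical endpoint and key; copy the real values from your
# Foundry deployment's overview page.
client = ChatCompletionsClient(
    endpoint=os.environ["FOUNDRY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["FOUNDRY_API_KEY"]),
)

response = client.complete(
    model="Mixtral-8x22B-v0-1",  # assumed deployment name; yours may differ
    messages=[
        SystemMessage(content="You are a concise technical assistant."),
        UserMessage(content="Summarize what a sparse Mixture of Experts model is."),
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].message.content)
```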

Pricing

Type            Price (per 1M tokens)
Input tokens    $2.00
Output tokens   $6.00
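As a quick sanity check on these rates, here is a back-of-the-envelope cost calculation; the token counts in the example are made up for illustration.

```python
# Listed Foundry rates for Mixtral 8x22B v0.1 (USD per 1M tokens).
INPUT_PER_M = 2.00
OUTPUT_PER_M = 6.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + (output_tokens / 1_000_000) * OUTPUT_PER_M

# Hypothetical request: 3,000 prompt tokens, 500 completion tokens.
print(f"${request_cost(3_000, 500):.4f}")  # $0.0060 + $0.0030 = $0.0090
```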

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
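Of these capability tags, function calling and tool use are the ones most often exercised through the chat API. Below is a hedged sketch of declaring a tool with the Azure AI Inference SDK; the `get_weather` schema and the deployment name are hypothetical, and since the base v0.1 model is not instruction-tuned, reliable tool use generally calls for the Instruct variant.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    ChatCompletionsToolDefinition,
    FunctionDefinition,
    UserMessage,
)
from azure.core.credentials import AzureKeyCredential

# Hypothetical tool schema; the model decides whether to call it.
weather_tool = ChatCompletionsToolDefinition(
    function=FunctionDefinition(
        name="get_weather",
        description="Look up the current weather for a city.",
        parameters={
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    )
)

client = ChatCompletionsClient(
    endpoint=os.environ["FOUNDRY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["FOUNDRY_API_KEY"]),
)

response = client.complete(
    model="Mixtral-8x22B-v0-1",  # assumed deployment name
    messages=[UserMessage(content="What's the weather in Paris?")],
    tools=[weather_tool],
)

# If the model chose to call the tool, the arguments arrive as JSON text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```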

About Mixtral 8x22B v0.1

Mixtral 8x22B v0.1 is a pretrained generative Sparse Mixture of Experts (MoE) model created by Mistral AI [1][2][4]. It uses an architecture in which different sub-models, termed "experts," handle different tokens, improving both efficiency and performance relative to comparably sized dense language models [2][10][12]. Although the name suggests 8 × 22B = 176 billion parameters, the experts share layers, giving roughly 141 billion total parameters with about 39 billion active per token, and the model supports a context length of 65,536 (64K) tokens [10][13]. It excels at text generation, completion, and question answering, outperforming models such as LLaMA 2 70B on various benchmarks [4][5][7]. As a base model, however, it has no built-in moderation and can generate inappropriate or harmful content without external filtering [2][4][10]. It also requires significant VRAM (approximately 260 GB in FP16 and 73 GB in INT4) [10][13], may struggle with complex contextual understanding, and lacks knowledge of events after its training cutoff. Instruction-tuned versions such as Mixtral-8x22B-Instruct-v0.1 address some of these limitations by improving instruction adherence [3][5][6].
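To make the routing idea concrete, here is a minimal, illustrative sketch of top-2 expert routing of the kind used in Mixtral-style MoE layers. All dimensions and weights below are made up, and the `tanh` expert is a stand-in for the real feed-forward blocks; actual implementations route per token inside every transformer layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (Mixtral 8x22B uses 8 experts with top-2 routing;
# the hidden size here is made up for illustration).
NUM_EXPERTS, TOP_K, HIDDEN = 8, 2, 16

# One tiny "expert" per slot; weights are random stand-ins.
expert_weights = [rng.standard_normal((HIDDEN, HIDDEN)) for _ in range(NUM_EXPERTS)]
router_weights = rng.standard_normal((HIDDEN, NUM_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-2 experts and mix their outputs."""
    out = np.zeros_like(x)
    logits = x @ router_weights                   # (tokens, experts)
    for t, token in enumerate(x):
        top = np.argsort(logits[t])[-TOP_K:]      # indices of the 2 best experts
        gates = np.exp(logits[t][top])
        gates /= gates.sum()                      # softmax over the selected experts
        for gate, e in zip(gates, top):
            out[t] += gate * np.tanh(token @ expert_weights[e])
    return out

tokens = rng.standard_normal((4, HIDDEN))         # 4 toy "tokens"
print(moe_layer(tokens).shape)                    # (4, 16)
```

Because only 2 of the 8 experts run for each token, per-token compute scales with the roughly 39B active parameters rather than the full parameter count.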

Model Specs

Released        2024-04-17
Parameters      8x22B
Context         64K
Architecture    Mixture of Experts
