LLM Reference
Microsoft Foundry

Llama 3.1 405B Instruct on Microsoft Foundry

Llama 3.1 · AI at Meta

Provisioned · Open Source

Get Started with Llama 3.1 405B Instruct on Microsoft Foundry

Microsoft Foundry offers access to Llama 3.1 405B Instruct with a 128K context window. Microsoft Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI. It functions as a multi-provider hosting and deployment platform for LLMs, supporting models from OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others. Foundry integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring, and enterprise-grade governance with RBAC, compliance, and regional deployments. For access to a broader model catalog, including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.
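To make the deployment flow concrete, here is a minimal Python sketch that assembles a request body for the model. It assumes the OpenAI-compatible chat completions schema used by the Azure AI model inference API; the endpoint URL and the deployment name `Llama-3.1-405B-Instruct` are placeholders you would replace with the values from your own Foundry deployment.

```python
import json

# Placeholder endpoint for an Azure AI / Foundry resource -- substitute your own.
ENDPOINT = "https://<your-resource>.services.ai.azure.com/models/chat/completions"

def build_chat_request(user_prompt,
                       system_prompt="You are a helpful assistant.",
                       max_tokens=512,
                       temperature=0.7):
    """Assemble the JSON body for a single-turn chat completion
    (OpenAI-compatible schema; deployment name below is an assumption)."""
    return {
        "model": "Llama-3.1-405B-Instruct",  # assumed deployment name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_request("Summarize the Llama 3.1 release in one sentence.")
print(json.dumps(payload, indent=2))
```

You would POST this body to the endpoint with your API key in the request headers; the 128K context window applies to the combined prompt and completion tokens.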

Pricing

Type            Price (per 1M tokens)
Input tokens    $5.33
Output tokens   $16.00

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
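The function-calling and tool-use capabilities listed above are exercised by attaching tool definitions to a chat request. A sketch follows, assuming the OpenAI-style `tools` schema commonly used with Llama 3.1; the `get_current_weather` function is hypothetical, purely for illustration.

```python
import json

# Hypothetical tool definition in the OpenAI-style "tools" schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",  # hypothetical tool name
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# Request body offering the tool to the model; "auto" lets the model
# decide whether to answer directly or emit a tool call.
request_body = {
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",
}
print(json.dumps(request_body, indent=2))
```

When the model chooses to call the tool, its response contains a structured tool-call object with the function name and JSON arguments, which your application executes before returning the result in a follow-up message.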

About Llama 3.1 405B Instruct

Llama 3.1 405B Instruct is Meta's advanced large language model released on July 23, 2024, featuring 405 billion parameters. It utilizes an optimized transformer architecture with supervised fine-tuning and reinforcement learning for enhanced instruction-following capabilities. The model supports multiple languages, was trained on 15 trillion tokens, and fine-tuned with 25 million synthetic examples. It excels in multilingual dialogue and text generation, making it ideal for assistant-like applications. Llama 3.1 incorporates robust safety measures and ethical considerations, outperforming many existing models on various industry benchmarks. AI engineers can access the model via its Hugging Face page for implementation in diverse NLP tasks.