LLM Reference
Microsoft Foundry

Llama 3.1 70B Instruct on Microsoft Foundry

Llama 3.1 · AI at Meta

Provisioned · Open Source

Get Started with Llama 3.1 70B Instruct on Microsoft Foundry

Microsoft Foundry offers access to Llama 3.1 70B Instruct with a 128K context window. Microsoft Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI: it is a multi-provider hosting and deployment platform for LLMs, supporting models from OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others, and it integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring, and enterprise-grade governance with RBAC, compliance, and regional deployments. For a broader model catalog, including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.
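A minimal sketch of calling a Foundry serverless deployment of the model over its OpenAI-compatible chat completions REST interface, using only the Python standard library. The endpoint URL, deployment name, and `api-key` header are placeholders/assumptions here; substitute the values shown for your own Foundry resource.

```python
import json
import os
import urllib.request

# Placeholder endpoint and key -- replace with your Foundry resource's
# values. The payload follows the OpenAI-compatible chat completions
# shape that Foundry serverless deployments expose.
ENDPOINT = os.environ.get(
    "FOUNDRY_ENDPOINT",
    "https://<your-resource>.services.ai.azure.com/models/chat/completions",
)
API_KEY = os.environ.get("FOUNDRY_API_KEY", "")


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat completions request body."""
    return {
        "model": "Llama-3.1-70B-Instruct",  # assumed deployment name
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }


def complete(prompt: str) -> str:
    """POST the request and return the assistant message text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__" and API_KEY:
    # Only issues a network call when credentials are configured.
    print(complete("Summarize Llama 3.1 70B Instruct in one sentence."))
```

In production you would typically use the `azure-ai-inference` SDK instead of raw HTTP, but the request shape above is the same either way.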

Pricing

Type          | Price (per 1M tokens)
Input tokens  | $2.68
Output tokens | $3.54
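At the listed rates, per-request cost is a straightforward linear function of token counts. A small helper (token counts in the example are illustrative, not from the source):

```python
# Listed Foundry rates for Llama 3.1 70B Instruct, per 1M tokens.
INPUT_PER_M = 2.68
OUTPUT_PER_M = 3.54


def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one request at the listed per-1M-token rates."""
    return (
        input_tokens / 1_000_000 * INPUT_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PER_M
    )


# e.g. a long-context request: 100K prompt tokens, 2K completion tokens
print(round(cost_usd(100_000, 2_000), 5))  # → 0.27508
```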

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Llama 3.1 70B Instruct

The Llama 3.1 70B Instruct model is a cutting-edge large language model with 70 billion parameters, designed for instruction-following tasks. It features multilingual capabilities, supporting languages such as English, German, and French. Fine-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), it excels at understanding and responding to user instructions. The model handles a context length of up to 128K tokens, making it suitable for complex dialogue systems and applications requiring detailed responses. It outperforms many existing open-source and proprietary models on various industry benchmarks, making it well suited to conversational AI, content generation, and data synthesis tasks. For more details, visit the Hugging Face page [1].
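Before sending a long prompt, it can be useful to check that it fits within the 128K-token window together with the output budget. The sketch below uses a crude 4-characters-per-token heuristic (an assumption, not the model's tokenizer); use the actual Llama tokenizer for exact counts.

```python
# 128K-token context window, shared between prompt and completion.
CONTEXT_WINDOW = 128_000


def fits_context(prompt: str, max_output_tokens: int) -> bool:
    """Rough check that prompt + reserved output fit the context window.

    Uses a ~4 chars/token heuristic; a real tokenizer gives exact counts.
    """
    est_prompt_tokens = len(prompt) // 4 + 1
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW
```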