LLM Reference
Microsoft Foundry

Qwen2 72B on Microsoft Foundry

Qwen2 · Alibaba

Provisioned

Get Started with Qwen2 72B on Microsoft Foundry

Microsoft Foundry offers access to Qwen2 72B with a 128K context window. Microsoft Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI: it is a multi-provider hosting and deployment platform for LLMs, supporting models from OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others. Foundry integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring, and enterprise-grade governance with RBAC, compliance, and regional deployments. For a broader model catalog, including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.
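To make the getting-started flow concrete, here is a minimal sketch of calling a Foundry model deployment over its OpenAI-style chat-completions REST interface using only the Python standard library. The endpoint path, `api-version` value, `api-key` header, and the deployment name `Qwen2-72B` are assumptions here; substitute the values from your own Foundry resource.

```python
import json
import urllib.request

API_VERSION = "2024-05-01-preview"  # assumption; check your resource's current API version


def build_chat_request(prompt, model="Qwen2-72B", max_tokens=256):
    """Build the JSON body for a chat-completions call.

    The model name is the deployment name you chose in Foundry
    ("Qwen2-72B" is a placeholder).
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }


def complete(endpoint, api_key, prompt):
    """POST the request to the inference endpoint and return the reply text.

    `endpoint` is the base models URL of your Foundry resource, e.g.
    https://<resource>.services.ai.azure.com/models (placeholder).
    """
    req = urllib.request.Request(
        f"{endpoint}/chat/completions?api-version={API_VERSION}",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

In practice you would more likely use the `azure-ai-inference` SDK, which wraps the same endpoint; the raw request above just shows what is on the wire.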

Pricing

Type           Price (per 1M tokens)
Input tokens   $1.00
Output tokens  $2.00
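The per-token rates above translate into request costs as follows; a small helper makes the arithmetic explicit (rates hard-coded from the table above):

```python
def estimate_cost(input_tokens, output_tokens,
                  input_rate=1.00, output_rate=2.00):
    """Estimate a request's USD cost from per-1M-token rates.

    Default rates match the pricing table: $1.00/1M input, $2.00/1M output.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000


# A 10K-token prompt with a 2K-token completion:
cost = estimate_cost(10_000, 2_000)  # 0.01 + 0.004 = $0.014
```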

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Qwen2 72B

Qwen2-72B is a large language model developed by Alibaba's Qwen team, with 72 billion parameters built on the Transformer architecture. It incorporates enhancements such as SwiGLU activation, attention QKV bias, and grouped-query attention to improve efficiency and precision. The model performs strongly across diverse benchmarks, excelling in language understanding, generation, coding, mathematics, and multilingual tasks, often surpassing other open-source models and rivaling proprietary alternatives. It supports context lengths of up to 128,000 tokens and covers roughly 30 languages. Note, however, that the base model is not intended for direct text generation; post-training (such as supervised fine-tuning) is recommended before using it in specific applications.

Model Specs

Released      2024-06-05
Parameters    72.71B
Context       128K
Architecture  Decoder Only

Related Models on Microsoft Foundry