LLM Reference
Microsoft Foundry

Qwen2 72B on Microsoft Foundry

Qwen2 · Alibaba

Provisioned

Compare Qwen2 72B Across Providers

Provider             Input (per 1M)   Output (per 1M)
Fireworks AI         $0.90            $0.90
DeepInfra            $0.45            $0.65
Together AI          $0.90            $0.90
Microsoft Foundry    $1.00            $2.00
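At per-1M-token rates, the cost of one request is input_tokens × input_price plus output_tokens × output_price, divided by one million. A minimal sketch using the prices in the table above (the 4,000-input / 1,000-output token workload is an assumption chosen for illustration):

```python
def request_cost(input_tokens, output_tokens, input_price, output_price):
    """USD cost of one request, given prices in dollars per 1M tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# (input, output) prices per 1M tokens, from the comparison table above
providers = {
    "Fireworks AI":      (0.90, 0.90),
    "DeepInfra":         (0.45, 0.65),
    "Together AI":       (0.90, 0.90),
    "Microsoft Foundry": (1.00, 2.00),
}

# Hypothetical workload: 4,000 input tokens and 1,000 output tokens per request
for name, (p_in, p_out) in providers.items():
    print(f"{name}: ${request_cost(4_000, 1_000, p_in, p_out):.5f}")
```

For this workload, DeepInfra comes out cheapest ($0.00245 per request) and Microsoft Foundry most expensive ($0.00600), since Foundry charges twice as much for output tokens as for input tokens.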

Pricing

Type             Price (per 1M)
Input tokens     $1.00
Output tokens    $2.00

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Qwen2 72B

Qwen2-72B is a large language model developed by Alibaba's Qwen team, with roughly 72 billion parameters built on the Transformer architecture. It incorporates enhancements such as SwiGLU activation, attention QKV bias, and grouped-query attention to improve efficiency and accuracy. The model performs strongly across diverse benchmarks, including language understanding, generation, coding, mathematics, and multilingual tasks, often surpassing other open-source models and rivaling proprietary alternatives. It supports a context window of up to 128,000 tokens and is proficient in around 30 languages. Note that the base model is not intended for direct text generation; post-training (such as instruction tuning) is recommended before use in specific applications.

Model Specs

Released:     2024-06-05
Parameters:   72.71B
Context:      128K
Architecture: Decoder-only
