LLM Reference

Qwen2 72B

About

Qwen2-72B is a large language model developed by Alibaba's Qwen team, with roughly 72 billion parameters built on the Transformer architecture. It incorporates SwiGLU activation, attention QKV bias, and grouped query attention to improve efficiency and accuracy. The model performs strongly across diverse benchmarks, covering language understanding, generation, coding, mathematics, and multilingual tasks, often surpassing other open-source models and rivaling proprietary alternatives. It supports context lengths of up to 128,000 tokens and is proficient in around 30 languages. Note that this is a base (pretrained) model not intended for direct text generation; post-training (e.g., supervised fine-tuning) is recommended before using it in specific applications.

Capabilities

Multimodal, Function Calling, Tool Use, JSON Mode
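As a sketch of how the function-calling and JSON-mode capabilities are typically exercised, the snippet below builds a chat-completion request payload for an OpenAI-compatible endpoint (the serverless providers listed on this page commonly expose one). The model ID string and the `get_weather` tool schema are illustrative assumptions, not part of this page.

```python
import json

# Illustrative request payload for an OpenAI-compatible /chat/completions
# endpoint; the model ID and the tool schema are assumptions for this sketch.
payload = {
    "model": "Qwen/Qwen2-72B",  # assumed provider-side model identifier
    "messages": [
        {"role": "user", "content": "What is the weather in Paris? Reply in JSON."}
    ],
    # JSON mode: constrain the model to emit a valid JSON object.
    "response_format": {"type": "json_object"},
    # Function calling / tool use: declare a tool the model may choose to invoke.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)  # serialized body, ready to POST to the provider
```

The provider parses `tools` to decide whether to return a normal message or a structured tool call, while `response_format` forces plain replies to be valid JSON.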

Providers (4)

Provider                Input (per 1M)   Output (per 1M)   Type
Fireworks AI Platform   $0.90            $0.90             Serverless
deepinfra API           n/a              n/a               Serverless
Together AI API         $0.90            $0.90             Serverless
Azure OpenAI            n/a              n/a               Provisioned
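For the serverless providers with listed prices, cost scales linearly with token counts. A minimal sketch of the arithmetic, assuming the $0.90-per-million input and output rates shown in the table (the function name is illustrative):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_rate: float = 0.90, output_rate: float = 0.90) -> float:
    """Cost in USD for one request, given per-1M-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 10,000-token prompt with a 2,000-token reply at $0.90 / $0.90:
cost = request_cost_usd(10_000, 2_000)  # ~0.0108 USD
```

Provisioned offerings such as the Azure entry are billed by reserved capacity rather than per token, so this per-token arithmetic does not apply to them.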

Specifications

Family: Qwen2
Released: 2024-06-05
Parameters: 72.71B
Context: 128K
Architecture: Decoder Only
Specialization: general
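Given the 128K context listed above, a common pre-flight check is to verify that the prompt plus the requested completion fits in the window. A minimal sketch, assuming a 128,000-token window and a prompt length already counted by a tokenizer (the helper name is illustrative):

```python
CONTEXT_WINDOW = 128_000  # from the Context spec above

def fits_in_context(prompt_tokens: int, max_new_tokens: int) -> bool:
    # True when the prompt plus the requested completion fits the window.
    return prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

# e.g. a 120,000-token prompt leaves room for at most 8,000 new tokens:
ok = fits_in_context(120_000, 8_000)       # True
too_long = fits_in_context(120_000, 8_001) # False
```

Providers typically reject over-budget requests, so checking client-side avoids a wasted round trip.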