Get Started with CodeLlama 34B on Microsoft Foundry
Microsoft Foundry offers access to CodeLlama 34B with a 100K-token context window. Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI: it hosts and deploys LLMs from multiple providers, including OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, and NVIDIA, and it integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include:

- A multi-provider model catalog
- Model Router for intelligent prompt routing
- Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring
- Enterprise-grade governance with RBAC, compliance, and regional deployments

For the broader model catalog (including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron), Foundry is the recommended platform over Azure OpenAI.
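As a rough sketch of what a request to a Foundry-hosted CodeLlama 34B deployment could look like, the snippet below builds a chat-completions payload. The endpoint URL, deployment name, and exact request schema here are assumptions (Foundry model deployments generally expose an OpenAI-compatible chat-completions REST API); check your own deployment's connection details in the Foundry portal.

```python
import json

# Hypothetical values: replace with your Foundry deployment's endpoint and key.
ENDPOINT = "https://<your-resource>.services.ai.azure.com/models/chat/completions"
API_KEY = "<your-api-key>"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completions payload (assumed schema)."""
    return {
        "model": "CodeLlama-34b",  # assumed deployment name
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits deterministic code generation
    }

payload = build_request("Write a Python function that reverses a string.")
body = json.dumps(payload)
# To send: POST `body` to ENDPOINT with headers
# {"api-key": API_KEY, "Content-Type": "application/json"}.
```

The actual HTTP call is left as a comment so the sketch stays runnable offline; any HTTP client (or the Azure AI Inference SDK) can submit the payload.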
Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $1.54 |
| Output tokens | $1.77 |
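Using the rates above, per-request cost is simple arithmetic. A quick sketch (the token counts in the example are illustrative, not from the source):

```python
# Per-1M-token rates from the pricing table above.
INPUT_PRICE_PER_M = 1.54
OUTPUT_PRICE_PER_M = 1.77

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token completion.
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # roughly $0.003965
```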
Capabilities
About CodeLlama 34B
CodeLlama 34B is a powerful generative text model developed by Meta, specifically tailored for code synthesis and understanding. With 34 billion parameters, it excels in code completion and instruction following, particularly for Python programming. (Fill-in-the-middle infilling is supported by the smaller 7B and 13B Code Llama variants, not the 34B model.) The model uses an auto-regressive transformer architecture and was trained on a diverse dataset of programming languages, making it versatile across coding tasks. Designed for both commercial and research applications, CodeLlama 34B offers AI engineers a robust tool for integrating advanced code generation capabilities into their projects. More details can be found on the model's Hugging Face page.