LLM Reference
Microsoft Foundry

CodeLlama 34B Python on Microsoft Foundry

Code Llama · AI at Meta

Provisioned · Open Source

Get Started with CodeLlama 34B Python on Microsoft Foundry

Microsoft Foundry offers access to CodeLlama 34B Python with a 100K context window. Microsoft Foundry is a unified enterprise AI platform that expands well beyond Azure OpenAI: it functions as a multi-provider hosting and deployment platform for LLMs, supporting models from OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, NVIDIA, and others, and it integrates agent services, evaluation, observability, and governance into a single Azure control plane.

Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring, and enterprise-grade governance with RBAC, compliance, and regional deployments. For a broader model catalog, including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron, Foundry is the recommended platform over Azure OpenAI.
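Foundry model deployments are typically called through an OpenAI-style chat-completions request body. As a minimal sketch, the helper below builds such a body; the deployment name `codellama-34b-python` is a placeholder (use whatever name you gave the deployment in your Foundry project), and the system prompt is illustrative.

```python
import json


def build_chat_payload(prompt: str,
                       deployment: str = "codellama-34b-python",
                       max_tokens: int = 512,
                       temperature: float = 0.2) -> str:
    """Build an OpenAI-style chat-completions request body as JSON.

    The deployment name is a placeholder -- substitute the name you
    chose when deploying the model in your Foundry project.
    """
    body = {
        "model": deployment,
        "messages": [
            {"role": "system", "content": "You are a Python coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(body)


payload = build_chat_payload("Write a function that reverses a string.")
```

You would then POST this body to your deployment's chat-completions endpoint, passing your Foundry API key in the request headers.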

Pricing

Type            Price (per 1M tokens)
Input tokens    $1.54
Output tokens   $1.77
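The per-1M-token rates above translate directly into a per-request cost estimate. A minimal sketch, with the table's rates as defaults:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 1.54, output_rate: float = 1.77) -> float:
    """Estimate request cost in USD from per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate


# e.g. a 2,000-token prompt producing a 500-token completion:
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.003965
```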

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About CodeLlama 34B Python

CodeLlama 34B Python is a specialized code generation model released by Meta on August 24, 2023. With 34 billion parameters, it excels at Python-specific tasks like code completion, infilling, and instruction following, giving AI engineers a powerful tool for enhancing coding workflows and productivity. Its architecture is optimized for understanding and generating complex code structures, making it particularly useful for software development tasks. The model is available in the Hugging Face Transformers format, facilitating easy integration into existing projects.
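As a base (non-instruct) model, CodeLlama 34B Python is prompted with raw source code and continues from where the code leaves off. A minimal sketch of assembling a completion prompt (the `fibonacci` example is hypothetical):

```python
def completion_prompt(signature: str, docstring: str) -> str:
    """Assemble a raw-code prompt for a base code model.

    The model is given a function signature and docstring and
    completes the body; generation is typically cut off at a stop
    sequence such as "\ndef " to keep only one function.
    """
    return f'{signature}\n    """{docstring}"""\n'


prompt = completion_prompt("def fibonacci(n: int) -> int:",
                           "Return the n-th Fibonacci number.")
print(prompt)
```

The resulting string is sent as the raw prompt; no chat template is applied for base-model completion.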

Model Specs

Released          2023-08-24
Parameters        34B
Context           100K
Architecture      Decoder Only
Knowledge cutoff  2024-03