Get Started with CodeLlama 13B Python on Microsoft Foundry
Microsoft Foundry offers access to CodeLlama 13B Python with a 100K context window. Foundry is a unified enterprise AI platform that extends well beyond Azure OpenAI: it hosts and deploys LLMs from multiple providers, including OpenAI, Anthropic, DeepSeek, xAI, Meta, Mistral, and NVIDIA, and it integrates agent services, evaluation, observability, and governance into a single Azure control plane. Key capabilities include a multi-provider model catalog, Model Router for intelligent prompt routing, Foundry Agent Service for building and deploying AI agents with built-in tracing and monitoring, and enterprise-grade governance with RBAC, compliance, and regional deployments. For a broader model catalog (including Claude, DeepSeek, Grok, Llama, Mistral, and NVIDIA Nemotron), Foundry is the recommended platform over Azure OpenAI.
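As a sketch of how a Foundry deployment of this model might be invoked, the snippet below assembles an OpenAI-style chat-completions request. The endpoint URL, deployment name, API key, and request route are placeholders and assumptions, not values from this page; substitute the ones shown in your Foundry project.

```python
import json
import os
import urllib.request

# Placeholder configuration -- replace with your Foundry project's values.
ENDPOINT = os.environ.get("FOUNDRY_ENDPOINT", "https://<resource>.services.ai.azure.com")
DEPLOYMENT = os.environ.get("FOUNDRY_DEPLOYMENT", "codellama-13b-python")
API_KEY = os.environ.get("FOUNDRY_API_KEY", "<key>")


def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble an OpenAI-style chat-completions payload for the deployment."""
    return {
        "model": DEPLOYMENT,
        "messages": [
            {"role": "system", "content": "You are a helpful Python coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature keeps generated code more deterministic
    }


def send(payload: dict) -> dict:
    """POST the payload to the (assumed) chat-completions route and parse JSON."""
    req = urllib.request.Request(
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In practice you would call `send(build_chat_request("..."))`; the Azure AI Inference SDK offers a higher-level client if you prefer not to construct requests by hand.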
Pricing
| Type | Price (per 1M tokens) |
|---|---|
| Input tokens | $0.81 |
| Output tokens | $0.94 |
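At these rates, a back-of-the-envelope cost estimate is simple arithmetic; this small helper just applies the per-million-token prices from the table above:

```python
# Per-million-token prices from the pricing table above (USD).
INPUT_PRICE_PER_M = 0.81
OUTPUT_PRICE_PER_M = 0.94


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from raw token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000


# Example: 500,000 input tokens and 100,000 output tokens
# cost 0.405 + 0.094 = $0.499
cost = estimate_cost(500_000, 100_000)
```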
About CodeLlama 13B Python
CodeLlama 13B Python is a specialized variant of the CodeLlama family, with 13 billion parameters optimized for Python programming tasks. The model excels at code synthesis, completion, and infilling, and it also supports instruction following and chat-based interaction. Built on a transformer architecture and trained on a diverse dataset, it offers a robust understanding of programming concepts and syntax. For AI engineers and developers, it is a practical tool for integrating AI into coding workflows, and it is particularly valuable for Python-related applications.
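For the code-completion use case described above, a common pattern is to send the model a partial function (signature plus docstring) and let it synthesize the body. The prompt builder below is a minimal illustration of that pattern; the exact prompt format is an assumption for demonstration, not one prescribed by this page:

```python
def completion_prompt(signature: str, docstring: str) -> str:
    """Format a partial Python function as a completion prompt.

    The model is expected to continue from the open function body,
    generating the implementation that satisfies the docstring.
    """
    return f'{signature}\n    """{docstring}"""\n'


# Hypothetical example: ask the model to implement a Fibonacci helper.
prompt = completion_prompt(
    "def fibonacci(n: int) -> list[int]:",
    "Return the first n Fibonacci numbers.",
)
```

The resulting string would be sent as the user content of a chat or completion request, with the model's reply appended after the docstring to form the finished function.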