CodeLlama 70B Python
codellama-70b-python
About
CodeLlama 70B Python is a specialized AI model from Meta, designed for Python code synthesis and understanding. With 70 billion parameters, it excels at code completion, infilling, and instruction following. The model uses an optimized transformer architecture and was fine-tuned on sequences of up to 16,000 tokens, making it particularly effective for Python-centric development workflows. While it does not support long contexts of up to 100,000 tokens, it offers powerful capabilities for both commercial and research use in Python programming environments. More details can be found in the research paper "Code Llama: Open Foundation Models for Code".
CodeLlama 70B Python has a 16K-token context window.
CodeLlama 70B Python pricing starts at $0.65 per 1M input tokens and $2.75 per 1M output tokens (via Replicate).
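As a rough sketch of working within the 16K-token window, the check below estimates whether a prompt fits while leaving room for the completion. The ~4 characters-per-token ratio is a common heuristic, not the model's actual tokenizer; `max_new_tokens` is an illustrative parameter name.

```python
# Rough check that a prompt fits CodeLlama 70B Python's 16K-token context
# window, reserving room for the generated completion.
CONTEXT_WINDOW = 16_000
CHARS_PER_TOKEN = 4  # heuristic assumption, not the real tokenizer ratio


def fits_context(prompt: str, max_new_tokens: int = 512) -> bool:
    """Estimate whether prompt + completion fit in the context window."""
    est_prompt_tokens = len(prompt) / CHARS_PER_TOKEN
    return est_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW


print(fits_context("def add(a, b):\n    return a + b\n"))  # → True
```

For real workloads, replace the heuristic with a token count from the model's actual tokenizer before trusting the result.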
Providers (4)

| Provider | Input (per 1M) | Output (per 1M) | Type |
|---|---|---|---|
| Together AI | $0.90 | $0.90 | Serverless |
| Fireworks AI | $0.90 | $0.90 | Provisioned |
| Microsoft Foundry | $3.78 | $11.34 | Provisioned |
| Replicate API | $0.65 | $2.75 | Serverless |
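To make the per-1M-token rates above concrete, here is a minimal cost-estimation sketch. The rates come straight from the provider table; the function name and example token counts are illustrative.

```python
# Per-provider rates for CodeLlama 70B Python, in USD per 1M tokens,
# taken from the provider table: (input rate, output rate).
PROVIDER_RATES = {
    "Together AI": (0.90, 0.90),
    "Fireworks AI": (0.90, 0.90),
    "Microsoft Foundry": (3.78, 11.34),
    "Replicate API": (0.65, 2.75),
}


def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_rate, out_rate = PROVIDER_RATES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a prompt filling most of the 16K context window on Replicate.
print(round(estimate_cost("Replicate API", 14_000, 2_000), 4))  # → 0.0146
```

Note that output tokens dominate the bill on asymmetric pricing (Microsoft Foundry, Replicate), so capping completion length matters more there than on flat-rate providers.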
Specifications
Created by
Meta: large-scale open-source AI for social technologies.