WizardCoder Python 34B
About
WizardCoder Python 34B is a large language model (LLM) tailored for code generation and comprehension, with a primary focus on Python. Built on a Transformer architecture with 34 billion parameters, it was fine-tuned using the Evol-Instruct method to strengthen its instruction-following ability. The model generates accurate, context-aware code and supports code generation, completion, summarization, and translation between programming languages. It has achieved notable results on benchmarks such as HumanEval, outperforming certain versions of GPT-4 in specific tests. Running it well requires significant computational resources (at least 32 GB of RAM), and it is available at several quantization levels that trade accuracy against resource use.
Capabilities
Providers (2)
| Provider | Input (per 1M tokens) | Output (per 1M tokens) | Type |
|---|---|---|---|
| Together AI API | $0.80 | $0.80 | Serverless |
| Replicate API | — | — | Serverless |
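As a minimal sketch of how a serverless provider like Together AI is typically called, the snippet below builds a completion request for the model over an OpenAI-compatible HTTP endpoint. The endpoint URL, the model id (`WizardLM/WizardCoder-Python-34B-V1.0`), and the `TOGETHER_API_KEY` environment variable are assumptions for illustration; check the provider's documentation for the exact values.

```python
import json
import os
import urllib.request

# Assumed endpoint for Together AI's OpenAI-compatible completions API.
API_URL = "https://api.together.xyz/v1/completions"


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the JSON payload for a code-completion request."""
    return {
        "model": "WizardLM/WizardCoder-Python-34B-V1.0",  # assumed model id
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits deterministic code output
    }


payload = build_request("Write a Python function that reverses a string.")

# Only send the request when an API key is configured.
api_key = os.environ.get("TOGETHER_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["text"])
```

Because both listed providers are serverless, the same pattern applies with a different base URL and credential; no local GPU or the 32 GB of RAM needed for self-hosting is required.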