## Pricing
| Type | Price (per 1M tokens) |
|---|---|
| Input tokens | $0.80 |
| Output tokens | $0.80 |
## About WizardCoder Python 34B
WizardCoder Python 34B is a large language model (LLM) tailored for code generation and comprehension, with a primary focus on Python. Built on a 34-billion-parameter Transformer architecture, it was fine-tuned with the Evol-Instruct method to strengthen its instruction-following ability. The model excels at producing accurate, context-aware code and supports code generation, completion, summarization, and translation between programming languages. It has posted strong results on benchmarks such as HumanEval, outperforming certain versions of GPT-4 in specific tests. Despite these strengths, it demands significant computational resources (at least 32GB of RAM for optimal performance) and is distributed at several quantization levels that trade accuracy against resource needs.
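Because the model was instruction-tuned with Evol-Instruct, prompts are typically wrapped in an Alpaca-style instruction template before being sent to the model. A minimal sketch of that template is below; note this is an illustrative assumption, as the exact template expected by a given deployment or API may differ.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style template commonly
    used with WizardCoder models (assumed format; verify against the
    deployment you are calling)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:"
    )

# Example: ask the model for a small Python function.
prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The string returned by `build_prompt` would be passed as the raw prompt to the completion endpoint; the model's generated code follows the `### Response:` marker.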