
DeepSeek Coder V2 Lite on Fireworks AI

DeepSeek Coder V2 · DeepSeek

Deployment: Serverless · Provisioned

Pricing

Type            Price (per 1M tokens)
Input tokens    $0.50
Output tokens   $0.50
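Because input and output tokens are billed at the same rate, estimating the cost of a request is a single multiplication. A minimal sketch (the token counts in the example are illustrative):

```python
# Estimate request cost at $0.50 per 1M tokens for both input and
# output (rates from the pricing table above).
PRICE_PER_M = 0.50  # USD per 1M tokens, same for input and output

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request in USD."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M

# Example: a 2,000-token prompt producing a 500-token completion.
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.001250
```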

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About DeepSeek Coder V2 Lite

DeepSeek Coder V2 Lite is an open-source Mixture-of-Experts (MoE) language model tailored for efficiency and cost-effectiveness in coding tasks. It has 15.7B total parameters, of which only 2.4B are active per token, yet delivers performance on code-centric tasks comparable to GPT-4 Turbo. The model supports 338 programming languages and an extended context length of 128K tokens, enabling it to handle complex codebases and lengthy prompts. Its capabilities span code generation, completion, and understanding, as well as mathematical reasoning, making it versatile across diverse coding applications. Available on Hugging Face, Ollama, and other platforms, DeepSeek Coder V2 Lite offers an accessible option for developers and researchers, with performance that rivals or surpasses some closed-source models.

Get Started
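Fireworks AI exposes an OpenAI-compatible inference API, so a chat-completion request is an ordinary JSON POST. The sketch below builds such a request body; the model identifier shown is an assumption based on Fireworks' naming convention, so check the Fireworks model catalog for the exact id before use:

```python
import json

# Fireworks' OpenAI-compatible chat-completions endpoint.
FIREWORKS_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Assumed model id -- verify against the Fireworks model catalog.
MODEL_ID = "accounts/fireworks/models/deepseek-coder-v2-lite-instruct"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body for a chat-completion call."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_request("Write a Python function that reverses a string.")
print(json.dumps(body, indent=2))
# Send with e.g.:
#   requests.post(FIREWORKS_URL, json=body,
#                 headers={"Authorization": f"Bearer {API_KEY}"})
```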

Model Specs

Released        2024-06-17
Parameters      16B
Context         128K tokens
Architecture    Mixture of Experts