LLM Reference
OpenRouter

DeepSeek V4 Pro on OpenRouter

DeepSeek V4 · DeepSeek

Serverless · Open Source

Compare DeepSeek V4 Pro Across Providers

Provider | Input (per 1M) | Output (per 1M)
DeepSeek Platform | $1.74 | $3.48
Fireworks AI | $1.74 | $3.48
OpenRouter | $0.44 | $0.87
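To see what the per-1M rates above mean for a concrete workload, here is a minimal cost sketch (prices taken from the table; flat per-token billing assumed, with no prompt caching or discounts applied):

```python
# Per-1M-token prices (USD) from the provider comparison table above.
PROVIDERS = {
    "DeepSeek Platform": (1.74, 3.48),
    "Fireworks AI": (1.74, 3.48),
    "OpenRouter": (0.44, 0.87),
}

def cost(provider, input_tokens, output_tokens):
    """Estimated USD cost of one request, assuming flat per-token billing."""
    in_price, out_price = PROVIDERS[provider]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Example workload: 200k input tokens, 20k output tokens.
for name in PROVIDERS:
    print(f"{name}: ${cost(name, 200_000, 20_000):.4f}")
```

For that example workload, OpenRouter's listed rates come to about $0.11 versus roughly $0.42 at the list price.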

Pricing

Type | Price (per 1M)
Input tokens | $0.44
Output tokens | $0.87

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About DeepSeek V4 Pro

DeepSeek V4 Pro is DeepSeek's flagship Mixture-of-Experts language model, with 1.6T total parameters (49B activated) and a 1M-token context window. It features hybrid attention (CSA + HCA) that requires only 27% of DeepSeek-V3.2's inference FLOPs at 1M-token context, Manifold-Constrained Hyper-Connections (mHC), and the Muon optimizer for training stability. It achieves 93.5% on LiveCodeBench, 89.8% on IMOAnswerBench, and 90.1% on MMLU, and supports Non-Think, Think High, and Think Max reasoning modes. Pricing: $1.74 per 1M input tokens, $3.48 per 1M output tokens (cache hit: $0.145 per 1M input). MIT licensed. Pricing note: the DeepSeek API docs state that deepseek-v4-pro is currently offered at a 75% discount, extended until 2026/05/31 15:59 UTC.
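The 75% discount mentioned in the pricing note lines up with the lower per-1M rates shown on this page, which a quick arithmetic check confirms (a sanity check on the numbers above, not a statement from the source):

```python
list_input, list_output = 1.74, 3.48   # DeepSeek list prices per 1M tokens (USD)
discount = 0.75                        # 75% promo per the DeepSeek API note

print(list_input * (1 - discount))   # 0.435 -> listed here as $0.44 per 1M input
print(list_output * (1 - discount))  # 0.87  -> matches $0.87 per 1M output
```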

Get Started
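A minimal request sketch for OpenRouter's chat completions endpoint. The endpoint URL is OpenRouter's standard one, but the model slug `deepseek/deepseek-v4-pro` and the `reasoning` field used here are assumptions for illustration, not confirmed by this page; check the model's OpenRouter page for the exact identifier and parameters:

```python
import json

# Hypothetical model slug and reasoning-mode field; verify both against the
# OpenRouter model page before use.
payload = {
    "model": "deepseek/deepseek-v4-pro",
    "messages": [
        {"role": "user", "content": "Summarize the CAP theorem in two sentences."}
    ],
    # The model card lists Non-Think / Think High / Think Max modes; how they
    # map onto API parameters is an assumption here.
    "reasoning": {"effort": "high"},
}

body = json.dumps(payload)
# POST this body to https://openrouter.ai/api/v1/chat/completions with an
# Authorization: Bearer <OPENROUTER_API_KEY> header and
# Content-Type: application/json.
print(payload["model"])
```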

Model Specs

Released: 2026-04-24
Parameters: 1.6T
Context: 1M tokens
Architecture: Mixture of Experts

Related Models on OpenRouter