Using DeepSeek V4 Pro on OpenRouter
Implementation guide · DeepSeek V4 · DeepSeek
Quick Start
1. Create an OpenRouter account and generate an API key.
2. Use the OpenRouter SDK or REST API to call `deepseek/deepseek-v4-pro`; see the documentation for the request format.
Code Examples
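A minimal sketch of the REST call described in the Quick Start, using Python's standard library against OpenRouter's OpenAI-compatible chat completions endpoint. The endpoint URL, headers, and payload shape follow OpenRouter's documented format; the function names and prompt are illustrative.

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "deepseek/deepseek-v4-pro") -> dict:
    """Construct an OpenAI-compatible chat completion payload for OpenRouter."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_openrouter(prompt: str) -> str:
    """Send the request; expects OPENROUTER_API_KEY in the environment."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=data,
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The assistant's reply follows the OpenAI response schema.
    return body["choices"][0]["message"]["content"]
```

With a valid API key set, `call_openrouter("Summarize this repo")` returns the model's reply as a string; the same payload works with the official OpenRouter SDK.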
About OpenRouter
OpenRouter provides a unified API to 300+ Large Language Models from all major labs and emerging providers, with better pricing, improved uptime, and no subscription requirements. Requests can be routed across providers for cost optimization, with automatic failover for reliability.
Pricing on OpenRouter
| Type | Price (USD per 1M tokens) |
|---|---|
| Input tokens | $0.44 |
| Output tokens | $0.87 |
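At these rates, the cost of a request follows directly from its token counts. A small sketch; the function name and the rates-as-constants are illustrative, with prices taken from the table above:

```python
# Per-1M-token rates from the pricing table above (USD).
INPUT_PRICE_PER_1M = 0.44
OUTPUT_PRICE_PER_1M = 0.87

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_1M
            + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# e.g. a 20k-token prompt with a 2k-token completion:
cost = estimate_cost(20_000, 2_000)  # about $0.011
```

Cached or discounted input (see the pricing note below the model description) would lower the input term accordingly.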
About DeepSeek V4 Pro
DeepSeek V4 Pro is DeepSeek's flagship Mixture-of-Experts language model: 1.6T total parameters (49B activated) with a 1M-token context window. Its hybrid attention (CSA + HCA) requires only 27% of the inference FLOPs of DeepSeek-V3.2 at 1M-token context, and it uses Manifold-Constrained Hyper-Connections (mHC) and the Muon optimizer for training stability. Reported benchmarks: 93.5% on LiveCodeBench, 89.8% on IMOAnswerBench, and 90.1% on MMLU. The model supports Non-Think, Think High, and Think Max reasoning modes and is MIT licensed.

Pricing: $1.74/1M input tokens, $3.48/1M output tokens ($0.145/1M input on cache hit). Pricing note: the DeepSeek API docs state that deepseek-v4-pro is currently offered at a 75% discount, extended until 2026-05-31 15:59 UTC.
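OpenRouter requests accept a generic `reasoning` field (e.g. `{"effort": "high"}`). How that field maps onto DeepSeek's Non-Think / Think High / Think Max modes is not stated in this document, so the mapping below is an assumption for illustration only:

```python
# Hypothetical mapping from DeepSeek reasoning modes to OpenRouter's
# `reasoning.effort` request field. The mapping is an assumption,
# not documented behaviour.
MODE_TO_EFFORT = {
    "non-think": None,    # no reasoning requested
    "think-high": "high",
    "think-max": "high",  # OpenRouter's effort levels have no "max"
}

def build_payload(prompt: str, mode: str = "non-think") -> dict:
    """Chat completion payload, optionally requesting reasoning effort."""
    payload = {
        "model": "deepseek/deepseek-v4-pro",
        "messages": [{"role": "user", "content": prompt}],
    }
    effort = MODE_TO_EFFORT[mode]
    if effort is not None:
        payload["reasoning"] = {"effort": effort}
    return payload
```

For example, `build_payload("Prove it", "think-high")` adds `"reasoning": {"effort": "high"}` to the request, while the default leaves the field out entirely.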