Using DeepSeek V4 Flash on OpenRouter
Implementation guide · DeepSeek V4 · DeepSeek
Quick Start
- 1. Create an OpenRouter account and generate an API key.
- 2. Use the OpenRouter SDK or REST API to call deepseek/deepseek-v4-flash; see the documentation for the request format.
Code Examples
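A minimal sketch of the REST call from the quick start, using only the Python standard library. The endpoint and payload follow OpenRouter's standard chat-completions API; the OPENROUTER_API_KEY environment variable name is an assumption for this example.

```python
import json
import os
import urllib.request

# OpenRouter's chat-completions endpoint and the model slug from this page.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "deepseek/deepseek-v4-flash"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completion request for DeepSeek V4 Flash on OpenRouter."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY", "")
    req = build_request("Hello!", key)
    if key:  # only hit the network when a real key is configured
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works with any OpenAI-compatible client by pointing its base URL at https://openrouter.ai/api/v1.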
About OpenRouter
OpenRouter is a multi-provider LLM aggregator that offers a unified API to 300+ models from major labs and emerging providers, with competitive pricing, automatic failover for reliability, and no subscription requirement. Requests can be routed across providers for cost optimization and uptime.
Pricing on OpenRouter
| Type | Price (USD per 1M tokens) |
|---|---|
| Input tokens | $0.14 |
| Input tokens (cache hit) | $0.028 |
| Output tokens | $0.28 |
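The table above makes per-request cost a simple sum; a small estimator, using the rates listed on this page:

```python
# Per-1M-token rates for DeepSeek V4 Flash on OpenRouter (from this page).
PRICE_PER_M_INPUT = 0.14
PRICE_PER_M_INPUT_CACHED = 0.028
PRICE_PER_M_OUTPUT = 0.28

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimated USD cost; cache-hit input tokens are billed at the lower rate."""
    billable_input = input_tokens - cached_input_tokens
    return (
        billable_input * PRICE_PER_M_INPUT
        + cached_input_tokens * PRICE_PER_M_INPUT_CACHED
        + output_tokens * PRICE_PER_M_OUTPUT
    ) / 1_000_000

# e.g. 100k input tokens (half served from cache) plus 10k output tokens
print(f"${estimate_cost(100_000, 10_000, cached_input_tokens=50_000):.4f}")  # → $0.0112
```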
About DeepSeek V4 Flash
DeepSeek V4 Flash is a 284B-parameter Mixture-of-Experts language model (13B parameters activated per token) with a 1M-token context window. It uses a hybrid attention architecture combining Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA) for efficient long-context inference, and supports both thinking and non-thinking modes. The legacy API aliases deepseek-chat and deepseek-reasoner map to the model's non-thinking and thinking modes respectively. Pricing is $0.14/1M input tokens and $0.28/1M output tokens ($0.028/1M for cache-hit input). MIT licensed.
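The alias mapping described above can be captured in a small lookup, useful when migrating code that still requests the legacy DeepSeek model names; the resolve_alias helper and its default fallback behavior are assumptions for illustration:

```python
# Legacy DeepSeek aliases both resolve to V4 Flash, differing only in mode
# (per this page: deepseek-chat = non-thinking, deepseek-reasoner = thinking).
ALIAS_MODES = {
    "deepseek-chat": ("deepseek/deepseek-v4-flash", "non-thinking"),
    "deepseek-reasoner": ("deepseek/deepseek-v4-flash", "thinking"),
}

def resolve_alias(model: str) -> tuple[str, str]:
    """Map a legacy alias to (OpenRouter slug, mode); pass through unknown names."""
    return ALIAS_MODES.get(model, (model, "non-thinking"))
```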