LLM Reference
OpenRouter

Using DeepSeek V4 Pro on OpenRouter

Implementation guide · DeepSeek V4 · DeepSeek

Serverless · Open Source

Quick Start

  1. Create an account at OpenRouter and generate an API key.
  2. Use the OpenRouter SDK or REST API to call deepseek/deepseek-v4-pro; see the documentation for the request format.
  3. You'll be billed $0.44/1M input tokens and $0.87/1M output tokens. See full pricing.

Code Examples

See OpenRouter documentation for integration details.
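The quick-start steps above can be sketched as a minimal, stdlib-only client. The endpoint, headers, and request body follow OpenRouter's OpenAI-compatible chat-completions API; the prompt text and the OPENROUTER_API_KEY environment-variable name are illustrative choices, not requirements.

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(prompt: str, model: str = "deepseek/deepseek-v4-pro") -> dict:
    # OpenAI-compatible chat-completions body; OpenRouter routes by model id.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, api_key: str) -> str:
    # POST the request and return the assistant's reply text.
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY")  # assumed variable name
    if key:
        print(chat("Summarize Mixture-of-Experts in one sentence.", key))
```

The same payload works with the official OpenRouter or OpenAI SDKs; only the transport differs.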

About OpenRouter

OpenRouter is a multi-provider LLM aggregator offering unified API access to 300+ models from all major labs and emerging providers. It routes requests across providers for cost optimization and reliability, with automatic failover, competitive pricing, improved uptime, and no subscription requirements.

Pricing on OpenRouter

Type           Price (per 1M)
Input tokens   $0.44
Output tokens  $0.87
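The per-request cost implied by the table is straightforward to compute; the token counts below are an arbitrary example, and the helper name is ours.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_per_m: float = 0.44,
                     output_per_m: float = 0.87) -> float:
    # Rates are USD per 1M tokens, taken from the pricing table above.
    return (input_tokens / 1_000_000 * input_per_m
            + output_tokens / 1_000_000 * output_per_m)

# 200K input + 5K output tokens:
# 0.2 * 0.44 + 0.005 * 0.87 = 0.088 + 0.00435 = $0.09235
cost = request_cost_usd(200_000, 5_000)
```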

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
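For the function-calling capability, a request declares tools in the OpenAI-style `tools` schema that OpenRouter forwards to supporting models. The get_weather tool below is hypothetical; only the payload shape is the point.

```python
# Hypothetical tool definition in the OpenAI-style schema used by OpenRouter.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Chat-completions body asking the model to decide whether to call the tool.
payload = {
    "model": "deepseek/deepseek-v4-pro",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",
}
```

If the model elects to call the tool, the response carries a `tool_calls` entry whose arguments your code executes before sending a follow-up message with the result.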

About DeepSeek V4 Pro

DeepSeek V4 Pro is the flagship 1.6T parameter (49B activated) Mixture-of-Experts language model with 1M-token context. Features hybrid attention (CSA+HCA) requiring only 27% of inference FLOPs vs DeepSeek-V3.2 at 1M context, Manifold-Constrained Hyper-Connections (mHC), and Muon Optimizer for training stability. Achieves 93.5% on LiveCodeBench, 89.8% on IMOAnswerBench, and 90.1% on MMLU. Supports Non-Think, Think High, and Think Max reasoning modes. Pricing: $1.74/1M input, $3.48/1M output (cache hit: $0.145/1M input). MIT licensed. Pricing note: DeepSeek API docs state that deepseek-v4-pro is currently offered at a 75% discount, extended until 2026/05/31 15:59 UTC.
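OpenRouter exposes a unified `reasoning` request field for models with controllable reasoning; how its effort levels map onto this model's named Non-Think, Think High, and Think Max modes is an assumption here, so treat the sketch as illustrative.

```python
def with_reasoning(payload: dict, effort: str) -> dict:
    # Returns a copy of an OpenRouter chat-completions body with the unified
    # `reasoning` field set. Mapping "high" to the model's "Think High" mode
    # (vs "Think Max") is an assumption, not documented behavior.
    out = dict(payload)
    out["reasoning"] = {"effort": effort}
    return out

base = {
    "model": "deepseek/deepseek-v4-pro",
    "messages": [{"role": "user", "content": "Prove sqrt(2) is irrational."}],
}
thinking = with_reasoning(base, "high")
```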

Model Specs

Released: 2026-04-24
Parameters: 1.6T
Context: 1M tokens
Architecture: Mixture of Experts

Provider

OpenRouter

OpenRouter, Inc.

New York, NY, USA