Benchmarks

| Benchmark | DeepSeek V3 | o3 |
| --- | --- | --- |
| HellaSwag | 95.7 | — |
| HumanEval | 85.5 | 96.7 |
| MMLU (Massive Multitask Language Understanding) | 88.5 | — |
| LiveCodeBench | 49.6 | 79.1 |
| Aider Polyglot | 48.4 | 81.3 |
| BigCodeBench | 50.0 | — |
| Chatbot Arena (Elo) | 1302.0 | 1412.0 |
| MMLU-Pro | 75.9 | — |
| SWE-bench Verified | — | 71.7 |
| GPQA (Google-Proof Q&A) | — | 87.7 |
| MMMU (Massive Multi-discipline Multimodal Understanding) | — | 82.9 |
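Chatbot Arena scores are Elo-style ratings, so the 110-point gap between the two models can be read as an expected head-to-head preference rate. A minimal sketch, assuming the standard Elo formula with a 400-point scale (function and variable names are illustrative):

```python
def elo_win_prob(r_a: float, r_b: float) -> float:
    """Expected score of a player rated r_a against one rated r_b
    under the standard Elo model (400-point logistic scale)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Chatbot Arena ratings from the table above
O3, DEEPSEEK_V3 = 1412.0, 1302.0
p = elo_win_prob(O3, DEEPSEEK_V3)
print(f"Expected o3 win rate vs DeepSeek V3: {p:.1%}")  # ~65.3%
```

Under this model, the 110-point gap corresponds to o3 being preferred in roughly two of every three head-to-head comparisons.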
DeepSeek V3 vs o3
Side-by-side comparison of specifications, capabilities, and pricing.
| | DeepSeek V3 | o3 |
| --- | --- | --- |
| Released | 2024-12-26 | 2025-03-31 |
| Context window | 64K | 128K |
| Parameters | 671B | — |
| Architecture | Mixture of Experts | Decoder-only |
| License | Open source | Unknown |
| Knowledge cutoff | 2024-04 | — |
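The parameter count above gives a rough floor on serving memory. A sketch, assuming one byte per parameter (FP8 quantization) and ignoring KV cache and activations (the constants are illustrative assumptions, not published serving figures):

```python
# Rough weight-memory floor for DeepSeek V3 (671B total parameters, per the table above).
# Assumption: FP8 storage, i.e. 1 byte per parameter; use 2 for BF16/FP16.
PARAMS = 671e9
BYTES_PER_PARAM = 1
weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weight memory: {weights_gb:.0f} GB")
```

Note that as a Mixture of Experts model, DeepSeek V3 activates only a subset of those parameters per token, but the full weights must still be resident to serve it.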
Capabilities

- Vision
- Multimodal
- Reasoning
- Function calling
- Tool use
- Structured outputs
- Code execution
Availability

Providers