DeepSeek V3 vs Claude Opus 4.5

Side-by-side comparison of specifications, capabilities, and pricing.

Specifications

| | DeepSeek V3 | Claude Opus 4.5 |
| --- | --- | --- |
| Released | 2024-12-26 | 2025-11-01 |
| Context window | 64K | 200K |
| Parameters | 671B | — |
| Architecture | Mixture of Experts | Decoder-only |
| License | Open source | Proprietary |
| Knowledge cutoff | 2024-04 | 2025-12 |

Capabilities

| | DeepSeek V3 | Claude Opus 4.5 |
| --- | --- | --- |
| Vision | | |
| Multimodal | | |
| Reasoning | | |
| Function calling | | |
| Tool use | | |
| Structured outputs | | |
| Code execution | | |

Availability

| | DeepSeek V3 | Claude Opus 4.5 |
| --- | --- | --- |
| Providers | | |

Benchmarks

| Benchmark | Score |
| --- | --- |
| HellaSwag | 95.7 |
| HumanEval | 85.5 |
| MMLU (Massive Multitask Language Understanding) | 88.5 |
| LiveCodeBench | 49.6 |
| Aider Polyglot | 48.4 |
| BigCodeBench | 50.0 |
| Chatbot Arena | 1302.0 |
| MMLU Pro | 75.9 / 88.9 |
| MMMU (Massive Multi-discipline Multimodal Understanding) | 80.7 |
| BFCL | 77.5 |
| SWE-bench Pro | 41.8 |