Mistral Large 2 vs Claude Opus 4.5

Side-by-side comparison of specifications, capabilities, and pricing.

Benchmarks

| Benchmark | Mistral Large 2 | Claude Opus 4.5 |
| --- | --- | --- |
| HellaSwag | 93.8 | — |
| HumanEval | 84.8 | — |
| MMLU | 84.0 | — |
| Chatbot Arena (Elo) | 1265.0 | — |
| BFCL | 38.4 | 77.5 |
| MMLU-Pro | 69.7 | 88.9 |
| MMMU | — | 80.7 |
| SWE-bench Pro | — | 41.8 |
| | Mistral Large 2 | Claude Opus 4.5 |
| --- | --- | --- |
| Released | 2024-07-24 | 2025-11-24 |
| Context window | 128K | 200K |
| Parameters | 123B | — |
| Architecture | Decoder-only | Decoder-only |
| License | Open weights (Mistral Research License) | Proprietary |
| Knowledge cutoff | 2025-07 | 2025-12 |
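The context windows in the table above differ meaningfully (128K vs 200K tokens). A minimal sketch of budgeting a prompt against each limit, assuming a crude ~4 characters per token heuristic — this is an illustration, not either provider's actual tokenizer:

```python
# Rough context-window budgeting for the two models in the spec table.
# ASSUMPTION: ~4 characters per token -- a crude heuristic, not a real tokenizer.

CONTEXT_WINDOWS = {
    "mistral-large-2": 128_000,   # 128K tokens (from the spec table)
    "claude-opus-4.5": 200_000,   # 200K tokens (from the spec table)
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(text: str, model: str, reserve_for_output: int = 4_096) -> bool:
    """True if estimated prompt tokens plus an output reserve fit the window."""
    return estimate_tokens(text) + reserve_for_output <= CONTEXT_WINDOWS[model]

prompt = "word " * 50_000  # ~250,000 characters -> ~62,500 estimated tokens
print(fits_context(prompt, "mistral-large-2"))   # True: fits in 128K
print(fits_context(prompt, "claude-opus-4.5"))   # True: fits in 200K
```

In practice, use each provider's own token-counting endpoint or tokenizer before relying on an estimate like this.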
Capabilities

| | Mistral Large 2 | Claude Opus 4.5 |
| --- | --- | --- |
| Vision | — | — |
| Multimodal | — | — |
| Reasoning | — | — |
| Function calling | — | — |
| Tool use | — | — |
| Structured outputs | — | — |
| Code execution | — | — |
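Function calling is exposed differently by the two providers. A sketch of how one tool definition maps onto each format — the field names follow the public Mistral (OpenAI-style) and Anthropic Messages API conventions, and the `get_weather` tool itself is a hypothetical example:

```python
# Illustrative only: the same hypothetical weather tool expressed in each
# provider's function-calling schema. No network calls; this just builds
# the JSON payloads that would go in a request's "tools" field.
import json

# JSON Schema for the tool's input, shared by both formats.
weather_params = {
    "type": "object",
    "properties": {"city": {"type": "string", "description": "City name"}},
    "required": ["city"],
}

# Mistral uses an OpenAI-style wrapper: {"type": "function", "function": {...}}.
mistral_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": weather_params,
    },
}

# Anthropic's Messages API takes a flat tool object with "input_schema".
anthropic_tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city",
    "input_schema": weather_params,
}

print(json.dumps(mistral_tool, indent=2))
print(json.dumps(anthropic_tool, indent=2))
```

The underlying JSON Schema is identical in both cases; only the wrapper differs, which makes translating tool definitions between the two APIs mostly mechanical.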
Availability

| | Mistral Large 2 | Claude Opus 4.5 |
| --- | --- | --- |
| Providers | — | — |