LLM Reference
MiniMax

MiniMax Models — Pricing & Benchmarks

2 models available

MiniMax hosts 2 AI models in this catalog. Per-token pricing is not yet listed for these MiniMax rows; compare context windows, benchmarks, and hosting options instead. LLM Reference lets you compare these models across all 63 providers without switching tabs.

Model                    Input (per 1M)   Output (per 1M)   Context
MiniMax M2.5 Highspeed   not listed       not listed        205K
MiniMax M2.7 Highspeed   not listed       not listed        205K