Why use MiniMax M2.5 Highspeed on Novita AI?
Novita AI offers MiniMax M2.5 Highspeed with pay-as-you-go pricing starting at $0.60 per 1M input tokens. Novita AI provides a GPU-backed inference API for image, video, and language model generation, with a broad catalog of open-source models.
Compare MiniMax M2.5 Highspeed across 2 providers to find the best fit for your use case.
Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $0.60 |
| Output tokens | $2.40 |
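The per-token rates above translate into request costs in a straightforward way. A minimal sketch (the token counts in the example are illustrative, not measurements):

```python
# Estimate per-request cost at Novita AI's listed rates for
# MiniMax M2.5 Highspeed: $0.60 / 1M input tokens, $2.40 / 1M output tokens.
INPUT_PRICE_PER_M = 0.60
OUTPUT_PRICE_PER_M = 2.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token reply.
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0108
```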
About MiniMax M2.5 Highspeed
MiniMax M2.5 Highspeed is MiniMax's inference-optimized variant of M2.5, released alongside the standard model in February 2026. It delivers the same intelligence and outputs as standard M2.5 but serves them through a specialized inference engine at lower latency. The model supports a 204,800-token context window, a 131,072-token maximum output, function calling, structured output, and reasoning. Its API model ID is MiniMax-M2.5-highspeed. It is designed for latency-sensitive interactive applications and automated agent pipelines.
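As a sketch of how the listed capabilities map onto a request, the payload below exercises function calling in the OpenAI-style chat schema. The schema, field names, and the `get_weather` tool are assumptions for illustration, not taken from Novita AI's documentation:

```python
import json

# Hypothetical OpenAI-style chat payload demonstrating function calling.
# The "tools" schema and the get_weather tool are illustrative assumptions.
payload = {
    "model": "MiniMax-M2.5-highspeed",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for this example
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "max_tokens": 1024,
}
print(json.dumps(payload, indent=2))
```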
FAQ
What does MiniMax M2.5 Highspeed cost on Novita AI?
On Novita AI, MiniMax M2.5 Highspeed costs $0.60 per 1M input tokens and $2.40 per 1M output tokens.
What is the context window for MiniMax M2.5 Highspeed on Novita AI?
MiniMax M2.5 Highspeed supports a 204,800 token context window on Novita AI.
How does Novita AI compare to other MiniMax M2.5 Highspeed providers?
MiniMax M2.5 Highspeed is available from 2 providers. The cheapest input pricing is $0.60/1M tokens from Novita AI.
What API model ID do I use for MiniMax M2.5 Highspeed on Novita AI?
Use the model ID minimax/minimax-m2.5-highspeed when calling Novita AI's API.
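A minimal sketch of a request using that model ID, built with the standard library only. The base URL and the NOVITA_API_KEY environment-variable name are assumptions; confirm both against Novita AI's API documentation:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; verify in Novita AI's docs.
BASE_URL = "https://api.novita.ai/v3/openai"

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for this model."""
    body = json.dumps({
        "model": "minimax/minimax-m2.5-highspeed",  # Novita AI model ID
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('NOVITA_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request("Say hello in one sentence.")
# urllib.request.urlopen(req) would send the request; omitted here.
print(req.full_url)
```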
Who created MiniMax M2.5 Highspeed?
MiniMax M2.5 Highspeed was created by MiniMax as part of the MiniMax M2 model family.
Is MiniMax M2.5 Highspeed open source?
MiniMax M2.5 Highspeed is not open source; it is a proprietary model.