LLM Reference
MiniMax

MiniMax M2.5 Highspeed on MiniMax

MiniMax M2 · MiniMax

Serverless

Why use MiniMax M2.5 Highspeed on MiniMax?

MiniMax offers MiniMax M2.5 Highspeed with competitive pricing. MiniMax is a multimodal foundation model and API platform for text, speech, video, image, and music generation with agent tools.

Compare MiniMax M2.5 Highspeed across two providers to find the best fit for your use case.

Compare MiniMax M2.5 Highspeed Across Providers

Provider     Input (per 1M)   Output (per 1M)
MiniMax
Novita AI    $0.60            $2.40

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About MiniMax M2.5 Highspeed

MiniMax M2.5 Highspeed is MiniMax's inference-optimized variant of M2.5, released alongside the standard model in February 2026. It delivers the same intelligence and outputs as standard M2.5 at lower latency through a specialized inference engine. The model supports a 204,800-token context window, a 131,072-token maximum output, function calling, structured output, and reasoning. API model ID: MiniMax-M2.5-highspeed. It is designed for latency-sensitive interactive applications and automated agent pipelines.
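The fixed context and output limits above can be enforced client-side before a request is sent. A minimal sketch, assuming you already have a token count for your prompt; the limit constants come from this page, while the helper function itself is illustrative and not part of any MiniMax SDK:

```python
# Published limits for MiniMax M2.5 Highspeed (from this page).
CONTEXT_WINDOW = 204_800   # total tokens: prompt + completion
MAX_OUTPUT = 131_072       # hard cap on completion tokens

def clamp_max_tokens(prompt_tokens: int, requested_output: int) -> int:
    """Return a max_tokens value that fits both published limits.

    Raises ValueError if the prompt alone overflows the context window.
    """
    if prompt_tokens >= CONTEXT_WINDOW:
        raise ValueError(
            f"prompt of {prompt_tokens} tokens exceeds the "
            f"{CONTEXT_WINDOW}-token context window"
        )
    room = CONTEXT_WINDOW - prompt_tokens
    return min(requested_output, room, MAX_OUTPUT)
```

Note that even with a short prompt, the completion can never exceed 131,072 tokens, so `clamp_max_tokens` applies both ceilings at once.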

FAQ

What is the context window for MiniMax M2.5 Highspeed on MiniMax?

MiniMax M2.5 Highspeed supports a 204,800-token context window on MiniMax.

How does MiniMax compare to other MiniMax M2.5 Highspeed providers?

MiniMax M2.5 Highspeed is available from two providers. The cheapest input pricing is $0.60 per 1M tokens, from Novita AI.
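Given Novita AI's listed rates ($0.60 input, $2.40 output per 1M tokens), estimating a per-request cost is simple arithmetic. A quick sketch; the token counts in the example are hypothetical:

```python
# Novita AI rates for MiniMax M2.5 Highspeed (from the table above).
INPUT_RATE = 0.60 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.40 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 200K-token prompt with a 10K-token reply costs about $0.144
cost = request_cost(200_000, 10_000)
```

At these rates, output tokens cost four times as much as input tokens, so long completions dominate the bill.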

What API model ID do I use for MiniMax M2.5 Highspeed on MiniMax?

Use the model ID MiniMax-M2.5-highspeed when calling MiniMax's API.
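A chat-completion request body would reference that ID. A minimal sketch assuming an OpenAI-style message format; the exact endpoint URL and authentication header should be taken from MiniMax's API documentation, and only the model ID below comes from this page:

```python
import json

def build_request_body(user_message: str) -> str:
    """Build a JSON chat-completion body for MiniMax M2.5 Highspeed.

    The OpenAI-style "messages" shape is an assumption; the model ID
    is the one documented for MiniMax's API.
    """
    payload = {
        "model": "MiniMax-M2.5-highspeed",  # API model ID from this page
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload)
```

The serialized body can then be POSTed to MiniMax's chat-completion endpoint with your API key.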

Who created MiniMax M2.5 Highspeed?

MiniMax M2.5 Highspeed was created by MiniMax as part of the MiniMax M2 model family.

Is MiniMax M2.5 Highspeed open source?

MiniMax M2.5 Highspeed is not open source; it is a proprietary model.

Get Started

Model Specs

Released: 2026-02-12
Context: 205K
Architecture: Decoder-only

Related Models on MiniMax