LLM Reference

GLM-5 vs Mistral Large 3 675B Instruct

GLM-5 (2026) and Mistral Large 3 675B Instruct (2025) are frontier reasoning models from Zhipu AI and Mistral AI. GLM-5 ships a 200K-token context window, while Mistral Large 3 675B Instruct ships a 128K-token window. On τ-bench, GLM-5 leads by 11.9 points. On pricing, Mistral Large 3 675B Instruct costs $0.5/1M input tokens versus GLM-5's $0.72/1M. This comparison covers specs, pricing, capabilities, benchmarks, provider availability, and production fit.

Mistral Large 3 675B Instruct is ~31% cheaper on input tokens at $0.5/1M (GLM-5's $0.72/1M is ~44% higher); pay for GLM-5 only for reasoning depth.

Specs

Spec | GLM-5 | Mistral Large 3 675B Instruct
Released | 2026-02-11 | 2025-12-01
Context window | 200K tokens | 128K tokens
Parameters | 744B total, 40B active | 675B
Architecture | Mixture of experts | Decoder only
License | MIT | Not listed
Knowledge cutoff | — | —

Pricing and availability

Pricing | GLM-5 | Mistral Large 3 675B Instruct
Input price | $0.72/1M tokens | $0.5/1M tokens
Output price | $2.3/1M tokens | $1.5/1M tokens
Providers | 5 | 3

Capabilities

Capability | GLM-5 | Mistral Large 3 675B Instruct
Vision | — | —
Multimodal | — | —
Reasoning | ✓ | —
Function calling | ✓ | —
Tool use | ✓ | —
Structured outputs | ✓ | ✓
Code execution | — | —

Benchmarks

Benchmark | GLM-5 | Mistral Large 3 675B Instruct
τ-bench | 82.1 | 70.2

Deep dive

On shared benchmark coverage, τ-bench has GLM-5 at 82.1 and Mistral Large 3 675B Instruct at 70.2, an 11.9-point lead for GLM-5 and the largest visible gap in this comparison. That gap matters most when τ-bench mirrors your workload. Treat isolated benchmark wins as directional, because provider routing, prompt style, and tool access can move real application results.

The capability footprint differs most on reasoning mode, function calling, and tool use, where GLM-5 has the clearer documented support. Both models offer structured outputs, so the practical split is not just feature count. Use those differences to decide whether your choice hinges on raw model quality, agentic coding support, multimodal ingestion, or predictable structured API behavior.

For cost, GLM-5 lists $0.72/1M input and $2.3/1M output tokens, while Mistral Large 3 675B Instruct lists $0.5/1M input and $1.5/1M output tokens on the cheapest tracked provider. A 70/30 input-output blend puts Mistral Large 3 675B Instruct lower by about $0.39 per million blended tokens. Availability is 5 providers versus 3, so concentration risk also matters.
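As a sanity check on that blended figure, here is a minimal sketch of the 70/30 arithmetic. Prices are the list prices from the tables above; the blend ratio is an assumption you should tune to your own traffic mix.

```python
# Blended per-1M-token cost at a 70% input / 30% output mix,
# using the list prices tracked above (USD per 1M tokens).
PRICES = {
    "GLM-5": {"input": 0.72, "output": 2.30},
    "Mistral Large 3 675B Instruct": {"input": 0.50, "output": 1.50},
}

def blended_cost(model: str, input_share: float = 0.7) -> float:
    p = PRICES[model]
    return input_share * p["input"] + (1 - input_share) * p["output"]

for name in PRICES:
    print(f"{name}: ${blended_cost(name):.3f} per 1M blended tokens")
# GLM-5: $1.194 vs Mistral Large 3: $0.800, a gap of about $0.39/1M,
# matching the figure quoted above.
```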

Choose GLM-5 when reasoning depth, larger context windows, and broader provider choice are central to the workload. Choose Mistral Large 3 675B Instruct when provider fit and lower input-token cost are more important. For production, rerun your own prompts through the exact provider, region, and tool stack you plan to ship.

FAQ

Which has a larger context window, GLM-5 or Mistral Large 3 675B Instruct?

GLM-5 supports 200K tokens, while Mistral Large 3 675B Instruct supports 128K tokens. That gap matters most for long documents, large codebases, retrieval-heavy agents, and conversations where earlier context must remain visible.
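For a rough feel of whether a given corpus fits either window, here is a minimal sketch using the common ~4-characters-per-token heuristic. The ratio, the reserved output budget, and the input file are assumptions; real counts depend on each model's tokenizer.

```python
# Rough context-fit check using a ~4 chars/token heuristic.
# Real token counts vary by tokenizer and language; treat as a ballpark.
WINDOWS = {"GLM-5": 200_000, "Mistral Large 3 675B Instruct": 128_000}

def approx_tokens(text: str, chars_per_token: float = 4.0) -> int:
    return int(len(text) / chars_per_token)

def fits(text: str, reserved_output: int = 4_000) -> dict:
    """Report which windows can hold the prompt plus a reserved output budget."""
    needed = approx_tokens(text) + reserved_output
    return {model: needed <= window for model, window in WINDOWS.items()}

doc = open("repo_dump.txt").read()  # hypothetical input file
print(fits(doc))
```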

Which is cheaper, GLM-5 or Mistral Large 3 675B Instruct?

Mistral Large 3 675B Instruct is cheaper on tracked token pricing. GLM-5 costs $0.72/1M input and $2.3/1M output tokens. Mistral Large 3 675B Instruct costs $0.5/1M input and $1.5/1M output tokens. Provider discounts or batch pricing can still change the final bill.

Is GLM-5 or Mistral Large 3 675B Instruct open source?

GLM-5 is listed under MIT. Mistral Large 3 675B Instruct's license is not clearly listed in the tracked data. License labels affect whether you can self-host, redistribute weights, or rely only on hosted APIs, so confirm the upstream license before deployment.

Which is better for reasoning mode, GLM-5 or Mistral Large 3 675B Instruct?

GLM-5 has the clearer documented reasoning mode signal in this comparison. If reasoning mode is mission-critical, validate it against the provider endpoint because model-level support and API-level exposure can differ.

Which is better for function calling, GLM-5 or Mistral Large 3 675B Instruct?

GLM-5 has the clearer documented function calling signal in this comparison. If function calling is mission-critical, validate it against the provider endpoint because model-level support and API-level exposure can differ.
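One way to run that validation is a quick smoke test against an OpenAI-compatible endpoint. The sketch below uses OpenRouter's chat-completions route; the model slug and the toy tool schema are assumptions, so substitute whatever your provider actually exposes.

```python
# Function-calling smoke test against an OpenAI-compatible endpoint.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "z-ai/glm-5",  # hypothetical slug; check the provider catalog
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # toy tool defined only for this test
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    },
    timeout=60,
)
resp.raise_for_status()
msg = resp.json()["choices"][0]["message"]
# A model whose endpoint exposes function calling should return a
# tool_calls entry here rather than a plain text answer.
print(msg.get("tool_calls") or msg.get("content"))
```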

Where can I run GLM-5 and Mistral Large 3 675B Instruct?

GLM-5 is available on Fireworks AI, OpenRouter, Together AI, GCP Vertex AI, and NVIDIA NIM. Mistral Large 3 675B Instruct is available on AWS Bedrock, NVIDIA NIM, and Mistral AI Studio. Provider coverage can affect latency, region availability, compliance posture, and fallback options.
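When fallback options matter, the simplest pattern is an ordered provider list with a retry loop. A minimal sketch, assuming each provider exposes an OpenAI-compatible endpoint; the base URLs and model slugs below are placeholders, not verified identifiers.

```python
# Ordered-fallback sketch: try providers in preference order and return
# the first successful completion. URLs and slugs are placeholders.
import requests

PROVIDERS = [
    {"base_url": "https://openrouter.ai/api/v1", "model": "z-ai/glm-5", "key": "..."},
    {"base_url": "https://api.together.xyz/v1", "model": "glm-5", "key": "..."},
]

def complete_with_fallback(prompt: str) -> str:
    last_error = None
    for p in PROVIDERS:
        try:
            r = requests.post(
                f"{p['base_url']}/chat/completions",
                headers={"Authorization": f"Bearer {p['key']}"},
                json={"model": p["model"],
                      "messages": [{"role": "user", "content": prompt}]},
                timeout=30,
            )
            r.raise_for_status()
            return r.json()["choices"][0]["message"]["content"]
        except requests.RequestException as exc:
            last_error = exc  # provider down or rate-limited; try the next
    raise RuntimeError(f"all providers failed, last error: {last_error}")
```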


Last reviewed: 2026-04-24. Data sourced from public model cards and provider documentation.