LLM Reference

Codestral 2501

codestral-2501

Researched today
Proprietary · Long context

Codestral 2501 is worth evaluating for long context when its provider route and context window match the workload.

Decision context: long-context task fit, 1 tracked provider route, and research dated 2026-05-16.

Use it for

  • Teams evaluating long context
  • Workloads that can use a 262K context window
  • Buyers comparing 1 tracked provider route

Do not use it for

  • Vision or document-understanding workloads
  • Strict JSON or tool-calling flows

Cheapest output

$0.900

Microsoft Foundry per 1M tokens

Provider routes

1

Tracked API hosts

Quality / dollar

Unknown

No task benchmark coverage yet

Freshness

2026-05-16


Top use-case fit

Long context

Included in the decision map based on capability and metadata signals.

Provider price ladder

Provider            Input / 1M   Output / 1M   Route
Microsoft Foundry   $0.300       $0.900        Serverless
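As a sanity check on the ladder above, per-request cost can be estimated from the listed rates. A minimal sketch: the $0.300 and $0.900 per-1M rates come from the table; the token counts and the helper name are hypothetical.

```python
# Estimate request cost from per-1M-token rates (Microsoft Foundry route).
INPUT_PER_1M = 0.300   # USD per 1M input tokens (from the price ladder)
OUTPUT_PER_1M = 0.900  # USD per 1M output tokens (from the price ladder)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed serverless rates."""
    return (input_tokens * INPUT_PER_1M + output_tokens * OUTPUT_PER_1M) / 1_000_000

# Hypothetical workload: a 200K-token prompt with a full 4,096-token completion.
print(round(request_cost(200_000, 4_096), 4))  # → 0.0637
```

Output cost dominates only when completions are long; with a 200K prompt, input tokens account for most of the spend.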

Benchmark peer bars for Long context

No task-mapped benchmark peers are available for this model yet.

Migration checks

No linked migration route is available for this model yet.

About

From partners/community. Code-generation model with a very long context of 262,144 tokens and a 4,096-token output cap. Tool calling: no. English only.

Codestral 2501 has a 262,144-token (256K) context window.

Codestral 2501 input tokens cost $0.30 per 1M; output tokens cost $0.90 per 1M.
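Because the context window is shared between prompt and completion, long-context workloads should budget for the 4,096-token output cap. A minimal sketch, assuming the constants from the specifications; the helper name is ours:

```python
# Check how large a prompt can be once room is reserved for the
# model's maximum completion length.
CONTEXT_WINDOW = 262_144  # total tokens (256K)
MAX_OUTPUT = 4_096        # maximum completion tokens

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Largest prompt that still leaves room for the reserved completion."""
    return CONTEXT_WINDOW - reserved_output

print(max_prompt_tokens())  # → 258048
```

In practice you would compare this budget against a tokenizer count of the prompt before sending the request.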

Capabilities

No model capability flags are currently sourced.

Rankings

Specifications

FamilyCodestral
Released2025-01-01
Context262K
Max output4,096

Created by

Mistral AI

Enterprise AI solutions for trust and transparency.

Paris, France
Founded 2023

Providers (1)