LLM Reference

Mistral Large 2 (2407) on Chutes AI

Mistral Large

Serverless

Pricing

Type            Price (per 1M tokens)
Input tokens    $0.50
Output tokens   $1.50
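
With the listed rates, per-request cost is simple arithmetic: tokens times rate, divided by one million. A minimal sketch (the 8,000/1,000 token counts are just an illustrative example):

```python
# Listed rates: $0.50 per 1M input tokens, $1.50 per 1M output tokens.
INPUT_PRICE_PER_M = 0.50
OUTPUT_PRICE_PER_M = 1.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost of one request in dollars at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. an 8,000-token prompt with a 1,000-token completion:
print(f"${request_cost(8_000, 1_000):.4f}")  # → $0.0055
```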

Capabilities

Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution
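
JSON mode and function calling are typically exercised through an OpenAI-compatible chat-completions request body. The sketch below builds such a payload; the model identifier and the `lookup_capital` tool are assumptions for illustration, not values confirmed by this page — check the Chutes AI docs for the exact model ID and endpoint:

```python
import json

# Hypothetical request body combining JSON mode with a tool definition.
# The model ID below is an assumption; substitute the one Chutes AI lists.
payload = {
    "model": "mistralai/Mistral-Large-2407",  # assumed identifier
    "messages": [
        {"role": "user", "content": "List three EU capitals as JSON."}
    ],
    "response_format": {"type": "json_object"},  # JSON mode
    "tools": [{
        "type": "function",
        "function": {
            "name": "lookup_capital",  # hypothetical example tool
            "description": "Look up the capital city of a country.",
            "parameters": {
                "type": "object",
                "properties": {"country": {"type": "string"}},
                "required": ["country"],
            },
        },
    }],
}

# The serialized body would be POSTed to the provider's
# /v1/chat/completions endpoint with an API key header.
print(json.dumps(payload, indent=2))
```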

About Mistral Large 2 (2407)

Flagship dense 123B-parameter Mistral model with a 128K-token context window. Strong results on complex reasoning, code generation, mathematics, and multilingual benchmarks, with native function-calling support.

Model Specs

Released:      2024-07-23
Parameters:    123B
Context:       128K
Architecture:  Decoder-only
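
A quick way to sanity-check whether a prompt fits the 128K-token window is a character-count heuristic (~4 characters per token for English text). This is a rough approximation, not the model's actual tokenizer; use a real tokenizer for exact counts:

```python
CONTEXT_TOKENS = 128_000  # listed context window

def fits_in_context(text: str, reserve_for_output: int = 4_000) -> bool:
    """Rough fit check using ~4 chars/token, leaving room for the reply.

    Approximation only; a real tokenizer gives exact counts.
    """
    estimated_tokens = len(text) / 4
    return estimated_tokens + reserve_for_output <= CONTEXT_TOKENS

print(fits_in_context("hello " * 1_000))  # short prompt → True
```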