LLM Reference

Mistral Large 2 (2407)

About

Flagship dense Mistral model (123B parameters) with a 128K context window. Strong performance in complex reasoning, code generation, mathematics, and multilingual tasks.

Capabilities

Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
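As a minimal sketch of how the function-calling capability is typically exercised, the snippet below builds an OpenAI-style request body declaring one tool. The tool name (`get_weather`), its schema, and the model id string are illustrative assumptions, not taken from any provider's documentation.

```python
import json

# Hypothetical tool definition in JSON Schema form (assumed, for illustration).
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request body as it would be sent to a chat-completions endpoint.
# "mistral-large-2407" is an assumed model id based on the page title.
request_body = {
    "model": "mistral-large-2407",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

print(json.dumps(request_body, indent=2))
```

With `tool_choice` set to `"auto"`, the model may respond with either plain text or a tool call containing arguments matching the declared schema.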

Providers (3)

Provider          | Input (per 1M) | Output (per 1M) | Type
Microsoft Foundry | $3.00          | $9.00           | Serverless
Chutes AI         | $0.50          | $1.50           | Serverless
SiliconFlow       | $2.00          | $2.00           | Serverless
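Per-request cost from the table above follows directly from the per-million-token prices. A small sketch (provider names and prices taken from the table; the token counts are an arbitrary example):

```python
# USD per 1M tokens, as listed in the providers table.
PROVIDERS = {
    "Microsoft Foundry": {"input": 3.00, "output": 9.00},
    "Chutes AI":         {"input": 0.50, "output": 1.50},
    "SiliconFlow":       {"input": 2.00, "output": 2.00},
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request for the given provider."""
    p = PROVIDERS[provider]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
for name in PROVIDERS:
    print(f"{name}: ${request_cost(name, 10_000, 2_000):.4f}")
```

For that example request, Chutes AI ($0.0080) is the cheapest, SiliconFlow costs $0.0240, and Microsoft Foundry $0.0480.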


Specifications

Family: Mistral
Released: 2024-07-23
Parameters: 123B
Context: 128K
Architecture: Decoder-only
Specialization: general
Training: fine-tuning

Created by

Mistral AI
Enterprise AI solutions for trust and transparency.

Paris, France
Founded 2023