LLM Reference

Mistral Large

About

Mistral Large is a state-of-the-art text-generation model known for its advanced reasoning capabilities. It handles complex multilingual tasks such as text understanding, transformation, and code generation. A prominent feature is its 32K-token context window, which allows accurate information recall from lengthy documents. Its strong instruction-following abilities let developers tailor moderation policies effectively, and native function calling supports application development and large-scale tech-stack modernization. Although Mistral AI has not published the model's architectural details, Mistral Large performs impressively on a range of benchmarks, positioning it competitively among leading large language models. Subsequent versions, such as 2407, bring further enhancements.
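The native function calling mentioned above is exposed through the chat-completions API: the request declares a tool schema, and the model may answer with a structured tool call instead of free text. A minimal sketch follows, assuming Mistral's public chat-completions endpoint and a hypothetical `get_order_status` tool; the payload shape mirrors the documented `tools`/`tool_choice` parameters, but verify names against your provider's API reference.

```python
import json
import os
import urllib.request

# Assumption: Mistral's public chat-completions endpoint.
API_URL = "https://api.mistral.ai/v1/chat/completions"


def build_function_call_request(question: str) -> dict:
    """Build a chat request that exposes one tool the model may choose to call."""
    return {
        "model": "mistral-large-latest",
        "messages": [{"role": "user", "content": question}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_order_status",  # hypothetical tool name
                    "description": "Look up the status of an order by ID.",
                    "parameters": {
                        "type": "object",
                        "properties": {"order_id": {"type": "string"}},
                        "required": ["order_id"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }


payload = build_function_call_request("What is the status of order A123?")

# Send only when a key is configured; otherwise just show the payload.
api_key = os.environ.get("MISTRAL_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(resp.read().decode())
else:
    print(json.dumps(payload, indent=2))
```

If the model decides to use the tool, the response contains a `tool_calls` entry with the function name and JSON-encoded arguments, which your application executes before returning the result in a follow-up message.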

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode
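JSON mode, listed among the capabilities above, constrains the model to emit valid JSON. A minimal payload sketch, assuming the `response_format` field of Mistral's chat-completions API; the model name and prompt are illustrative:

```python
import json


def build_json_mode_request(prompt: str) -> dict:
    """Build a chat request that asks for a guaranteed-JSON response."""
    return {
        "model": "mistral-large-latest",
        "messages": [
            # JSON mode generally also requires the prompt itself to ask for JSON.
            {"role": "user", "content": prompt + " Respond in JSON."},
        ],
        "response_format": {"type": "json_object"},
    }


print(json.dumps(build_json_mode_request("List three EU capitals."), indent=2))
```

With this flag set, the response body can be passed straight to `json.loads` without the usual defensive parsing of free-form text.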

Providers (6)

Provider                   Input (per 1M)   Output (per 1M)   Type
NVIDIA NIM                 —                —                 Provisioned
Azure OpenAI               $4.00            $12.00            Serverless
AWS Bedrock                $4.00            $12.00            Serverless
Snowflake Cortex           $10.20           $10.20            Serverless
Mistral AI La Plateforme   —                —                 Serverless
IBM watsonx                $10.00           $10.00            Serverless
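The per-token prices in the table above translate into request costs by simple arithmetic: tokens times the per-1M rate, divided by one million. A small sketch using the listed serverless rates (providers without published prices are omitted):

```python
# USD per 1M tokens (input, output), from the provider table above.
PRICES = {
    "Azure OpenAI": (4.00, 12.00),
    "AWS Bedrock": (4.00, 12.00),
    "Snowflake Cortex": (10.20, 10.20),
    "IBM watsonx": (10.00, 10.00),
}


def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed serverless rates."""
    in_price, out_price = PRICES[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000


# A 30K-token prompt (near the 32K context limit) with a 1K-token reply:
cost = estimate_cost("AWS Bedrock", 30_000, 1_000)
print(f"${cost:.3f}")  # 30_000*4/1e6 + 1_000*12/1e6 = $0.132
```

Note the asymmetry on Azure and Bedrock: output tokens cost three times as much as input tokens, so long completions dominate the bill even for short prompts.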

Specifications

Family           Mistral
Released         2024-02-26
Context          32K
Architecture     Decoder-only
Specialization   General