LLM Reference

Mistral NeMo (2407)

About

Mistral NeMo is a 12B-parameter open-source language model developed by Mistral AI in collaboration with NVIDIA, designed for efficient performance on reasoning tasks. With a 128K-token context window, it handles long documents and multi-step reasoning well. The model is optimized for fast inference while maintaining strong results across multiple benchmarks, making it suitable for enterprise deployments where a balance between performance and resource efficiency is critical.
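For capacity planning, the 128K-token window can be sanity-checked against a document's size with a rough character-count heuristic. A minimal sketch, assuming the common ~4 characters/token ratio (an approximation only; actual counts depend on the tokenizer):

```python
# Rough fit check against Mistral NeMo's 128K-token context window.
# CHARS_PER_TOKEN = 4 is a heuristic assumption, not an exact tokenizer ratio.
CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4

def fits_in_context(text_chars: int, reserved_output_tokens: int = 1_000) -> bool:
    """Estimate whether a document of `text_chars` characters fits,
    leaving room for `reserved_output_tokens` of generated output."""
    estimated_input_tokens = text_chars / CHARS_PER_TOKEN
    return estimated_input_tokens + reserved_output_tokens <= CONTEXT_TOKENS

print(fits_in_context(400_000))  # ~100K tokens: fits
print(fits_in_context(600_000))  # ~150K tokens: does not fit
```

For production use, an exact tokenizer (e.g. the model's own) should replace the character heuristic.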

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (5)

Provider            Input (per 1M)   Output (per 1M)   Type
Mistral AI Studio   $0.15            $0.15             Serverless
OpenRouter          $0.02            $0.04             Serverless
Fireworks AI        $0.20            $0.20             Serverless
Bitdeer AI          $0.18            $0.54             Serverless
SiliconFlow         $0.30            $0.30             Serverless
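The per-1M-token rates above translate to per-request cost with simple arithmetic. A minimal sketch using the table's listed prices (prices change; verify with each provider before relying on them):

```python
# Per-request cost at each provider's listed per-1M-token rates.
# Prices taken from the table above and may be out of date.
PRICES = {  # provider: (input USD per 1M tokens, output USD per 1M tokens)
    "Mistral AI Studio": (0.15, 0.15),
    "OpenRouter": (0.02, 0.04),
    "Fireworks AI": (0.20, 0.20),
    "Bitdeer AI": (0.18, 0.54),
    "SiliconFlow": (0.30, 0.30),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the given provider."""
    in_rate, out_rate = PRICES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 10K-token prompt with a 1K-token reply on OpenRouter:
print(round(request_cost("OpenRouter", 10_000, 1_000), 6))  # 0.00024
```

Note that Bitdeer AI's output rate is 3x its input rate, so output-heavy workloads shift the ranking compared with input-heavy ones.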

Specifications

Family: NeMo
Released: 2024-07-18
Parameters: 12B
Context: 128K
Architecture: Decoder-only
Specialization: General
Training: Fine-tuning

Created by

Mistral AI
Enterprise AI solutions for trust and transparency.

Paris, France
Founded 2023