## Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $0.02 |
| Output tokens | $0.04 |
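As a quick illustration of the per-1M-token pricing above, a sketch of the cost arithmetic for a single request (the token counts are hypothetical example values, not from this page):

```python
# Prices taken from the pricing table above (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.02
OUTPUT_PRICE_PER_M = 0.04

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 4,000-token prompt producing a 1,000-token completion:
cost = request_cost(4_000, 1_000)  # → 0.00012 USD
```

At these rates, even a million such requests would cost on the order of $120, which is the kind of back-of-the-envelope check the per-1M pricing is designed for.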
## Capabilities

- Vision
- Multimodal
- Reasoning
- Function Calling
- Tool Use
- JSON Mode
- Code Execution
## About Mistral NeMo (2407)
Mistral NeMo is a 12B-parameter open-source language model developed by Mistral AI, designed for efficient performance on reasoning tasks. With a 128K-token context window, it excels at handling long documents and complex reasoning. The model is optimized for fast inference while maintaining strong performance across multiple benchmarks, making it suitable for enterprise deployments where a balance between performance and resource efficiency is critical.
## Model Specs

| Spec | Value |
|---|---|
| Released | 2024-07-18 |
| Parameters | 12B |
| Context | 128K |
| Architecture | Decoder Only |