
Mistral 7B v0.1 on Microsoft Foundry

Mistral 7B · Mistral AI

Provisioned

Compare Mistral 7B v0.1 Across Providers

Provider             Input (per 1M)   Output (per 1M)
GCP Vertex AI        —                —
OctoAI API           $0.15            $0.15
DeepInfra            $0.05            $0.15
Mistral AI Studio    $0.06            $0.18
Baseten API          —                —

…plus 9 more (14 providers in total).

Pricing

Type             Price (per 1M)
Input tokens     $0.14
Output tokens    $0.14
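To make the per-1M rates concrete, here is a minimal sketch of estimating a single request's cost from token counts. The rates come from the pricing table above; the function name and example token counts are illustrative.

```python
INPUT_PER_M = 0.14   # USD per 1M input tokens (from the pricing table above)
OUTPUT_PER_M = 0.14  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-1M rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # $0.000350
```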

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
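As a sketch of what function calling looks like in practice, the snippet below builds a request body in the widely used OpenAI-compatible "tools" format. The model identifier, the `get_weather` tool, and the field layout are assumptions for illustration; the exact endpoint and schema for any given provider should be checked against that provider's API documentation.

```python
import json

# Hypothetical function-calling request body (OpenAI-compatible "tools" schema).
payload = {
    "model": "mistral-7b-v0.1",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

The model replies with a structured tool call naming the function and its arguments, which your code then executes and feeds back as a follow-up message.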

About Mistral 7B v0.1

Mistral 7B v0.1 is an open-source large language model from Mistral AI with 7 billion parameters. It is designed for high performance and efficiency, outperforming many similarly sized models across a range of benchmarks. The model uses a transformer architecture with Sliding Window Attention, Grouped-Query Attention, and a byte-fallback BPE tokenizer, which together improve inference speed, reduce computational cost, and make tokenization more robust. It generates fluent, human-like text, follows instructions effectively, and performs strongly on reasoning and mathematics tasks, but it has notable limitations: it ships without built-in moderation and can hallucinate. Subsequent versions have sought to address these limitations while adding extended context windows and improved instruction following.
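The Sliding Window Attention mentioned above can be illustrated with a toy mask: each token attends only to itself and the previous (window − 1) positions, rather than the entire causal prefix, which bounds per-token attention cost. This is a simplified sketch; Mistral 7B's actual window is 4096 tokens, and we use 3 here for readability.

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[int]]:
    """mask[i][j] == 1 when position i may attend to position j:
    j must be causal (j <= i) and within the last `window` positions."""
    return [
        [1 if (j <= i and j > i - window) else 0 for j in range(seq_len)]
        for i in range(seq_len)
    ]

for row in sliding_window_mask(6, 3):
    print(row)
```

With a full causal mask, row i has i + 1 ones; here each row is capped at `window` ones, so attention cost per token stays constant as the sequence grows.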
