LLM Reference
Hugging Face Inference Endpoints

Mistral 7B v0.1 on Hugging Face Inference Endpoints

Mistral 7B · MistralAI

Serverless

Why use Mistral 7B v0.1 on Hugging Face Inference Endpoints?

Hugging Face Inference Endpoints offers Mistral 7B v0.1 with pay-as-you-go pricing: $0.05 per 1M input tokens and $0.15 per 1M output tokens. Hugging Face is a leading AI community and platform dedicated to democratizing artificial intelligence.

Compare Mistral 7B v0.1 across 16 providers to find the best fit for your use case.

Compare Mistral 7B v0.1 Across Providers

Provider                   Input (per 1M)   Output (per 1M)
GCP Vertex AI              $0.08            $0.24
OctoAI API (Deprecated)    $0.15            $0.15
DeepInfra                  $0.05            $0.15
Mistral AI Studio          $0.25            $0.25
Baseten API
View all 16 providers →

Pricing

Type             Price (per 1M)
Input tokens     $0.05
Output tokens    $0.15
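The per-token rates above translate into a simple cost formula. A minimal sketch in Python, using the rates from the pricing table (the function name is illustrative):

```python
# Pay-as-you-go rates for Mistral 7B v0.1 on Hugging Face
# Inference Endpoints, in USD per 1M tokens (from the table above).
INPUT_RATE_PER_M = 0.05
OUTPUT_RATE_PER_M = 0.15

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a request from its token counts."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000175
```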

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Mistral 7B v0.1

Mistral 7B v0.1 is an open-source large language model from Mistral AI with 7 billion parameters. It is designed for high performance and efficiency, outperforming many similarly sized models across a range of benchmarks. The model uses a transformer architecture with features such as Sliding Window Attention, Grouped-Query Attention, and a byte-fallback BPE tokenizer, which improve speed, reduce computational cost, and increase robustness. It generates fluent, human-like text, follows instructions effectively, and performs well on reasoning and mathematics tasks. The model does have limitations, including a lack of built-in moderation and a potential for hallucinations; subsequent versions have sought to address these while adding extended context windows and improved instruction following.

FAQ

What does Mistral 7B v0.1 cost on Hugging Face Inference Endpoints?

On Hugging Face Inference Endpoints, Mistral 7B v0.1 costs $0.05 per 1M input tokens and $0.15 per 1M output tokens.

What is the context window for Mistral 7B v0.1 on Hugging Face Inference Endpoints?

Mistral 7B v0.1 supports an 8,000-token context window on Hugging Face Inference Endpoints.

How does Hugging Face Inference Endpoints compare to other Mistral 7B v0.1 providers?

Mistral 7B v0.1 is available from 16 providers. The lowest input price is $0.05/1M tokens, offered by DeepInfra and matched by Hugging Face Inference Endpoints.
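Given per-provider rates like those in the comparison table, the cheapest option can be found programmatically. A small sketch using a partial subset of the 16 providers listed above:

```python
# Input prices per 1M tokens for some of the 16 providers in the
# comparison table above (USD; partial list for illustration).
input_prices = {
    "GCP Vertex AI": 0.08,
    "DeepInfra": 0.05,
    "Mistral AI Studio": 0.25,
    "Hugging Face Inference Endpoints": 0.05,
}

# min() with a key function returns the first provider with the
# lowest price in insertion order, so DeepInfra wins the tie here.
cheapest = min(input_prices, key=input_prices.get)
print(cheapest, input_prices[cheapest])  # → DeepInfra 0.05
```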

Who created Mistral 7B v0.1?

Mistral 7B v0.1 was created by Mistral AI as part of the Mistral 7B model family.

Is Mistral 7B v0.1 open source?

Yes. Mistral 7B v0.1 is open source, released under the Apache 2.0 license.

Get Started
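Once an endpoint is deployed, it can be called over HTTP. A minimal sketch, assuming an endpoint exposing the standard text-generation interface (`{"inputs": ..., "parameters": ...}` payload); the URL and token below are placeholders you must replace with your own:

```python
import json
import urllib.request

# Placeholders: substitute your deployed endpoint URL and your
# Hugging Face access token.
ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"
HF_TOKEN = "hf_YOUR_TOKEN"

def build_request(prompt: str, max_new_tokens: int = 128) -> dict:
    """Build the JSON payload for a text-generation endpoint."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}

def generate(prompt: str) -> str:
    """POST the prompt to the endpoint and return the generated text."""
    req = urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)[0]["generated_text"]
```

Calling `generate("The capital of France is")` would then return the model's completion, billed at the per-token rates listed above.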