LLM Reference
Fireworks AI

Mistral Small on Fireworks AI

Mistral Small · MistralAI

Serverless

Why use Mistral Small on Fireworks AI?

Fireworks AI offers Mistral Small at competitive pricing on its generative AI platform-as-a-service, which focuses on rapid product iteration and cost-efficient AI deployment.
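Fireworks AI exposes an OpenAI-compatible chat completions endpoint, so a request can be sketched with nothing but the standard library. This is a minimal sketch: the model slug `accounts/fireworks/models/mistral-small` is a placeholder assumption, so check the Fireworks AI model catalog for the exact id before use.

```python
import json

# Placeholder slug; confirm the exact model id in the Fireworks AI catalog.
MODEL_ID = "accounts/fireworks/models/mistral-small"
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_chat_request(api_key: str, prompt: str, max_tokens: int = 256) -> tuple[str, dict, bytes]:
    """Assemble the URL, headers, and JSON body for an OpenAI-compatible chat call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return API_URL, headers, body

# Build (but do not send) a request; pass the tuple to any HTTP client to execute it.
url, headers, body = build_chat_request("YOUR_FIREWORKS_API_KEY", "Say hello in one word.")
```

The same payload works with the official OpenAI Python client by pointing its `base_url` at `https://api.fireworks.ai/inference/v1`.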

Compare Mistral Small across 5 providers to find the best fit for your use case.

Compare Mistral Small Across Providers

Provider            Input (per 1M)   Output (per 1M)
Microsoft Foundry   $1.00            $3.00
AWS Bedrock         $1.00            $3.00
Mistral AI Studio   $0.10            $0.30
Fireworks AI        (not listed)     (not listed)
DeepInfra           (not listed)     (not listed)

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
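Function calling and tool use follow the OpenAI-compatible `tools` schema. A minimal sketch of a tool-calling payload, assuming the placeholder model slug and a hypothetical `get_weather` tool:

```python
import json

# Hypothetical tool definition in the OpenAI-compatible JSON Schema format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "accounts/fireworks/models/mistral-small",  # placeholder slug
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# Serialize for POSTing to the chat completions endpoint.
request_body = json.dumps(payload)
```

When the model decides to call the tool, the response carries a `tool_calls` entry whose arguments you parse and execute, then feed back as a `tool` role message.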

About Mistral Small


FAQ

What is the context window for Mistral Small on Fireworks AI?

Mistral Small supports a 32,000 token context window on Fireworks AI.
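A quick way to stay inside the 32,000-token window is a rough pre-flight budget check. This sketch uses an assumed heuristic of about 4 characters per token; exact counts require the model's own tokenizer.

```python
CONTEXT_WINDOW = 32_000   # Mistral Small context on Fireworks AI (per this page)
CHARS_PER_TOKEN = 4       # rough heuristic; real counts need the model tokenizer

def fits_in_context(prompt: str, max_output_tokens: int) -> bool:
    """Estimate whether prompt plus requested output fits the 32K window."""
    est_prompt_tokens = len(prompt) // CHARS_PER_TOKEN + 1
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW
```

For anything near the limit, count tokens precisely with the actual tokenizer rather than relying on this heuristic.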

How does Fireworks AI compare to other Mistral Small providers?

Mistral Small is available from 5 providers. The cheapest input pricing is $0.10/1M tokens from Mistral AI Studio.

Who created Mistral Small?

Mistral Small was created by MistralAI as part of the Mistral Small model family.

Is Mistral Small open source?

Mistral Small is open source under the Apache 2.0 license.

Get Started

Model Specs

Released       2024-02-26
Context        32K
Architecture   Decoder Only

Related Models on Fireworks AI