Why use Mistral Large on Fireworks AI?
Fireworks AI is a generative AI platform-as-a-service focused on rapid product iteration and cost-efficient AI deployment. It offers Mistral Large with pay-as-you-go pricing at $0.90 per 1M input tokens.
Compare Mistral Large across 8 providers to find the best fit for your use case.
Compare Mistral Large Across Providers
| Provider | Input (per 1M) | Output (per 1M) |
|---|---|---|
| NVIDIA NIM | — | — |
| Microsoft Foundry | $4.00 | $12.00 |
| AWS Bedrock | $2.00 | $6.00 |
| Mistral AI Studio | $2.00 | $6.00 |
| IBM watsonx | $10.00 | $10.00 |
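Because providers price input and output tokens differently, the cheapest option depends on your workload's token mix. The sketch below ranks the providers from the table by a blended per-1M-token rate, assuming an illustrative 3:1 input-to-output ratio (the ratio is an assumption, not a published figure; NVIDIA NIM is omitted because its prices are not listed).

```python
# Blended cost per 1M tokens for each provider listed above.
# Prices are (input, output) per 1M tokens; the 3:1 input/output
# split is an illustrative assumption about workload shape.
providers = {
    "Microsoft Foundry": (4.00, 12.00),
    "AWS Bedrock": (2.00, 6.00),
    "Mistral AI Studio": (2.00, 6.00),
    "IBM watsonx": (10.00, 10.00),
    "Fireworks AI": (0.90, 0.90),
}

def blended(input_price: float, output_price: float,
            input_share: float = 0.75) -> float:
    """Weighted average price per 1M tokens for a given input share."""
    return input_price * input_share + output_price * (1 - input_share)

# Rank providers from cheapest to most expensive for this mix.
for name, (inp, out) in sorted(providers.items(),
                               key=lambda kv: blended(*kv[1])):
    print(f"{name}: ${blended(inp, out):.2f} per 1M tokens")
```

For this mix, a flat-rate provider like Fireworks AI ($0.90 blended) undercuts providers whose output tokens cost several times their input tokens.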
Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $0.90 |
| Output tokens | $0.90 |
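Since input and output tokens are billed at the same flat rate on Fireworks AI, estimating a request's cost is a single multiplication. A minimal sketch using the rates above (the function name and example token counts are illustrative):

```python
# Estimate Fireworks AI cost for one Mistral Large request using the
# listed rates: $0.90 per 1M tokens for both input and output.
PRICE_PER_M_INPUT = 0.90
PRICE_PER_M_OUTPUT = 0.90

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token completion.
print(f"${estimate_cost(4_000, 1_000):.6f}")  # → $0.004500
```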
Capabilities
Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution
About Mistral Large
Mistral Large is also available on AWS Bedrock.
FAQ
What does Mistral Large cost on Fireworks AI?
On Fireworks AI, Mistral Large costs $0.90 per 1M input tokens and $0.90 per 1M output tokens.
What is the context window for Mistral Large on Fireworks AI?
Mistral Large supports a 32,000 token context window on Fireworks AI.
How does Fireworks AI compare to other Mistral Large providers?
Mistral Large is available from 8 providers. The cheapest input pricing is $0.32/1M tokens from GCP Vertex AI.
Who created Mistral Large?
Mistral Large was created by Mistral AI as part of the Mistral Large model family.
Is Mistral Large open source?
Mistral Large is not open source; it is a proprietary model.
Get Started
Model Specs
Released: 2024-02-08
Context: 32k
Knowledge cutoff: 2024-03