LLM Reference
NVIDIA NIM

Mistral Large on NVIDIA NIM

Mistral Large · MistralAI

Provisioned

Why use Mistral Large on NVIDIA NIM?

NVIDIA NIM offers Mistral Large with competitive pricing. NVIDIA NIM is NVIDIA's deployment platform for GPU-accelerated inference microservices.

Compare Mistral Large across 8 providers to find the best fit for your use case.

Compare Mistral Large Across Providers

| Provider | Input (per 1M) | Output (per 1M) |
| --- | --- | --- |
| NVIDIA NIM | — (GPU-hour priced) | — (GPU-hour priced) |
| Microsoft Foundry | $4.00 | $12.00 |
| AWS Bedrock | $2.00 | $6.00 |
| Mistral AI Studio | $2.00 | $6.00 |
| IBM watsonx | $10.00 | $10.00 |
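The token-priced rates above can be turned into a quick workload estimate. A minimal sketch, using the listed per-1M-token prices; the monthly traffic figures are hypothetical:

```python
# Listed rates in USD per 1M tokens: provider -> (input, output).
PRICES = {
    "Microsoft Foundry": (4.00, 12.00),
    "AWS Bedrock": (2.00, 6.00),
    "Mistral AI Studio": (2.00, 6.00),
    "IBM watsonx": (10.00, 10.00),
}

def monthly_cost(input_tokens: int, output_tokens: int) -> dict:
    """USD cost of one month's traffic at each provider's listed rates."""
    return {
        name: round(input_tokens / 1e6 * p_in + output_tokens / 1e6 * p_out, 2)
        for name, (p_in, p_out) in PRICES.items()
    }

# Hypothetical workload: 50M input + 10M output tokens per month.
costs = monthly_cost(50_000_000, 10_000_000)
# AWS Bedrock and Mistral AI Studio come out cheapest at these rates.
```

NVIDIA NIM is omitted here because it is provisioned (GPU-hour priced) rather than per-token priced.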

Pricing

| Type | Rate |
| --- | --- |
| GPU Hour Rate | $1.00/GPU·hr |
| GPU Config | 4xH100 |
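With provisioned GPU-hour pricing, the effective per-token cost depends on sustained throughput rather than a fixed rate card. A minimal sketch, assuming the $1.00/GPU·hr rate on the 4xH100 config above; the throughput figure is a hypothetical input, not a measured number:

```python
GPU_HOUR_RATE = 1.00   # USD per GPU-hour, from the pricing table
GPUS = 4               # 4xH100 config

def effective_price_per_1m(tokens_per_second: float) -> float:
    """Effective USD per 1M tokens at a sustained aggregate throughput."""
    hourly_cost = GPU_HOUR_RATE * GPUS            # $4.00/hr for the config
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost / tokens_per_hour * 1e6

# At a hypothetical sustained 1,000 tok/s across the config:
price = effective_price_per_1m(1000)   # ≈ $1.11 per 1M tokens
```

The break-even point against a token-priced provider follows directly: the higher your sustained utilization, the cheaper the provisioned option becomes per token.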

Capabilities

- Vision
- Multimodal
- Reasoning
- Function Calling
- Tool Use
- Structured Outputs
- Code Execution
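NIM microservices generally expose an OpenAI-compatible chat API, so function calling is expressed as a `tools` array in the request body. A minimal sketch of such a request; the model identifier and the `get_weather` tool are illustrative assumptions, not verified names:

```python
import json

request_body = {
    "model": "mistralai/mistral-large",   # assumed model identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    # One hypothetical tool the model may choose to call.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

payload = json.dumps(request_body)  # body to POST to the chat endpoint
```

If the model decides to use the tool, the response contains a `tool_calls` entry with the function name and JSON-encoded arguments, which your code executes and feeds back as a `tool` role message.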

About Mistral Large

Mistral Large is a proprietary model created by Mistral AI. Beyond NVIDIA NIM, it is also available through other providers, including AWS Bedrock.

FAQ

What is the context window for Mistral Large on NVIDIA NIM?

Mistral Large supports a 32,000-token context window on NVIDIA NIM.
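A rough pre-flight check against that 32,000-token window can be sketched as follows. The chars-divided-by-4 heuristic is a common approximation for English text, not a substitute for the model's real tokenizer, and the reserved-output default is an assumption:

```python
CONTEXT_WINDOW = 32_000  # tokens, per the FAQ above

def fits_in_context(prompt: str, reserved_output_tokens: int = 1024) -> bool:
    """Rough check that prompt + expected completion fit in the window."""
    estimated_prompt_tokens = len(prompt) // 4  # crude chars/4 heuristic
    return estimated_prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOW

fits_in_context("Summarize this document.")  # a short prompt easily fits
```

For production use, count tokens with the model's actual tokenizer before truncating or chunking input.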

How does NVIDIA NIM compare to other Mistral Large providers?

Mistral Large is available from 8 providers. The cheapest input pricing is $0.32/1M tokens from GCP Vertex AI.

Who created Mistral Large?

Mistral Large was created by Mistral AI as part of the Mistral Large model family.

Is Mistral Large open source?

No. Mistral Large is a proprietary model; its weights are not openly released.

Get Started