Using Mixtral 8x7B Instruct v0.1 on DeepInfra
Implementation guide · Mixtral · MistralAI
Serverless · Open Source
Quick Start

Use the DeepInfra SDK or REST API to call `mixtral-8x7b-instruct-v0.1`; see the documentation for the request format.
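As a sketch of the REST route, the request body below assumes DeepInfra's OpenAI-compatible chat completions endpoint (`POST https://api.deepinfra.com/v1/openai/chat/completions` with a Bearer token); the `max_tokens` value is illustrative.

```python
import json

# Assumed shape of the JSON body for the OpenAI-compatible
# chat completions endpoint.
payload = {
    "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 256,  # illustrative value
}
headers = {
    "Authorization": "Bearer $DEEPINFRA_API_KEY",  # substitute your real key
    "Content-Type": "application/json",
}
body = json.dumps(payload)
print(body)
```

Send `body` with any HTTP client (e.g. `requests.post`) to the endpoint above.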
Code Examples
Install

```
pip install openai
```

API key: `DEEPINFRA_API_KEY`

Model ID: `mistralai/Mixtral-8x7B-Instruct-v0.1`

DeepInfra uses "organization/model-name" format, e.g. "meta-llama/Meta-Llama-3-8B-Instruct" or "mistralai/Mistral-7B-Instruct-v0.3". See the DeepInfra model catalog for exact IDs.
```python
import os
from openai import OpenAI

# DeepInfra exposes an OpenAI-compatible endpoint,
# so the standard OpenAI client works unchanged.
client = OpenAI(
    api_key=os.environ["DEEPINFRA_API_KEY"],
    base_url="https://api.deepinfra.com/v1/openai",
)

response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

About DeepInfra
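For multi-turn chat, the pattern is to keep a running `messages` list and append each assistant reply before the next user turn. A minimal sketch, with `call_model` as a stand-in for the `client.chat.completions.create(...)` call so it runs without network access:

```python
# `call_model` is a placeholder: a real implementation would send
# `messages` to the API and return response.choices[0].message.content.
def call_model(messages):
    return f"(reply to: {messages[-1]['content']})"

def chat_turn(messages, user_text):
    # Append the user turn, get a reply, and record it in the history.
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
chat_turn(history, "Hello")
chat_turn(history, "Tell me about Mixtral.")
print(len(history))  # system message + 2 user/assistant pairs = 5
```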
DeepInfra is a cloud inference platform offering cost-effective, serverless access to hundreds of open-source models from Meta, Mistral, Alibaba, and others, spanning text generation, embeddings, and more, with pay-per-token pricing and no upfront commitments.
Pricing on DeepInfra
| Type | Price (per 1M tokens) |
|---|---|
| Input tokens | $0.15 |
| Output tokens | $0.45 |
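Cost per request follows directly from the table: multiply token counts by the per-million rates. A quick sketch using the prices above:

```python
# Per-1M-token rates from the pricing table above.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.45

def estimate_cost(input_tokens, output_tokens):
    """Estimated USD cost for one request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${estimate_cost(2000, 500):.6f}")  # $0.000525
```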
Capabilities
Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
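Function calling and tool use work through the OpenAI-style `tools` parameter on `client.chat.completions.create(...)`. A hypothetical sketch of a tool definition (the `get_weather` function and its parameters are invented for illustration):

```python
# Hypothetical OpenAI-style tool schema, as it would be passed via
# client.chat.completions.create(..., tools=tools).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # made-up example function
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]
print(tools[0]["function"]["name"])
```

When the model decides to call a tool, the response carries the function name and JSON arguments for your code to execute.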
Model Specs
Released: 2023-12-10
Parameters: 46.7B total (12.9B active per token)
Context: 32K (32,768 tokens)
Architecture: Sparse Mixture of Experts (decoder-only)
Knowledge cutoff: 2023-12
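A practical consequence of the 32K context is budgeting prompt length before sending a request. A rough sketch using the common ~4 characters-per-token heuristic (an approximation, not a real tokenizer):

```python
CONTEXT_TOKENS = 32_768

def approx_tokens(text):
    # Crude heuristic: ~4 characters per token for English text.
    return len(text) // 4

def fits_in_context(prompt, reserved_for_output=1024):
    # Leave headroom for the completion when checking the budget.
    return approx_tokens(prompt) + reserved_for_output <= CONTEXT_TOKENS

print(fits_in_context("Hello" * 10))    # short prompt: True
print(fits_in_context("x" * 200_000))   # ~50K tokens: False
```

For exact counts, use the model's own tokenizer rather than this heuristic.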