LLM Reference
DeepInfra

Using Mixtral 8x7B Instruct v0.1 on DeepInfra

Implementation guide · Mixtral · MistralAI

Serverless · Open Source

Quick Start

  1. Create an account at DeepInfra and generate an API key.
  2. Use the DeepInfra SDK or the OpenAI-compatible REST API to call mistralai/Mixtral-8x7B-Instruct-v0.1 — see the documentation for the request format.
  3. You'll be billed $0.15/1M input tokens and $0.45/1M output tokens. See full pricing.
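The REST call from step 2 can be sketched with curl against DeepInfra's OpenAI-compatible endpoint. This is a request-format sketch, not runnable without a live API key in `DEEPINFRA_API_KEY`:

```shell
# Chat completion via DeepInfra's OpenAI-compatible REST endpoint.
# Requires DEEPINFRA_API_KEY to be set in the environment.
curl https://api.deepinfra.com/v1/openai/chat/completions \
  -H "Authorization: Bearer $DEEPINFRA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

The response follows the OpenAI chat-completions JSON shape, so the generated text is at `choices[0].message.content`.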

Code Examples

Install: pip install openai
API key: DEEPINFRA_API_KEY
Model ID: mistralai/Mixtral-8x7B-Instruct-v0.1

DeepInfra uses "organization/model-name" format, e.g. "meta-llama/Meta-Llama-3-8B-Instruct" or "mistralai/Mistral-7B-Instruct-v0.3"; this model is "mistralai/Mixtral-8x7B-Instruct-v0.1". See the DeepInfra model catalog for exact IDs.

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPINFRA_API_KEY"],
    base_url="https://api.deepinfra.com/v1/openai"
)
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
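The chat.completions endpoint above applies Mixtral's chat template server-side. If you instead call a raw text-completion endpoint, you have to format the prompt yourself. A simplified sketch of Mixtral's instruction template (the authoritative version lives in the model's tokenizer config):

```python
def format_mixtral_prompt(messages):
    """Wrap user turns in [INST] tags; close assistant turns with </s>.

    Simplified Mixtral 8x7B Instruct template: no system-role handling.
    """
    prompt = "<s>"
    for m in messages:
        if m["role"] == "user":
            prompt += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            prompt += f" {m['content']}</s>"
    return prompt

print(format_mixtral_prompt([{"role": "user", "content": "Hello"}]))
# <s>[INST] Hello [/INST]
```

For multi-turn conversations, prior assistant replies are appended after their `[/INST]` tag and terminated with `</s>` before the next user turn.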

About DeepInfra

DeepInfra is a cloud inference platform offering cost-effective, serverless access to open-source AI models through a simple API. It supports hundreds of models across text generation, embeddings, and more, including leading models from Meta, Mistral, and Alibaba, with pay-per-token pricing and no upfront commitments.

Pricing on DeepInfra

Type           Price (per 1M tokens)
Input tokens   $0.15
Output tokens  $0.45
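Cost scales linearly with token counts, so a request's price is easy to estimate from the table above. A minimal sketch (the constants are the prices listed here; this is not an official SDK feature):

```python
# Prices from the table above, in USD per 1M tokens.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.45

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from input/output token counts."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. 10k input tokens and 2k output tokens:
print(round(estimate_cost(10_000, 2_000), 6))  # 0.0024
```

Actual billed token counts come back in the `usage` field of each API response.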

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Mixtral 8x7B Instruct v0.1

Mixtral 8x7B Instruct v0.1 via DeepInfra

Model Specs

Released: 2023-12-10
Parameters: 56B
Context: 33K
Architecture: Decoder Only
Knowledge cutoff: 2023-12

Provider

DeepInfra

San Francisco, California, United States