LLM Reference

Using Mixtral 8x7B on Vultr

Implementation guide · Mixtral · Mistral AI

Serverless

Quick Start

  1. Create an account at Vultr and generate an API key.
  2. Use the Vultr SDK or REST API to call mixtral-8x7b; see the documentation for the request format.
  3. You'll be billed $0.55 per 1M input tokens and $2.75 per 1M output tokens. See full pricing.

Code Examples

See Vultr documentation for integration details.
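As a starting point, a chat completion request can be sketched with Python's standard library. The endpoint URL and model ID below are assumptions based on the usual OpenAI-compatible request shape; confirm both against the Vultr Serverless Inference documentation before use.

```python
import json
import os
import urllib.request

# Assumed endpoint and model ID -- verify both in the Vultr docs.
API_URL = "https://api.vultrinference.com/v1/chat/completions"

def build_payload(prompt: str, model: str = "mixtral-8x7b") -> dict:
    """OpenAI-style chat-completion request body for a single user turn."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, api_key: str) -> str:
    """POST the request and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage: chat("Explain Mixture of Experts in one sentence.",
#             os.environ["VULTR_API_KEY"])
```

The request body follows the OpenAI chat-completions convention, which most serverless inference providers mirror, so swapping providers usually only requires changing `API_URL` and the model ID.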

About Vultr

Vultr offers cloud GPU infrastructure with NVIDIA H100, A100, and AMD MI355X instances for AI workloads. The platform bills hourly and supports users deploying their own LLM inference workloads; it does not host pre-trained LLM models or provide managed LLM APIs.

Vultr is a cloud infrastructure company headquartered in West Palm Beach, Florida. The company provides infrastructure-as-a-service (IaaS) including bare metal servers, cloud servers, and GPU instances across a global network of 30+ data centers.

Pricing on Vultr

Type           Price (per 1M tokens)
Input tokens   $0.55
Output tokens  $2.75
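The per-million-token rates above translate into per-request cost with simple arithmetic; this small helper just encodes that calculation (the token counts in the example are illustrative):

```python
# Listed Mixtral 8x7B rates on Vultr (USD per 1M tokens).
INPUT_RATE = 0.55
OUTPUT_RATE = 2.75

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request at the per-million-token rates above."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply:
# (2000 * 0.55 + 500 * 2.75) / 1e6 = $0.002475
print(cost_usd(2000, 500))
```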

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Mixtral 8x7B

Mixtral 8x7B, developed by Mistral AI, uses a sparse Mixture of Experts (MoE) architecture: each layer contains eight expert feed-forward networks, and a router activates only two experts per token. Because the experts share the model's other parameters, the total is 46.7 billion parameters rather than the 56 billion the 8x7B name suggests, and the sparse activation yields roughly 6x faster inference than Llama 2 70B.

The model surpasses Llama 2 70B and competes with GPT-3.5 on numerous benchmarks. It supports multiple languages and handles context up to 32,000 tokens, improving comprehension of lengthy text. Designed for diverse tasks, it is strong at code generation and is released under the permissive Apache 2.0 license, encouraging community engagement. Its weights are compatible with common optimization and deployment tools, and Mistral AI continues to improve the model through performance optimizations and fine-tuning.
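The top-2 routing described above can be illustrated with a toy sketch. The dimensions, the linear "experts", and the router weights are all illustrative stand-ins, not Mixtral's actual components; the point is only the mechanism of selecting two of eight experts per token and mixing their outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# Illustrative router and expert weights (real experts are feed-forward nets).
W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-2 experts and mix the outputs."""
    logits = x @ W_gate                       # router score per expert
    top = np.argsort(logits)[-top_k:]         # indices of the 2 best experts
    weights = np.exp(logits[top])
    gates = weights / weights.sum()           # softmax over the selected 2
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

y = moe_layer(rng.normal(size=d_model))
```

Only the two selected experts run per token, which is why compute per token tracks the ~13B active parameters rather than the 46.7B total.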

Model Specs

Released          2023-12-11
Parameters        8x7B (46.7B total)
Context           32K
Architecture      Mixture of Experts
Knowledge cutoff  2023-12

Provider

Vultr

Vultr Holdings Corporation

West Palm Beach, Florida, USA