LLM Reference

DBRX Instruct

dbrx-instruct

Deprecated

About

DBRX Instruct, developed by Databricks, is a large language model designed for a broad range of natural language processing tasks. It excels at text summarization, question answering, information extraction, and code generation, using a fine-grained mixture-of-experts architecture with 132 billion total parameters. With features such as rotary position encodings, gated linear units, and grouped query attention, it performs strongly across multiple benchmarks, outperforming some closed-source models. Trained on a 12 trillion token dataset, it supports contexts of up to 32,000 tokens. It is primarily effective in English; its multilingual capabilities have not been fully evaluated. Users should be cautious, as it may generate inaccurate or biased outputs.
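To illustrate the fine-grained mixture-of-experts design mentioned above, here is a minimal sketch of top-k expert routing. DBRX selects 4 of 16 experts per token; the hidden size, gating weights, and softmax details below are illustrative toy values, not the real model's.

```python
import numpy as np

def moe_route(hidden, gate_w, k=4):
    """Toy top-k expert routing, as in fine-grained MoE models.

    DBRX activates 4 of 16 experts per token; everything else here
    (dimensions, random weights) is illustrative only.
    """
    logits = hidden @ gate_w                     # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of the k best experts
    # Softmax over only the selected experts' logits
    sel = np.take_along_axis(logits, topk, axis=-1)
    weights = np.exp(sel - sel.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return topk, weights

rng = np.random.default_rng(0)
hidden = rng.normal(size=(3, 8))    # 3 tokens, hidden dim 8
gate_w = rng.normal(size=(8, 16))   # router projecting to 16 experts
experts, weights = moe_route(hidden, gate_w)
```

Each token's output is then a weighted sum of its selected experts' outputs, which keeps per-token compute far below that of a dense 132B model.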

DBRX Instruct has a 32K-token context window.

DBRX Instruct reference pricing: input tokens at $0.60 per 1M, output tokens at $1.20 per 1M.

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

Providers (6)

| Provider                            | Input (per 1M) | Output (per 1M) | Type                    |
|-------------------------------------|----------------|-----------------|-------------------------|
| Microsoft Foundry                   | $2.70          | $2.70           | Provisioned             |
| Databricks Foundation Model Serving | $0.75          | $2.25           | Serverless, Provisioned |
| Together AI                         | $1.20          | $1.20           | Serverless              |
| NVIDIA NIM                          | n/a            | n/a             | Provisioned             |
| DeepInfra                           | $0.60          | $1.20           | Serverless              |
| Fireworks AI                        | $1.20          | $1.20           | Serverless              |
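Several of the serverless providers above expose DBRX Instruct through an OpenAI-compatible chat completions endpoint. As a sketch, here is how such a request body is typically constructed; the model identifier `databricks/dbrx-instruct` follows the common convention used by hosts like Together AI and DeepInfra, but check your provider's documentation for the exact model id and base URL.

```python
import json

# Build an OpenAI-compatible chat completions payload for DBRX Instruct.
# The model id is an assumption based on common provider conventions;
# verify it against your provider's model catalog.
def build_chat_request(prompt, model="databricks/dbrx-instruct",
                       max_tokens=256):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = json.dumps(build_chat_request("Summarize mixture-of-experts in one sentence."))
```

The resulting JSON can be POSTed to the provider's `/chat/completions` route with the usual bearer-token authorization header.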

Specifications

Family: DBRX
Released: 2024-03-27
Parameters: 132B
Context: 32K
Architecture: Mixture of Experts
Specialization: general
Training: fine-tuned

Created by

Databricks

Advancing AI research and model development.

San Francisco, California, United States
Founded 2013