MPT

Databricks Mosaic
CC-BY-NC-SA-4.0

About

The MosaicML Pretrained Transformer (MPT) family is a collection of open-source large language models available for commercial use and designed for diverse applications. The models use a decoder-only, GPT-style architecture with optimized layer implementations for improved performance and training stability. Notably, MPT models remove fixed context-length limits by replacing traditional positional embeddings with ALiBi (Attention with Linear Biases). The family comprises the base model MPT-7B and specialized variants (MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+), each fine-tuned for a distinct task, from instruction following to long-form storytelling. The base model was trained on 1 trillion tokens of text and code, underscoring the family's capability to process and generate high-quality text.
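
Because ALiBi is what lets MPT extrapolate beyond a fixed context length, the following minimal PyTorch sketch shows how per-head linear biases can be constructed and added to attention logits. This is an illustrative reconstruction of the published ALiBi formulation, not MPT's actual implementation; the function name and tensor shapes are assumptions.

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Build the ALiBi bias tensor added to raw attention logits.

    Each head penalizes attention to distant keys with a fixed,
    head-specific linear slope, so no learned positional embeddings
    are needed and longer contexts extrapolate naturally.
    """
    # Geometric head slopes from the ALiBi paper: 2^(-8i/num_heads)
    # for heads i = 1..num_heads (exact when num_heads is a power of 2).
    slopes = torch.tensor(
        [2 ** (-8 * (i + 1) / num_heads) for i in range(num_heads)]
    )
    # Relative offset (key index j - query index i); for causal attention
    # the past keys (j <= i) get a bias that grows more negative with distance.
    distance = torch.arange(seq_len)[None, :] - torch.arange(seq_len)[:, None]
    # Shape (num_heads, seq_len, seq_len), broadcast-added to attention scores.
    return slopes[:, None, None] * distance[None, :, :]

bias = alibi_bias(num_heads=8, seq_len=16)
print(bias.shape)  # torch.Size([8, 16, 16])
```

As a usage sketch, the released checkpoints load through the standard Hugging Face transformers API, assuming the public mosaicml/mpt-7b repository on the Hub. MPT ships custom modeling code, so trust_remote_code=True is required, and the family reuses the EleutherAI/gpt-neox-20b tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    trust_remote_code=True,  # MPT uses custom modeling code
)

inputs = tokenizer("MPT is a decoder-only transformer that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```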

Models (2)

Details

License: CC-BY-NC-SA-4.0
Models: 2

Links

Website