
Stable LM 7B on Stability Developer Platform


Serverless

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Stable LM 7B

StableLM 7B is a large language model from Stability AI with a 7-billion-parameter, decoder-only architecture that predicts the next token from the preceding context. Built on the GPT-NeoX transformer architecture, it handles long text sequences with a 4096-token context window. Pre-trained on roughly 1.5 trillion tokens, the model generates fluent, human-like text and performs tasks such as summarization and translation. It shares the common limitations of other LLMs, including potential bias and the generation of inappropriate content. While StableLM 7B remains versatile, it has been succeeded by improved versions such as StableLM-Base-Alpha-7B-v2, which address shortcomings of the original model.
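The decoder-only, next-token mechanism described above can be sketched with a toy stand-in. This is a minimal illustration, not StableLM's actual forward pass: `toy_decoder` is a hypothetical placeholder that returns seeded random logits where a real transformer would compute them, and the vocabulary size of 8 is arbitrary. Only the 4096-token context window matches the model's real specification.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary dimension.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def toy_decoder(context, vocab_size=8, seed=0):
    # Hypothetical stand-in for a decoder-only LM: maps a token
    # context to next-token logits. A real model such as StableLM 7B
    # computes these with a transformer; here we just draw seeded
    # random values that depend on the context.
    rng = np.random.default_rng(seed + sum(context))
    return rng.normal(size=vocab_size)

def generate(context, max_new_tokens=5, context_window=4096):
    # Greedy autoregressive decoding: repeatedly predict the most
    # likely next token and append it, truncating the context to the
    # model's window (4096 tokens for StableLM 7B).
    tokens = list(context)
    for _ in range(max_new_tokens):
        window = tokens[-context_window:]
        probs = softmax(toy_decoder(window))
        tokens.append(int(probs.argmax()))
    return tokens

print(generate([1, 2, 3], max_new_tokens=4))
```

Each generated token becomes part of the context for the next prediction, which is why context-window length bounds how much preceding text the model can condition on.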


Model Specs

Released: 2023-04-20
Parameters: 7B
Architecture: Decoder-only