Stable LM 7B

About

StableLM 7B is a large language model from Stability AI with a 7-billion-parameter, decoder-only architecture that predicts the next token from the preceding context. Built on the GPT-NeoX transformer architecture, it handles long text sequences with a 4096-token context window. Pre-trained on a dataset of roughly 1.5 trillion tokens, it generates fluent, human-like text and performs tasks such as summarization and translation. It shares the usual limitations of LLMs, including the potential for bias and for generating inappropriate content. Although versatile, StableLM 7B has since been succeeded by improved versions such as StableLM-Base-Alpha-7B-v2, which address several shortcomings of the original model.
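The fixed context window mentioned above can be illustrated with a small sketch. This is not the model's real tokenization or inference code; it only shows, with made-up token IDs, how a decoder-only model conditions next-token prediction on at most the last 4096 tokens, dropping anything older.

```python
# Illustrative sketch only: `truncate_to_context` is a hypothetical helper,
# and the integer lists stand in for token IDs from the model's tokenizer.

CONTEXT_WINDOW = 4096  # StableLM 7B's maximum context length, in tokens

def truncate_to_context(token_ids: list[int], window: int = CONTEXT_WINDOW) -> list[int]:
    """Keep only the most recent `window` tokens; a decoder-only model
    predicts the next token from at most that much preceding context."""
    return token_ids[-window:]

# A prompt longer than the window silently loses its oldest tokens.
long_prompt = list(range(5000))           # stand-in for 5000 token IDs
visible = truncate_to_context(long_prompt)
print(len(visible))                       # 4096
print(visible[0])                         # 904 -- the first 904 tokens fell out
```

In practice the tokenizer, not the raw text length, determines how much of a prompt fits, so long documents must be chunked or summarized before they reach the model.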

Capabilities

Multimodal, Function Calling, Tool Use, JSON Mode

Providers (2)

Provider                      | Input (per 1M) | Output (per 1M) | Type
Stability Developer Platform  |                |                 | Serverless
Replicate API                 |                |                 | Serverless

Specifications

Family: StableLM
Released: 2023-04-20
Parameters: 7B
Architecture: Decoder-only
Specialization: General