LLM Reference

Stable LM 7B

stable-lm-7b

About

StableLM 7B, a large language model by Stability AI, is a 7-billion-parameter, decoder-only model that predicts the next token from the preceding context. Built on the GPT-NeoX transformer architecture, it handles long text sequences with a 4096-token context window. Pre-trained on roughly 1.5 trillion tokens, the model generates fluent, human-like text and performs tasks such as summarization and translation. It shares common LLM limitations, including potential bias and the risk of generating inappropriate content. Although StableLM 7B is versatile, it has been succeeded by improved versions such as StableLM-Base-Alpha-7B-v2, which address some shortcomings of the original model.
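The 4096-token context window bounds the prompt and completion together, so callers need to budget tokens before a request. A minimal sketch of that budgeting (the helper name is ours, not an official API):

```python
CONTEXT_WINDOW = 4096  # Stable LM 7B: prompt + completion share this limit


def max_completion_tokens(prompt_tokens: int,
                          context_window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for the completion once the prompt is counted."""
    return max(0, context_window - prompt_tokens)


# A 3,000-token prompt leaves room for at most 1,096 generated tokens.
print(max_completion_tokens(3000))  # 1096
```

Prompts at or above the window leave no room for output, so the helper clamps at zero rather than returning a negative budget.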

Stable LM 7B costs $0.05 per 1M input tokens and $0.25 per 1M output tokens.
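At those rates, per-request cost is a straightforward calculation. A small illustrative helper (the function name is ours; the prices are the listed Replicate rates):

```python
INPUT_PRICE_PER_M = 0.05   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.25  # USD per 1M output tokens


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-1M rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000


# A 3,000-token prompt producing a 500-token completion:
print(f"${estimate_cost(3000, 500):.6f}")  # $0.000275
```

Output tokens cost five times as much as input tokens here, so long completions dominate the bill for most workloads.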

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

Providers(2)

Provider | Input (per 1M) | Output (per 1M) | Type
Stability Developer Platform | | | Serverless
Replicate API | $0.05 | $0.25 | Serverless


Specifications

Family: StableLM
Released: 2023-04-20
Parameters: 7B
Architecture: Decoder Only
Specialization: general
Training: finetuned

Created by

Stability AI
Open-source generative AI models.

London, United Kingdom
Founded 2020