LLM Reference

OLMo 7B

About

OLMo 7B is a fully open large language model from the Allen Institute for Artificial Intelligence (AI2): its model weights, training data, code, and evaluation tools have all been publicly released. It uses a decoder-only transformer architecture with 32 layers, a hidden size of 4096, and 32 attention heads, and incorporates architectural refinements such as SwiGLU activation functions and rotary positional embeddings. Trained on roughly 2.5 trillion tokens from the Dolma dataset, the model performs well at text generation, question answering, and language understanding, with benchmark results often comparable to or exceeding those of similarly sized models. Despite these capabilities, users should be aware of its limitations around factual accuracy, bias, and context length.

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Providers (2)

Provider           Input (per 1M)   Output (per 1M)   Type
Together AI API    $0.20            $0.20             Serverless
Replicate API      —                —                 Serverless
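Given the per-token pricing above, the cost of a request is a simple linear function of token counts. A minimal sketch in Python (the default rates are the Together AI figures from the table; the function name is illustrative):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_rate: float = 0.20, output_rate: float = 0.20) -> float:
    """Estimate request cost in USD, given rates quoted per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion
cost = request_cost_usd(2_000, 500)
```

At equal input and output rates, only the total token count matters; providers with asymmetric pricing would weight the two terms differently.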

Specifications

Family: OLMo
Parameters: 7B
Architecture: Decoder-only
Specialization: General
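The "7B" parameter count can be roughly reproduced from the architecture figures above. A back-of-the-envelope sketch in Python: the layer count and hidden size come from this page, while the vocabulary size (~50K), the SwiGLU MLP width (~11008), and untied embeddings are assumptions not stated here, so treat the result as an estimate only:

```python
def estimate_params(layers: int = 32, hidden: int = 4096,
                    vocab: int = 50304, mlp_hidden: int = 11008) -> int:
    """Rough decoder-only transformer parameter count (biases/norms ignored)."""
    # Input and output token embeddings (assumed untied).
    embed = 2 * vocab * hidden
    # Attention: Q, K, V, and output projections, each hidden x hidden.
    attn = 4 * hidden * hidden
    # SwiGLU MLP: gate, up, and down projections.
    mlp = 3 * hidden * mlp_hidden
    return embed + layers * (attn + mlp)

total = estimate_params()  # roughly 6.9 billion, consistent with the "7B" label
```

The per-layer cost is dominated by the MLP and attention projections; embeddings contribute only a few percent at this scale.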