LLM Reference

OLMo 7B on Together AI

OLMo · Allen Institute for Artificial Intelligence (AI2)

Serverless

Pricing

Type           Price (per 1M tokens)
Input tokens   $0.20
Output tokens  $0.20
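Because input and output tokens are priced identically, estimating the cost of a request is simple arithmetic. A minimal sketch (the function name and example token counts are illustrative, not from the page):

```python
# Estimate request cost at $0.20 per 1M tokens, same rate for input and output.
PRICE_PER_M_TOKENS = 0.20

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M_TOKENS

# A 3,000-token prompt with a 1,000-token completion:
print(f"${estimate_cost(3_000, 1_000):.4f}")  # prints "$0.0008"
```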

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About OLMo 7B

OLMo 7B is a large language model from the Allen Institute for Artificial Intelligence (AI2), released as fully open source: the model weights, training data, code, and evaluation tools are all publicly available. It uses a decoder-only transformer architecture with 32 layers, a hidden size of 4096, and 32 attention heads, and incorporates architectural refinements such as SwiGLU activation functions and rotary positional embeddings. Trained on 2.5 trillion tokens from the Dolma dataset, the model performs well at text generation, question answering, and language understanding, often matching or exceeding similarly sized models. Users should still be aware of its limitations around factual accuracy, bias, and context length.
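The architectural details stated above can be collected into a plain config mapping. This is only a summary of the figures on this page; values the page does not state (e.g. vocabulary size) are deliberately omitted rather than guessed:

```python
# Hyperparameters of OLMo 7B as described on this page.
OLMO_7B_CONFIG = {
    "architecture": "decoder-only transformer",
    "num_layers": 32,
    "hidden_size": 4096,
    "num_attention_heads": 32,
    "activation": "SwiGLU",
    "positional_embedding": "rotary (RoPE)",
    "training_tokens": 2_500_000_000_000,  # 2.5T tokens from the Dolma dataset
}
```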

Get Started
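A minimal sketch of calling the model, assuming Together's OpenAI-compatible chat-completions endpoint; the model ID string is an assumption, so check the Together dashboard for the exact name:

```python
import json
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request for the OLMo 7B model."""
    payload = {
        "model": "allenai/OLMo-7B-Instruct",  # assumed model ID
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send the request (requires a valid API key and network access):
# resp = urllib.request.urlopen(build_request("YOUR_API_KEY", "Explain SwiGLU briefly."))
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
```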

Model Specs

Released       2024-02-01
Parameters     7B
Architecture   Decoder-only
