LLM Reference

OLMo 7B

About

OLMo 7B is a large language model from the Allen Institute for Artificial Intelligence (AI2), notable for being fully open: its model weights, training data, code, and evaluation tools have all been publicly released. It uses a decoder-only transformer architecture with 32 layers, a hidden size of 4096, and 32 attention heads, and incorporates architectural refinements such as SwiGLU activation functions and rotary positional embeddings. Trained on 2.5 trillion tokens from the Dolma dataset, the model handles text generation, question answering, and language understanding, with scores often comparable to or exceeding those of similarly sized models. As with any model of this scale, users should keep its limitations in mind, particularly around factual accuracy, bias, and context length.
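
Because the weights are published on the Hugging Face Hub, a quick way to try the model is through the Transformers library. Below is a minimal sketch, assuming the allenai/OLMo-7B-hf checkpoint (supported natively in transformers >= 4.40; the original allenai/OLMo-7B release instead requires the hf_olmo package) and a GPU with enough memory for fp16 weights:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumption: the HF-native checkpoint; swap the ID for other OLMo variants.
    model_id = "allenai/OLMo-7B-hf"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # roughly 14 GB of weights at fp16 for a 7B model
        device_map="auto",          # requires the accelerate package
    )

    prompt = "Language models are"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))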

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

Providers (2)

Provider        Input (per 1M tokens)   Output (per 1M tokens)   Type
Together AI     $0.20                   $0.20                    Serverless
Replicate API   n/a                     n/a                      Serverless
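
With per-token pricing like the Together AI rate above, the cost of a request is simple arithmetic: token count times the per-token rate. A small sketch, with the rate hard-coded from the table (rates change, so treat this as illustrative):

    # Together AI rate from the table above: $0.20 per 1M tokens, both directions.
    RATE_PER_TOKEN = 0.20 / 1_000_000  # USD

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Estimated USD cost of one request at the listed serverless rate."""
        return (input_tokens + output_tokens) * RATE_PER_TOKEN

    # Example: a 100,000-token prompt with a 2,000-token completion.
    print(f"${request_cost(100_000, 2_000):.4f}")  # $0.0204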

Benchmark Scores (1)

Benchmark                                   Score   Version   Source
Massive Multitask Language Understanding    62.3    5-shot    Open LLM Leaderboard
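
The Open LLM Leaderboard computes MMLU with EleutherAI's lm-evaluation-harness, so a 5-shot figure like the one above can be reproduced locally. A rough sketch using the harness's Python API (assuming the lm-eval package, version 0.4 or later; exact task definitions and harness versions affect the numbers):

    import lm_eval

    # Assumption: an lm-eval release whose simple_evaluate entry point runs
    # Hugging Face models directly; the leaderboard's MMLU setting is 5-shot.
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=allenai/OLMo-7B-hf,dtype=float16",
        tasks=["mmlu"],
        num_fewshot=5,
    )
    print(results["results"]["mmlu"])  # aggregated MMLU accuracy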

Specifications

Family: OLMo
Released: 2024-02-01
Parameters: 7B
Architecture: Decoder-only
Specialization: general
Training: fine-tuning
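
The headline "7B" follows from the architecture fields above. Here is a back-of-envelope sketch, with two assumptions not taken from this page: a LLaMA-style SwiGLU MLP width of about (8/3)·d and a roughly 50k-token vocabulary:

    # Rough parameter count from the specs: 32 layers, hidden size d = 4096.
    d, layers = 4096, 32
    d_ff = 11_008   # assumed SwiGLU width ~ (8/3)*d, as in LLaMA-style 7B models
    vocab = 50_304  # assumed vocabulary size; OLMo's is in this neighborhood

    attn = 4 * d * d      # Q, K, V and output projections per layer
    mlp = 3 * d * d_ff    # gate, up, and down projections (SwiGLU) per layer
    embed = vocab * d     # token embedding matrix

    total = layers * (attn + mlp) + embed
    print(f"~{total / 1e9:.1f}B parameters")  # ~6.7B, consistent with the "7B" label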

Created by

Allen Institute for AI (AI2)
Advocating for open science and open source

Seattle, Washington, United States
Founded 2014