LLM Reference

OLMo 1.7 7B

About

OLMo 1.7-7B is an open large language model developed by the Allen Institute for AI (AI2). It is a decoder-only transformer with 7 billion parameters, trained on a diverse dataset of approximately 2.3 trillion tokens. The model handles tasks such as text generation, question answering, and general language modeling. With a strong emphasis on transparency, the OLMo project releases the model weights, training code, and evaluation tools. It performs competitively across common benchmarks, though, like other base models, it may generate biased content without proper safeguards, and its performance can vary on complex tasks.
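For reference, a minimal text-generation sketch using the Hugging Face Transformers library. The checkpoint ID allenai/OLMo-1.7-7B-hf and the sampling parameters are assumptions to verify against the current model listing:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face hub ID for the Transformers-native OLMo 1.7 checkpoint;
# verify the exact name (native OLMo support requires transformers >= 4.40).
model_name = "allenai/OLMo-1.7-7B-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain causal-LM generation: OLMo 1.7-7B is a base model, not chat-tuned,
# so prompt it as a text-completion model.
inputs = tokenizer("Language modeling is ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))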

Capabilities

Multimodal
Function Calling
Tool Use
JSON Mode

Specifications

Family: OLMo
Parameters: 7B
Architecture: Decoder-only
Specialization: General