LLM Reference

OLMo 7B Twin-2T on Together AI

OLMo · Allen Institute for Artificial Intelligence (AI2)

Serverless

Pricing

Type            Price (per 1M tokens)
Input tokens    $0.20
Output tokens   $0.20
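
For example, a request that consumes 10,000 input tokens and generates 2,000 output tokens would cost (10,000 + 2,000) × $0.20 / 1,000,000 = $0.0024.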

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About OLMo 7B Twin-2T

OLMo 7B Twin-2T is an open-source large language model built on a decoder-only transformer architecture with modifications for training stability and performance: non-parametric layer normalization, SwiGLU activation functions, and rotary positional embeddings (RoPE) for sequence handling. The model comprises 32 layers with 32 attention heads, was trained on approximately 2 trillion tokens, and supports a context length of 2048 tokens. It is notable for its transparency: all of its training data, code, and evaluations are publicly available, promoting reproducible and collaborative research. The model performs well across a range of NLP tasks and can be fine-tuned, and its developers advocate responsible use to mitigate risks of bias and inaccuracy.
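
Because the weights, data, and code are openly released, the model can also be loaded locally for inspection or fine-tuning. Below is a minimal sketch using Hugging Face transformers; the checkpoint ID is assumed to be the transformers-compatible variant AI2 publishes (verify it against the allenai Hugging Face organization), and it presumes a machine with enough memory for a 7B-parameter model.

```python
# Minimal sketch: load the openly released OLMo 7B Twin-2T weights
# locally with Hugging Face transformers. The checkpoint ID below is
# assumed to be AI2's transformers-compatible variant; verify it on
# the allenai Hugging Face organization before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B-Twin-2T-hf"  # assumed checkpoint ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~14 GB of weights in bf16
)

# OLMo is a base model, so prompt it with plain text to continue.
inputs = tokenizer("Language modeling is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```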

Get Started
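
A minimal sketch of querying the serverless endpoint with the Together Python SDK (pip install together). The model ID string is an assumption; confirm the exact identifier in the Together model catalog.

```python
# Minimal sketch: query OLMo 7B Twin-2T on Together AI's serverless
# endpoint. Requires the `together` package and a TOGETHER_API_KEY
# environment variable. The model ID below is an assumption; check
# the Together model catalog for the exact string.
import os
from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

# OLMo 7B Twin-2T is a base (non-chat) model, so use the plain
# completions endpoint rather than chat completions.
response = client.completions.create(
    model="allenai/OLMo-7B-Twin-2T",  # assumed model ID
    prompt="The history of open-source language models begins",
    max_tokens=128,
    temperature=0.7,
)
print(response.choices[0].text)
```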

Model Specs

Released: 2024-02-01
Parameters: 7B
Architecture: Decoder-only
