LLM Reference

Trinity-Large-Preview on OpenRouter

Trinity · Arcee AI

Serverless · Open Source

Compare Trinity-Large-Preview Across Providers

Provider   | Input (per 1M) | Output (per 1M)
OpenRouter | Free           | Free
Arcee AI   |                |

Pricing

Type          | Price (per 1M)
Input tokens  | Free
Output tokens | Free

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
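For the function-calling and tool-use capabilities, OpenRouter accepts the OpenAI-style `tools` schema in the request body. A minimal sketch of such a request — the `get_weather` tool is hypothetical, and the model slug `arcee-ai/trinity-large-preview` is an assumption (check the model page for the exact ID):

```python
# Hypothetical weather tool in the OpenAI-style function-calling schema
# that tool-capable models accept via OpenRouter.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

request_body = {
    "model": "arcee-ai/trinity-large-preview",  # assumed slug
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
```

When the model decides to call the tool, the response contains a `tool_calls` entry with JSON arguments matching the declared parameter schema; your code executes the function and sends the result back in a `tool` role message.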

About Trinity-Large-Preview

A 400B-parameter sparse Mixture-of-Experts (MoE) instruct model with 13B active parameters per token, served with a 128K context window through an 8-bit quantized API. Trained on 20T tokens, it is production-ready for agentic and tool-use applications and is the predecessor to Trinity-Large-Thinking. Available free on OpenRouter.

Get Started
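OpenRouter serves models through an OpenAI-compatible chat completions endpoint, so getting started is a single authenticated POST. A minimal sketch using only the standard library — the model slug `arcee-ai/trinity-large-preview` is an assumption, so confirm it on the model page before use:

```python
import json
import os
import urllib.request

# Assumed model slug -- confirm the exact ID on the OpenRouter model page.
MODEL = "arcee-ai/trinity-large-preview"


def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }


def send(payload: dict) -> dict:
    """POST the payload to OpenRouter (needs OPENROUTER_API_KEY set)."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_request("Summarize mixture-of-experts routing in two sentences.")
# send(payload) would return the completion; not called here to stay offline.
```

The same payload shape works with any OpenAI-compatible client library by pointing its base URL at `https://openrouter.ai/api/v1`.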

Model Specs

Released     | 2025-02-01
Parameters   | 400B
Context      | 128K
Architecture | Sparse Mixture of Experts (MoE)