# Compare Trinity-Large-Preview Across Providers
| Provider | Input (per 1M) | Output (per 1M) |
|---|---|---|
| OpenRouter | Free | Free |
| Arcee AI | — | — |
## Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | Free |
| Output tokens | Free |
## Capabilities
Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution
## About Trinity-Large-Preview
A 400B-parameter sparse Mixture-of-Experts (MoE) instruct model with 13B active parameters per token, trained on 20T tokens and served with a 128K context window through an 8-bit quantized API. It is production-ready for agentic and tool-use applications, is the predecessor to Trinity-Large-Thinking, and is available free on OpenRouter.
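Since the model targets agentic and tool-use workloads and is served through OpenRouter, a request can be sketched against OpenRouter's OpenAI-compatible chat completions endpoint. This is a minimal sketch: the model slug `arcee-ai/trinity-large-preview`, the `OPENROUTER_API_KEY` environment variable, and the `get_weather` tool are all assumptions for illustration, not confirmed by this page.

```python
# Sketch: calling Trinity-Large-Preview via OpenRouter's OpenAI-compatible
# chat completions API, with one tool attached (function calling / tool use).
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "arcee-ai/trinity-large-preview"  # assumed slug, verify on OpenRouter


def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload with a hypothetical tool."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool for illustration
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }


def send(payload: dict) -> dict:
    """POST the payload; requires a (free) OpenRouter API key."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_request("What's the weather in Paris?")
# send(payload) performs the network call; the payload itself needs no key.
```

If the model decides to call the tool, the response's first choice carries a `tool_calls` entry instead of plain text, following the standard OpenAI-compatible schema.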
## Model Specs
| Spec | Value |
|---|---|
| Released | 2025-02-01 |
| Parameters | 400B |
| Context | 128K |
| Architecture | Sparse Mixture of Experts (MoE) |