LLM Reference

Trinity-Large-Preview on Arcee AI

Serverless · Open Source

Get Started with Trinity-Large-Preview on Arcee AI

Arcee AI, a custom model fine-tuning and inference API platform, offers access to Trinity-Large-Preview with a 128K context window.

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About Trinity-Large-Preview

A 400B-parameter sparse Mixture-of-Experts (MoE) instruct model with 13B active parameters per token, served at 128K context through an 8-bit quantized API. Trained on 20T tokens, it is production-ready for agentic and tool-use applications and is the predecessor to Trinity-Large-Thinking. Also available free on OpenRouter.
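Since the model advertises function calling and tool use over an inference API, a request typically follows the OpenAI-compatible chat-completions shape, where tools are declared as JSON Schema function definitions. The sketch below only builds such a request payload; the model identifier, endpoint URL, and the `get_weather` tool are illustrative assumptions, not values confirmed by this page — check the Arcee AI or OpenRouter catalog for the exact slug.

```python
import json

# Assumed values for illustration only; verify against the provider's catalog.
MODEL_ID = "arcee-ai/trinity-large-preview"
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"


def build_tool_call_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat request that exposes one tool."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool, shown only to illustrate the schema.
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }


payload = build_tool_call_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

This payload would then be POSTed to the endpoint with an API key; if the model elects to call the tool, the response carries a `tool_calls` entry whose arguments the client executes and feeds back as a `tool` message.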

Model Specs

Released: 2025-02-01
Parameters: 400B
Context: 128K
Architecture: Sparse Mixture of Experts (MoE)