LLM Reference

SubQ 1M-Preview on SubQ API

SubQ · Subquadratic

Serverless

Why use SubQ 1M-Preview on SubQ API?

SubQ API offers SubQ 1M-Preview with competitive pricing. Subquadratic is a frontier AI research and infrastructure company launched in May 2026 with $29M in seed funding.

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

About SubQ 1M-Preview

SubQ 1M-Preview is Subquadratic's first large language model, built on a fully sub-quadratic sparse-attention architecture whose compute scales linearly with context length (O(n), versus the O(n²) of standard attention). It supports a production context window of 1M tokens, with the architecture tested to 12M. It scores 81.8% on SWE-Bench Verified, 95.0% on RULER @ 128K, and 65.9% on MRCR v2 (8-needle, 1M). Subquadratic claims it is 50x faster and 50x cheaper than leading frontier models at 1M context length. The model is available via an OpenAI-compatible API with streaming and tool-use support. It is proprietary, not open source; fine-tuning for customer-specific use cases is described as a future capability.
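To see why the O(n) vs. O(n²) distinction matters at long context, the sketch below compares the two growth curves. The fixed attention budget `k` is an illustrative assumption, not a published detail of the architecture, and the unit costs are arbitrary; only the scaling trend is meaningful, so the ratios here should not be read as the vendor's 50x figure.

```python
# Illustrative sketch (not vendor code): compares the growth of standard
# O(n^2) attention cost with a linear O(n) cost as context length grows.

def quadratic_cost(n: int) -> int:
    """Pairwise token interactions in standard full attention."""
    return n * n

def linear_cost(n: int, k: int = 1024) -> int:
    """Sub-quadratic attention modeled as each token attending to a
    fixed budget of k tokens (k = 1024 is an assumed value)."""
    return n * k

for n in (128_000, 1_000_000):
    ratio = quadratic_cost(n) / linear_cost(n)
    print(f"context={n:>9,}  quadratic/linear cost ratio ~ {ratio:,.0f}x")
```

At 1M tokens the quadratic term dominates by orders of magnitude, which is why linear-scaling attention can undercut full attention on both latency and price; the real-world speedup depends on constants the sketch ignores.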

FAQ

What is the context window for SubQ 1M-Preview on SubQ API?

SubQ 1M-Preview supports a 1,000,000 token context window on SubQ API.

Who created SubQ 1M-Preview?

SubQ 1M-Preview was created by Subquadratic as part of the SubQ model family.

Is SubQ 1M-Preview open source?

SubQ 1M-Preview is not open source; the model is proprietary.

Get Started
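Since the model is served through an OpenAI-compatible API, a request body follows the standard chat-completions shape. The sketch below only assembles that body; the base URL and model id are assumptions for illustration, not confirmed values, and no network call is made. Check the provider's documentation for the real endpoint and model string.

```python
# Hedged sketch: builds an OpenAI-compatible chat-completions request for
# SubQ 1M-Preview. BASE_URL and MODEL_ID are hypothetical placeholders.
import json

BASE_URL = "https://api.subquadratic.example/v1"  # hypothetical endpoint
MODEL_ID = "subq-1m-preview"                      # hypothetical model id

def build_chat_request(prompt: str, stream: bool = True) -> dict:
    """Assemble the JSON body an OpenAI-compatible server expects."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # the API supports streamed responses
    }

body = build_chat_request("Summarize this repository's architecture.")
print(json.dumps(body, indent=2))
```

With a real key and endpoint, this body would be POSTed to `{BASE_URL}/chat/completions` using any OpenAI-compatible client.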

Model Specs

Released: 2026-05-05
Context: 1M
Architecture: Decoder-only