LLM Reference

SubQ Models by Subquadratic

Subquadratic · Proprietary
1 model · Released 2026 · Up to 1M context

About

The SubQ model family from Subquadratic Inc. uses a fully sub-quadratic sparse-attention architecture whose compute scales linearly with context length (O(n) rather than the O(n²) of standard attention). Designed for long-context reasoning, the family targets 1M-token context windows at significantly lower compute cost than standard transformer-based models.
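The exact sparse-attention scheme SubQ uses is not specified here, but the linear-scaling idea can be illustrated with a toy sliding-window attention, where each query attends only to a fixed window of recent keys, so cost grows as O(n·w) instead of O(n²). This is a hypothetical sketch for intuition, not Subquadratic's actual mechanism; the function name and window size are illustrative assumptions.

```python
import numpy as np

def sliding_window_attention(q, k, v, window=4):
    """Toy causal local attention: query i attends only to the `window`
    most recent positions, so total cost is O(n * window), not O(n^2).
    Illustrative only -- not SubQ's published mechanism."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo = max(0, i - window + 1)            # start of the local window
        scores = q[i] @ k[lo:i + 1].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:i + 1]
    return out
```

With `window=1` each position attends only to itself, so the output equals `v`; growing the window trades compute for a wider receptive field, and the per-token cost stays constant as the sequence length grows.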

Specifications (1 model)

SubQ model specifications comparison

Model           | Released | Context | Reasoning | Fn Calling | Tool Use
SubQ 1M-Preview | 2026-05  | 1M      | Yes       | Yes        | Yes

Available From (1 provider)

Frequently Asked Questions

What is SubQ used for?
SubQ is suited to reasoning, agent workflows, and tool use; the family description and the listed model capabilities point to those workloads as the best fit.
How does SubQ compare to Claude 3?
SubQ by Subquadratic is strongest where you need long-context reasoning, while Claude 3 by Anthropic is the closest comparable family to check for vision and multimodal work. SubQ has one listed variant with up to a 1M-token context, while Claude 3 reaches up to 200K, so compare the specification and pricing tables before choosing a production model.
Which SubQ model should I use?
If price is the main constraint, check the pricing table first, since provider pricing for SubQ is incomplete in the local data. For the most capable current choice, evaluate SubQ 1M-Preview, which offers a 1M-token context along with reasoning, tool use, and function calling.

Models (1)