Qwen3.5-Omni Models by Alibaba
Alibaba · Proprietary
2 models · Released 2026 · Up to 262K context · From $0.1/1M input tokens
About
Qwen3.5-Omni is Alibaba's native omnimodal model family released March 30, 2026, capable of processing text, images, audio, and video simultaneously and generating text or speech responses.
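Mixed-modality requests like the ones described above are commonly expressed in the OpenAI-compatible chat format that many providers expose. A minimal sketch of assembling such a payload follows; the model id, the endpoint shape, and the `build_omni_request` helper are assumptions for illustration, not a confirmed Qwen3.5-Omni API.

```python
# Sketch of a multimodal request payload in the OpenAI-compatible chat
# format. The model id and payload shape are assumptions, not a
# confirmed Qwen3.5-Omni API contract.
def build_omni_request(model: str, text: str, image_url: str) -> dict:
    """Assemble a chat request mixing a text prompt and an image reference."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_omni_request(
    "qwen3.5-omni-flash",
    "Describe this chart.",
    "https://example.com/chart.png",
)
```

The same content-parts pattern extends to audio and video inputs where a provider supports them.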
Specifications (2 models)
| Model | Released | Context | Vision | Multimodal | Reasoning | Fn Calling | Tool Use | Structured Outputs |
|---|---|---|---|---|---|---|---|---|
| Qwen3.5-Omni Plus | 2026-03 | 262K | Yes | Yes | Yes | Yes | Yes | Yes |
| Qwen3.5-Omni Flash | 2026-03 | 262K | Yes | Yes | No | Yes | Yes | Yes |
Available From (1 provider)
Pricing
| Model | Provider | Input / 1M | Output / 1M | Type |
|---|---|---|---|---|
| Qwen3.5-Omni Flash | Alibaba Cloud PAI-EAS | $0.1 | $0.8 | Serverless |
| Qwen3.5-Omni Plus | Alibaba Cloud PAI-EAS | $0.4 | $4.8 | Serverless |
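Since the table quotes prices per 1M tokens, per-request cost is straightforward arithmetic. A minimal estimator using the listed rates (model keys are illustrative lowercase slugs):

```python
# Cost estimator based on the pricing table above.
# Prices are USD per 1M tokens; keys are illustrative model slugs.
PRICES = {
    "qwen3.5-omni-flash": {"input": 0.1, "output": 0.8},
    "qwen3.5-omni-plus": {"input": 0.4, "output": 4.8},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 50K-token multimodal prompt with a 2K-token reply on Flash
cost = estimate_cost("qwen3.5-omni-flash", 50_000, 2_000)  # ≈ $0.0066
```

At these rates, Plus costs roughly 4x Flash on input and 6x on output for identical traffic.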
Frequently Asked Questions
- What is Qwen3.5-Omni used for?
- Qwen3.5-Omni is suited to vision and multimodal work, reasoning, and agentic workflows with tool use. The family description and the listed model capabilities point to those workloads as the best fit.
- How does Qwen3.5-Omni compare to Tongyi DeepResearch?
- Qwen3.5-Omni by Alibaba is strongest where you need vision and multimodal work; Tongyi DeepResearch, also by Alibaba, is the closest related family to check when selecting an adjacent model. Qwen3.5-Omni has 2 listed variants and reaches up to 262K context, while Tongyi DeepResearch reaches up to 131K, so compare the specs and pricing tables before committing to a production model.
- Which Qwen3.5-Omni model should I use?
- For the lowest listed input price, start with Qwen3.5-Omni Flash through Alibaba Cloud PAI-EAS at $0.1/1M input tokens. For the most capable choice, evaluate Qwen3.5-Omni Plus, which pairs 262K context with reasoning, tool use, function calling, structured outputs, and multimodal inputs.
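The selection rule above can be sketched as a tiny helper: take the cheapest listed model that satisfies the reasoning requirement. Capability flags and input prices come from the tables above; the model slugs are illustrative.

```python
# Toy selector mirroring the FAQ guidance: Flash for the lowest input
# price, Plus when reasoning is required. Flags follow the specifications
# table; the model slugs are illustrative.
MODELS = {
    "qwen3.5-omni-flash": {"reasoning": False, "input_price": 0.1},
    "qwen3.5-omni-plus": {"reasoning": True, "input_price": 0.4},
}

def pick_model(needs_reasoning: bool) -> str:
    """Pick the cheapest listed model that satisfies the reasoning need."""
    candidates = [name for name, caps in MODELS.items()
                  if caps["reasoning"] or not needs_reasoning]
    return min(candidates, key=lambda name: MODELS[name]["input_price"])
```

With these two entries, `pick_model(False)` selects Flash and `pick_model(True)` selects Plus; a real deployment would also weigh output pricing and latency.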


