LLM Reference

Qwen3.5-Omni Models by Alibaba

Alibaba · Proprietary
2 models · 2026 · Up to 262K context · From $0.1/1M input tokens

About

Qwen3.5-Omni is Alibaba's native omnimodal model family released March 30, 2026, capable of processing text, images, audio, and video simultaneously and generating text or speech responses.

Specifications (2 models)

Qwen3.5-Omni model specifications comparison

| Model | Released | Context | Vision | Multimodal | Reasoning | Fn Calling | Tool Use | Structured Outputs |
|---|---|---|---|---|---|---|---|---|
| Qwen3.5-Omni Plus | 2026-03 | 262K | Yes | Yes | Yes | Yes | Yes | Yes |
| Qwen3.5-Omni Flash | 2026-03 | 262K | Yes | Yes | No | Yes | Yes | Yes |

Available From (1 provider)

Pricing

Qwen3.5-Omni model pricing by provider

| Model | Provider | Input / 1M | Output / 1M | Type |
|---|---|---|---|---|
| Qwen3.5-Omni Flash | Alibaba Cloud PAI-EAS | $0.1 | $0.8 | Serverless |
| Qwen3.5-Omni Plus | Alibaba Cloud PAI-EAS | $0.4 | $4.8 | Serverless |
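The per-1M-token prices above translate directly into per-request cost estimates. The sketch below is a minimal example; the `PRICES` dict mirrors the pricing table, and the lowercase model keys are illustrative, not official API identifiers. Verify rates against the provider's current rate card before budgeting.

```python
# Per-1M-token prices (USD) from the pricing table above.
# Keys are illustrative labels, not official API model names.
PRICES = {
    "qwen3.5-omni-flash": {"input": 0.1, "output": 0.8},
    "qwen3.5-omni-plus": {"input": 0.4, "output": 4.8},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: 100K input + 10K output tokens on Flash.
cost = estimate_cost("qwen3.5-omni-flash", 100_000, 10_000)  # 0.018 USD
```

At these rates, Plus output tokens cost 6x Flash output tokens, so output-heavy workloads are where the model choice matters most.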

Frequently Asked Questions

What is Qwen3.5-Omni used for?
Qwen3.5-Omni is suited to vision and multimodal workloads, reasoning, and agent workflows with tool use. The family description and the listed model capabilities point to those workloads as the best fit.
How does Qwen3.5-Omni compare to Tongyi DeepResearch?
Qwen3.5-Omni by Alibaba is strongest where you need vision and multimodal capability, while Tongyi DeepResearch, also by Alibaba, is the closest related family to evaluate as an alternative. Qwen3.5-Omni has 2 listed variants and reaches up to 262K context, while Tongyi DeepResearch reaches up to 131K, so compare the specs and pricing tables before choosing a production model.
Which Qwen3.5-Omni model should I use?
For the lowest listed input price, start with Qwen3.5-Omni Flash through Alibaba Cloud PAI-EAS at $0.1/1M input tokens. For the most capable choice, evaluate Qwen3.5-Omni Plus, which adds reasoning alongside 262K context, tool use, function calling, structured outputs, and multimodal inputs.
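The trade-off above reduces to one question: does the workload need reasoning? A minimal chooser over the listed capabilities might look like the sketch below; the `CAPS` table is copied from the specifications table, and the selection rule (cheapest model that satisfies the requirement) is an assumption, not an official recommendation.

```python
# Capability and price data from the tables above (input price in USD/1M tokens).
CAPS = {
    "Qwen3.5-Omni Plus": {"reasoning": True, "input_price": 0.4},
    "Qwen3.5-Omni Flash": {"reasoning": False, "input_price": 0.1},
}

def pick_model(needs_reasoning: bool) -> str:
    """Return the cheapest listed model that meets the reasoning requirement."""
    candidates = [
        name for name, caps in CAPS.items()
        if caps["reasoning"] or not needs_reasoning
    ]
    return min(candidates, key=lambda name: CAPS[name]["input_price"])
```

With this rule, reasoning workloads land on Plus and everything else defaults to the cheaper Flash.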

Models (2)