The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts (MoE) design for higher inference efficiency. Compared with the Qwen3 series, these models deliver a leap forward on both pure-text and multimodal tasks, offering fast response times while balancing inference speed against overall performance.
Modalities:
Input Price: $0.065 per 1M tokens (35% off)
Output Price: $0.26 per 1M tokens (35% off)
Context: 1M tokens
Weekly Tokens: 112B
Released: Feb 25, 2026
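Using the listed per-token prices, a minimal sketch of estimating the cost of a single request. The function name and the example token counts are illustrative, not part of any official API; only the two per-1M prices come from this page.

```python
# Listed discounted prices from this page (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.065
OUTPUT_PRICE_PER_M = 0.26

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 50k-token prompt with a 2k-token completion.
cost = estimate_cost(50_000, 2_000)
print(f"${cost:.6f}")  # → $0.003770
```

Note how output tokens dominate the bill at these rates: each output token costs four times as much as an input token, so long completions matter more than long prompts.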
