
Qwen3.5-122B-A10B

Total Parameters

122B

Context Length

262,144 tokens (256K)

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

24 Feb 2026

Knowledge Cutoff

-

Technical Specifications

Active Parameters

10.0B

Number of Experts

256

Active Experts

9

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

3072

Number of Layers

48

Attention Heads

32

Key-Value Heads

2

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

RoPE (Rotary Position Embedding)
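
The sparse routing implied by these numbers (256 experts, 9 active per token, hidden dimension 3072) can be sketched as a standard top-k softmax router. The snippet below is a minimal, illustrative sketch in PyTorch, not the released implementation; the class and variable names are hypothetical, and the exact routing formulation (e.g., softmax before vs. after top-k, shared experts, load-balancing losses) is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative values taken from the spec table above.
HIDDEN_DIM = 3072   # hidden dimension size
NUM_EXPERTS = 256   # number of experts
TOP_K = 9           # active experts per token

class TopKRouter(nn.Module):
    """Minimal top-k softmax router (a sketch, not the model's actual code)."""

    def __init__(self, hidden_dim: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden_dim, num_experts, bias=False)

    def forward(self, x: torch.Tensor):
        # x: (tokens, hidden_dim) -> router logits over all experts
        logits = self.gate(x)                              # (tokens, num_experts)
        weights, indices = torch.topk(logits, self.top_k)  # select 9 experts per token
        weights = F.softmax(weights, dim=-1)               # normalize over the selected experts
        return weights, indices

router = TopKRouter(HIDDEN_DIM, NUM_EXPERTS, TOP_K)
tokens = torch.randn(4, HIDDEN_DIM)
w, idx = router(tokens)
print(w.shape, idx.shape)  # torch.Size([4, 9]) torch.Size([4, 9])
```

Because only 9 of the 256 expert feed-forward blocks run for any given token, roughly 10B of the 122B total parameters are exercised per forward pass, which is what the "A10B" suffix denotes.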

Qwen3.5-122B-A10B

Qwen3.5-122B-A10B is Alibaba Cloud's mid-tier multimodal foundation model, released in February 2026. With 122B total parameters, of which 10B are activated per token through a Mixture-of-Experts architecture (256 experts), it balances high performance with computational efficiency. It achieves strong scores on MMLU-Pro (86.1%), GPQA Diamond (85.5%), SWE-bench Verified (72.4%), and Terminal-Bench 2.0 (41.6%). The model offers unified vision-language capabilities and a native 262K-token context (extensible to 1M), and excels across reasoning, coding, agentic workflows, and multilingual tasks.

About Qwen 3.5

Qwen 3.5 is Alibaba Cloud's latest-generation foundation model family, released in February 2026. It represents a significant step forward, integrating advances in multimodal learning (a unified vision-language foundation), an efficient hybrid architecture (Gated Delta Networks with sparse Mixture-of-Experts), scalable reinforcement learning across million-agent environments, and global linguistic coverage spanning 201 languages. The family is available under the Apache 2.0 license with open weights.



Evaluation Benchmarks

No evaluation benchmarks are available for Qwen3.5-122B-A10B.

Rankings

Overall Rank

-

Coding Rank

-

GPU Requirements

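VRAM requirements depend mainly on weight precision and context length. The snippet below is a back-of-envelope sketch, not an official calculator: it assumes all 122B weights are resident in GPU memory, a standard KV cache layout derived from the spec table (48 layers, 2 KV heads, head dimension 3072 / 32 = 96), FP16 KV entries, and it ignores activation memory, runtime overhead, and techniques like expert offloading or paged caches. The function name and constants are illustrative.

```python
# Back-of-envelope VRAM estimate for Qwen3.5-122B-A10B (illustrative, not official).
TOTAL_PARAMS = 122e9     # total parameters, MoE experts included
NUM_LAYERS = 48
KV_HEADS = 2             # grouped-query attention: 2 key-value heads
HEAD_DIM = 3072 // 32    # hidden dim / attention heads = 96

def vram_gb(bits_per_weight: float, context_tokens: int, kv_bytes: int = 2) -> float:
    """Approximate VRAM in GB: resident weights + KV cache, ignoring activations/overhead."""
    weight_bytes = TOTAL_PARAMS * bits_per_weight / 8
    # KV cache: keys + values (factor of 2), per layer, per KV head, per token.
    kv_cache_bytes = 2 * NUM_LAYERS * KV_HEADS * HEAD_DIM * context_tokens * kv_bytes
    return (weight_bytes + kv_cache_bytes) / 1e9

for bits, name in [(16, "FP16/BF16"), (8, "INT8"), (4, "INT4")]:
    print(f"{name}: ~{vram_gb(bits, 262_144):.0f} GB at the full 262K context")
```

Note how little the context term contributes: with only 2 KV heads, grouped-query attention keeps the KV cache under ~10 GB even at the full 262K-token context, so weight precision dominates the budget.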