Qwen3.5-4B

Parameters

4B

Context Length

262,144 tokens

Modality

Multimodal

Architecture

Dense

License

Apache 2.0

Release Date

24 Feb 2026

Knowledge Cutoff

-

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

2560

Number of Layers

32

Attention Heads

16

Key-Value Heads

4

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

Rotary Position Embedding (RoPE)
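
Taken together, these values pin down the attention geometry. The sketch below is purely illustrative and derived only from the table above; in particular, the head dimension is assumed to be hidden size divided by head count, which may differ from the actual implementation.

```python
# Illustrative sketch of the attention geometry implied by the table above.
# Derived values are assumptions, not an official Qwen3.5-4B configuration.

hidden_size = 2560
num_layers = 32
num_attention_heads = 16
num_kv_heads = 4  # grouped-query attention

# Assumed head dimension (the real model may use a fixed head_dim such as 128).
head_dim = hidden_size // num_attention_heads               # 160
queries_per_kv_head = num_attention_heads // num_kv_heads   # 4 query heads share one KV head

print(f"head_dim={head_dim}, queries per KV head={queries_per_kv_head}")
```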

Qwen3.5-4B

Qwen3.5-4B is Alibaba Cloud's compact multimodal foundation model with 4B parameters, released in February 2026. It uses a hybrid architecture combining Gated Delta Networks and Gated Attention in an 8×(3×DeltaNet→FFN→1×Attention→FFN) pattern. It posts strong scores on MMLU-Pro (79.1%), GPQA Diamond (76.2%), and the HMMT benchmarks (74%/77%), alongside competitive vision-language results. The model offers unified vision-language capabilities, a 262K-token native context (extensible to 1M), and multi-token prediction training, and it delivers efficient performance across reasoning, coding, multimodal understanding, and multilingual tasks covering 201 languages.
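
Reading the 8×(3×DeltaNet→FFN→1×Attention→FFN) pattern literally, each of the 8 blocks stacks three Gated DeltaNet layers and one full attention layer, each followed by a feed-forward network, which matches the 32 layers listed in the specification. The sketch below only illustrates that ordering; the layer names are placeholders, not actual module classes.

```python
# Illustrative sketch of the 8x(3xDeltaNet->FFN -> 1xAttention->FFN) pattern
# described above. Names are placeholders, not real module classes.

NUM_BLOCKS = 8

layers = []
for block in range(NUM_BLOCKS):
    # Three linear-attention (Gated DeltaNet) layers, each followed by an FFN...
    for _ in range(3):
        layers.append(("gated_deltanet", "ffn"))
    # ...then one full gated-attention layer with its FFN.
    layers.append(("gated_attention", "ffn"))

# 8 blocks x 4 token-mixing layers = 32 layers, matching the spec above.
assert len(layers) == 32
print(layers[:4])
```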

About Qwen 3.5

Qwen 3.5 is Alibaba Cloud's latest-generation foundation model family, released in February 2026. It represents a significant step forward, integrating multimodal learning (a unified vision-language foundation), an efficient hybrid architecture (Gated Delta Networks with sparse Mixture-of-Experts), scalable reinforcement learning across million-agent environments, and global linguistic coverage spanning 201 languages. The family is released under the Apache 2.0 license with open weights.
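
As with earlier open-weight Qwen releases, loading would typically go through the Hugging Face transformers library. The snippet below is a sketch under that assumption: the repository id Qwen/Qwen3.5-4B is hypothetical, text-only generation is shown, and a transformers version that supports this architecture is required.

```python
# Sketch only: the repo id "Qwen/Qwen3.5-4B" is assumed, and a transformers
# release that supports this architecture is required. Text-only usage shown;
# multimodal inputs would go through the corresponding processor instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-4B"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

prompt = "Briefly explain grouped-query attention."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```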

Evaluation Benchmarks

No evaluation benchmarks are available for Qwen3.5-4B.

Rankings

Overall Rank

-

Coding Rank

-

GPU Requirements

The original page includes an interactive calculator that estimates the VRAM required from the chosen weight quantization and context size (1K to 256K tokens, defaulting to 1,024) and suggests suitable GPUs.
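
As a rough stand-in for that calculator, the sketch below estimates VRAM from weight precision and context length using the figures in the specification. The formula (weights plus KV cache plus a fixed overhead) and the assumed head dimension are simplifications, not the site's exact method.

```python
# Rough VRAM estimate for a 4B-parameter dense model, as a stand-in for the
# page's interactive calculator. The formula and constants are assumptions.

def estimate_vram_gb(params_b=4.0, bytes_per_weight=2.0, context_len=1024,
                     num_layers=32, num_kv_heads=4, head_dim=160,
                     kv_bytes=2, overhead_gb=1.0):
    """Estimate VRAM in GiB as weights + KV cache + fixed overhead."""
    weights_gb = params_b * 1e9 * bytes_per_weight / 1024**3
    # KV cache: 2 tensors (K and V) per layer, one vector per KV head and position.
    kv_cache_gb = (2 * num_layers * num_kv_heads * head_dim
                   * context_len * kv_bytes) / 1024**3
    return weights_gb + kv_cache_gb + overhead_gb

for ctx in (1_024, 131_072, 262_144):
    print(f"context {ctx:>7}: ~{estimate_vram_gb(context_len=ctx):.1f} GiB (FP16 weights)")
```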