Qwen3-4B

Parameters: 4B
Context Length: 32,768 tokens
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 29 Apr 2025
Training Data Cutoff: Mar 2025

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 2560
Number of Layers: 36
Attention Heads: 32
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embeddings: RoPE

Qwen3-4B

Qwen3-4B is a 4-billion parameter dense causal language model developed by Alibaba, belonging to the third generation of the Qwen series. A fundamental innovation in this model is its unified architecture that supports dual-mode operation, allowing for dynamic switching between 'thinking' and 'non-thinking' states. In the thinking mode, the model performs extensive, multi-step logical reasoning similar to chain-of-thought processing, making it effective for complex mathematical problems and intricate code generation. Conversely, the non-thinking mode is optimized for low-latency, direct responses in general conversational contexts, providing an efficient alternative for tasks where depth of reasoning is secondary to speed.
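As a rough illustration of the dual-mode idea, the sketch below shows how a prompt builder might gate chain-of-thought output. In the actual Qwen3 tooling this switch is the `enable_thinking` flag of `tokenizer.apply_chat_template` (with `/think` and `/no_think` soft switches in chat); the function here is a simplified stand-in, not Qwen's real template.

```python
# Hypothetical sketch of thinking vs. non-thinking prompt construction.
# In Qwen3's chat template, non-thinking mode pre-fills an empty <think>
# block so the model skips extended reasoning and answers directly.
def build_prompt(user_msg: str, enable_thinking: bool = True) -> str:
    prompt = f"<|im_start|>user\n{user_msg}<|im_end|>\n<|im_start|>assistant\n"
    if not enable_thinking:
        # Empty reasoning block: the model proceeds straight to the answer.
        prompt += "<think>\n\n</think>\n\n"
    return prompt

print(build_prompt("What is 2+2?", enable_thinking=False))
```

In practice you would pass `enable_thinking=True/False` to `apply_chat_template` rather than templating by hand; this sketch only shows what the switch does to the prompt.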

Technically, the model is built on a transformer architecture with 36 layers and 4.0 billion total parameters. It utilizes Grouped Query Attention (GQA) with 32 attention heads for queries and 8 key-value heads, ensuring high computational throughput during inference. The model employs Rotary Position Embeddings (RoPE) and is natively trained on a 32,768-token context window, which can be extended up to 131,072 tokens using YaRN scaling. This architectural foundation is further refined through a three-stage pre-training pipeline involving 36 trillion tokens across 119 languages, prioritizing a mix of high-quality STEM, coding, and multilingual data to ensure broad-spectrum proficiency.
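The KV-head sharing described above can be sketched numerically. The snippet below, a minimal NumPy illustration assuming the head counts stated for Qwen3-4B (32 query heads, 8 key-value heads; the head dimension of 128 and sequence length are illustrative assumptions), shows how each group of four query heads attends over the same keys and values, which is what shrinks the KV cache relative to full multi-head attention.

```python
import numpy as np

# Assumed layout: 32 query heads share 8 KV heads (group size 4).
N_Q_HEADS, N_KV_HEADS, HEAD_DIM, SEQ = 32, 8, 128, 16
GROUP = N_Q_HEADS // N_KV_HEADS  # 4 query heads per KV head

rng = np.random.default_rng(0)
q = rng.standard_normal((N_Q_HEADS, SEQ, HEAD_DIM))
k = rng.standard_normal((N_KV_HEADS, SEQ, HEAD_DIM))
v = rng.standard_normal((N_KV_HEADS, SEQ, HEAD_DIM))

# Expand KV heads so each group of 4 query heads reads identical K/V;
# only the 8 original KV heads ever need to be cached.
k_exp = np.repeat(k, GROUP, axis=0)
v_exp = np.repeat(v, GROUP, axis=0)

scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(HEAD_DIM)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v_exp
print(out.shape)  # (32, 16, 128)
```

With 8 KV heads instead of 32, the KV cache is a quarter of the multi-head size, which is the main inference-throughput benefit of GQA.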

Qwen3-4B is designed for versatility in deployment, particularly in environments requiring sophisticated reasoning within a compact parameter footprint. Its native support for thinking modes allows it to function as a reasoning engine for complex instruction following and agentic workflows without requiring a separate specialized model. The integration of SwiGLU activations and RMSNorm ensures stable training dynamics, while the inclusion of 'tied embeddings' specifically in the smaller variants like the 4B model helps optimize memory usage. It is highly effective for cross-lingual tasks, tool-based interactions, and structured output generation across a wide variety of domains.
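The memory saving from tied embeddings mentioned above is straightforward to sketch: the output head reuses the input embedding matrix instead of keeping a separate vocab-sized projection. The toy sizes below are illustrative, not Qwen3-4B's real dimensions.

```python
import numpy as np

# Minimal sketch of tied input/output embeddings: one shared matrix
# serves both token lookup and the logit projection, saving
# vocab_size * hidden_size parameters. Toy sizes for illustration.
VOCAB, HIDDEN = 1000, 64
rng = np.random.default_rng(1)
embed = rng.standard_normal((VOCAB, HIDDEN)) * 0.02  # shared weight

token_ids = np.array([3, 17, 256])
hidden_states = embed[token_ids]   # input side: embedding lookup
logits = hidden_states @ embed.T   # output side: same matrix, transposed
print(logits.shape)  # (3, 1000)
```

For a 4B-scale model with a large vocabulary, tying removes one full vocab-by-hidden matrix from the parameter count, which matters most at the small end of the family.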

About Qwen 3

The Alibaba Qwen 3 model family comprises dense and Mixture-of-Experts (MoE) architectures, with parameter counts from 0.6B to 235B. Key innovations include a hybrid reasoning system, offering 'thinking' and 'non-thinking' modes for adaptive processing, and support for extensive context windows, enhancing efficiency and scalability.



Evaluation Benchmarks

No evaluation benchmarks are available for Qwen3-4B.


Model Transparency

Total Score: 76
