Total Parameters
235B
Context Length
262,144
Model Type
Reasoning
Architecture
Mixture of Experts (MoE)
License
Apache 2.0
Release Date
25 Jul 2025
Knowledge Cutoff
-
Activated Parameters
22.0B
Number of Experts
128
Active Experts per Token
8
Attention Structure
Grouped-Query Attention (GQA)
Hidden Dimension
-
Layers
94
Attention Heads
64
Key/Value Heads
4
Activation Function
-
Normalization
-
Position Embedding
Rotary Position Embedding (RoPE)
VRAM Requirements by Quantization Method and Context Size
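No calculator output survived for this section, but the underlying arithmetic is straightforward. Below is a minimal back-of-the-envelope estimator, assuming that model weights dominate VRAM and that the KV cache follows the GQA layout above (94 layers, 4 KV heads; the head dimension of 128 is an assumption, since it is not listed in the spec table). The bits-per-weight figures for the quantization formats are approximate, and the printed numbers are rough estimates rather than measured requirements.

```python
# Rough VRAM estimator for Qwen3-235B-A22B (assumptions noted inline).

TOTAL_PARAMS = 235e9   # total parameters, incl. all 128 experts
N_LAYERS     = 94
N_KV_HEADS   = 4
HEAD_DIM     = 128     # assumption: head dimension is not listed in the spec table

def weights_gib(bits_per_param: float) -> float:
    """All parameters must be resident in memory, even though only ~22B are active per token."""
    return TOTAL_PARAMS * bits_per_param / 8 / 2**30

def kv_cache_gib(context_len: int, bytes_per_value: int = 2) -> float:
    """K and V per layer: 2 * n_kv_heads * head_dim values per token (fp16 = 2 bytes)."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context_len * bytes_per_value / 2**30

# Approximate bits/weight for common formats (FP16 exact; GGUF figures approximate).
for name, bits in [("FP16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    for ctx in (32_768, 262_144):
        total = weights_gib(bits) + kv_cache_gib(ctx)
        print(f"{name:7s} ctx={ctx:>7,}: ~{total:,.0f} GiB")
```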
The Qwen3-235B-A22B-Thinking model is a specialized variant within Alibaba's Qwen3 series of large language models, engineered for complex cognitive tasks requiring advanced reasoning. It is a causal language model designed for logical deduction, strategic planning, and systematic problem-solving. The "Thinking" in its name reflects fine-tuning on datasets that emphasize and reward step-by-step analytical processes. Unlike its general-purpose counterparts in the Qwen3 family, which combine thinking and non-thinking modes, this variant operates exclusively in thinking mode.
Architecturally, Qwen3-235B-A22B-Thinking leverages a Mixture-of-Experts (MoE) design, a cornerstone of the Qwen3 series that allows the model to achieve high performance while managing computational cost. The model has 235 billion total parameters, but any given inference pass activates only about 22 billion of them: each token is routed to 8 of the 128 experts. This selective activation significantly reduces compute and latency compared to traditional dense models, where all parameters are engaged for every token. The model also incorporates Grouped-Query Attention (GQA) with 64 query heads and 4 key/value heads, improving inference speed and memory utilization, and it stacks 94 transformer layers with rotary position embeddings (RoPE).
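To make the routing arithmetic concrete, here is a minimal top-k gating sketch in Python/NumPy. It illustrates the generic MoE routing pattern described above, not Qwen's actual router implementation; the softmax-over-selected-experts detail is an assumption.

```python
import numpy as np

N_EXPERTS, TOP_K = 128, 8  # per the spec table above

def route_token(hidden: np.ndarray, gate_w: np.ndarray):
    """Pick the top-8 of 128 experts for one token and weight their outputs.

    hidden:  (d_model,) token representation
    gate_w:  (d_model, N_EXPERTS) router projection
    """
    logits = hidden @ gate_w                      # (N_EXPERTS,) router scores
    top = np.argsort(logits)[-TOP_K:]             # indices of the 8 chosen experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                      # softmax over the selected experts only
    return top, weights                           # only these 8 expert FFNs run for this token

rng = np.random.default_rng(0)
experts, w = route_token(rng.standard_normal(64), rng.standard_normal((64, 128)))
print(experts, w.round(3))  # 8/128 experts active => ~22B of 235B params engaged
```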
Regarding performance characteristics and use cases, Qwen3-235B-A22B-Thinking is optimized for scenarios demanding deep analysis, such as logical reasoning, mathematics, science, and coding challenges. The model supports a native context length of 262,144 tokens, a substantial increase over previous iterations, making it effective for processing extensive documents and other long-context applications. Reasoning depth can be controlled at generation time; for complex problems, a maximum output length of 81,920 tokens is recommended so the model has room for detailed reasoning traces. Its capabilities extend to multilingual instruction following and tool usage, positioning it for agentic workflows that require sophisticated problem-solving.
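A minimal generation sketch with Hugging Face transformers follows, assuming the Thinking-2507 checkpoint name on the Hub (verify against the model page) and the 81,920-token output budget mentioned above; the sampling values are illustrative, not official recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint ID; confirm on the model's Hugging Face page.
MODEL_ID = "Qwen/Qwen3-235B-A22B-Thinking-2507"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Prove that the sum of two odd integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# 81,920-token budget per the guidance above; sampling settings are illustrative.
outputs = model.generate(
    inputs, max_new_tokens=81920, do_sample=True, temperature=0.6, top_p=0.95
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```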
The Alibaba Qwen3 model family spans dense and Mixture-of-Experts (MoE) architectures, with parameter counts ranging from 0.6B to 235B. Key innovations include a hybrid reasoning system, offering 'thinking' and 'non-thinking' modes for adaptive processing, and support for long context windows.
Rankings apply to local LLMs.
Rank
#9
| Benchmark | Score | Rank |
|---|---|---|
| LiveBench Data Analysis | 0.68 | 🥈 2 |
| LiveBench Mathematics | 0.80 | 🥉 3 |
| LiveBench Coding | 0.66 | 5 |
| LiveBench Reasoning | 0.78 | 5 |
| LiveBench Agentic Coding | 0.13 | 7 |