| Specification | Value |
|---|---|
| Total Parameters | 671B |
| Context Length | 128K |
| Modality | Text |
| Architecture | Mixture of Experts (MoE) |
| License | MIT |
| Release Date | 10 Jan 2026 |
| Training Data Cutoff | - |
| Activated Parameters per Token | 37.0B |
| Number of Experts | - |
| Active Experts | - |
| Attention Structure | Multi-head Latent Attention (MLA) |
| Hidden Dimension Size | - |
| Number of Layers | - |
| Attention Heads | - |
| Key-Value Heads | - |
| Activation Function | - |
| Normalization | - |
| Position Embedding | Rotary Position Embedding (RoPE) |
VRAM Requirements by Quantization Method and Context Size
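The quantization/VRAM table itself is missing from this extract. As a rough guide, the sketch below estimates weight memory alone for a 671B-parameter checkpoint at common quantization widths; the bytes-per-parameter figures are standard assumptions, and real deployments also need headroom for the KV cache (which grows with context length) and runtime buffers.

```python
# Rough VRAM estimate for model weights under different quantization widths.
# A back-of-envelope sketch: actual memory use also depends on the inference
# engine's overhead, activation buffers, and KV-cache layout (all figures
# below are assumptions, not vendor-published requirements).

TOTAL_PARAMS = 671e9  # total MoE parameters; every expert must be resident

BYTES_PER_PARAM = {
    "FP16/BF16": 2.0,
    "FP8": 1.0,
    "INT4 (e.g. GPTQ/AWQ)": 0.5,
}

def weight_vram_gib(params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return params * bytes_per_param / 2**30

for name, bpp in BYTES_PER_PARAM.items():
    print(f"{name:>22}: ~{weight_vram_gib(TOTAL_PARAMS, bpp):,.0f} GiB weights")
```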
DeepSeek-V3.2 Thinking is the reasoning-enhanced variant of DeepSeek-V3.2, optimized for complex problem-solving through chain-of-thought reasoning. Built on the same 671B-parameter MoE architecture with 37B activated parameters per token, the model is fine-tuned to produce detailed reasoning traces before generating final answers. It excels at multi-step logical reasoning, mathematical proofs, algorithmic problem-solving, and tasks requiring explicit step-by-step thinking, and it achieves strong results on reasoning benchmarks: 94.8% on MATH-500 (with reasoning), 85.2% on Codeforces, and 73.4% on AIME. Thinking mode exposes the model's reasoning process, making it well suited to educational applications, research, debugging complex logic, and scenarios where interpretability is crucial. The model supports a 128K context window with strong multilingual reasoning capabilities and is MIT licensed.
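For readers who want to inspect the reasoning trace programmatically, here is a minimal sketch against an OpenAI-compatible chat endpoint. The base URL, model identifier, and `reasoning_content` field follow DeepSeek's public API documentation and are assumptions here; they may not match this exact model variant.

```python
# Minimal sketch: querying a reasoning-enabled model through an
# OpenAI-compatible endpoint. The base URL, model name, and the
# `reasoning_content` field are assumptions based on DeepSeek's public
# API docs and may differ for this specific variant.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                # placeholder
    base_url="https://api.deepseek.com",   # assumed OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed model identifier
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)

msg = resp.choices[0].message
# The chain-of-thought trace is exposed separately from the final answer.
print("Reasoning trace:\n", getattr(msg, "reasoning_content", None))
print("Final answer:\n", msg.content)
```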
DeepSeek-V3 is a Mixture-of-Experts (MoE) language model comprising 671B total parameters with 37B activated per token. Its architecture incorporates Multi-head Latent Attention (MLA) and DeepSeekMoE for efficient inference and training. Innovations include an auxiliary-loss-free load-balancing strategy and a multi-token prediction training objective; the model was pre-trained on 14.8T tokens.
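To make the 671B-total / 37B-activated distinction concrete, the toy sketch below routes each token to its top-k experts, so only a small fraction of the total weights participates in any single forward pass. All dimensions, expert counts, and the value of k are illustrative assumptions, not DeepSeek-V3's actual configuration.

```python
# Toy top-k MoE routing: only k experts' weights run per token, which is
# why "activated" parameters (37B) are far fewer than total (671B).
# All sizes here are illustrative, not DeepSeek-V3's real configuration.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

# Router projection and per-expert FFN weights (one matrix each, for brevity).
router_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and gate-mix their outputs."""
    logits = x @ router_w                          # (tokens, n_experts)
    chosen = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, chosen[t]]
        gates = np.exp(scores - scores.max())
        gates /= gates.sum()                       # softmax over the chosen k
        for gate, e in zip(gates, chosen[t]):
            out[t] += gate * (x[t] @ experts[e])   # only k experts execute
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 64); only 2 of 8 experts ran per token
```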
Ranking: #21
| Benchmark | Score | Rank |
|---|---|---|
| LiveBench Data Analysis | 0.73 | ⭐ 6 |
| LiveBench Mathematics | 0.85 | ⭐ 8 |
| LiveBench Reasoning | 0.77 | 14 |
| LiveBench Agentic Coding | 0.40 | 18 |
| LiveBench Coding | 0.70 | 31 |
| GPQA (Graduate-Level QA) | 0.82 | 42 |