
DeepSeek-V3.2 Thinking

Total Parameters

671B

Context Length

128K

Modality

Text

Architecture

Mixture of Experts (MoE)

License

MIT

Release Date

10 Jan 2026

Training Data Cutoff

Jul 2024

Technical Specifications

Active Parameters (per token)

37.0B

Number of Experts

256

Active Experts

8

Attention Structure

Multi-Head Latent Attention (MLA)

Hidden Dimension

7168

Layers

61

Attention Heads

128

Key-Value Heads

1

Activation Function

SwiGLU

Normalization

RMS Normalization

Positional Embedding

Rotary Position Embedding (RoPE)

DeepSeek-V3.2 Thinking

DeepSeek-V3.2 Thinking is an advanced reasoning-enhanced language model that integrates large-scale reinforcement learning with a massive mixture-of-experts (MoE) architecture. As the reasoning-specialized variant of the V3.2 series, it is engineered to prioritize logical consistency and systematic problem-solving through an explicit chain-of-thought (CoT) process. The model is specifically optimized for complex domains such as mathematics, algorithmic programming, and multi-step agentic workflows, where it generates detailed reasoning traces prior to producing a final response. This transparency into the model's internal logic allows for more reliable verification of complex outputs and supports sophisticated tool-integration scenarios.
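The reasoning trace described above is typically delimited from the final answer in the raw model output. A minimal sketch of separating the two, assuming hypothetical `<think>…</think>` delimiters (the actual token format may differ from this):

```python
def split_reasoning(output, open_tag="<think>", close_tag="</think>"):
    """Split a model response into (reasoning_trace, final_answer).
    The tag names are an assumption for illustration; reasoning-mode
    models commonly wrap the chain-of-thought in delimiter tokens."""
    start = output.find(open_tag)
    end = output.find(close_tag)
    if start == -1 or end == -1:
        # No trace found: treat the whole output as the final answer.
        return "", output.strip()
    trace = output[start + len(open_tag):end].strip()
    answer = output[end + len(close_tag):].strip()
    return trace, answer
```

Separating the trace this way is what enables verification of complex outputs: the trace can be inspected or scored independently of the answer shown to the user.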

Technically, the model utilizes a sparse Mixture-of-Experts (MoE) framework comprising 671 billion total parameters, with 37 billion activated per token to maintain high computational efficiency. A significant architectural advancement in this version is DeepSeek Sparse Attention (DSA), which reduces the computational complexity of the attention mechanism from quadratic to nearly linear. Built on top of Multi-Head Latent Attention (MLA), DSA enables the model to process long-context sequences with substantially lower memory and compute overhead. For reinforcement learning, the model employs Group Relative Policy Optimization (GRPO), which stabilizes training by using group-based baselines in place of a separate critic network.
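The GRPO baseline idea mentioned above can be sketched in a few lines: rewards for a group of completions sampled from the same prompt are normalized against that group's own statistics, so no learned critic is needed. This is an illustrative sketch, not DeepSeek's training code:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: each completion's reward is normalized
    against the mean and std of its own sampling group, replacing the
    value baseline a critic network would otherwise provide."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0   # guard against identical rewards
    return [(r - mean) / std for r in rewards]

# Four completions sampled for one prompt, scored 1.0 / 0.0 / 0.5 / 0.5:
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the advantages sum to zero within each group, above-average completions are reinforced and below-average ones are penalized without any extra network to train.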

DeepSeek-V3.2 Thinking is designed for high-stakes reasoning applications, including scientific research, debugging intricate software logic, and executing autonomous agentic tasks. It supports a 128K context window and introduces a 'thinking with tools' capability, allowing the model to perform interleaved reasoning and API calls. The integration of Multi-Token Prediction (MTP) during training further enhances its internal representations, leading to faster convergence and more robust performance on reasoning-heavy benchmarks. Released under the MIT license, this model provides an open-weight foundation for researchers and developers seeking to deploy frontier-class reasoning capabilities in local or enterprise environments.
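The 'thinking with tools' behavior described above amounts to a loop in which the model alternates between reasoning steps and tool invocations, with each tool result fed back into the context. A hypothetical driver loop (the function names and action format here are assumptions for illustration, not DeepSeek's actual API):

```python
def run_with_tools(model_step, tools, prompt, max_turns=8):
    """Hypothetical interleaved reasoning loop: each step, the model
    emits either a final answer or a tool call; tool results are
    appended to the context before the next reasoning step."""
    context = prompt
    for _ in range(max_turns):
        action = model_step(context)   # ("answer", text) or ("call", name, args)
        if action[0] == "answer":
            return action[1]
        _, name, args = action
        result = tools[name](*args)    # execute the requested tool
        context += f"\n[tool {name} -> {result}]"
    return None                        # gave up after max_turns
```

The `max_turns` cap is a common safeguard in agentic loops so a model that keeps requesting tools cannot run forever.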

About DeepSeek-V3

DeepSeek-V3 is a Mixture-of-Experts (MoE) language model comprising 671B parameters with 37B activated per token. Its architecture incorporates Multi-head Latent Attention and DeepSeekMoE for efficient inference and training. Innovations include an auxiliary-loss-free load balancing strategy and a multi-token prediction objective, trained on 14.8T tokens.
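The "37B activated per token" figure reflects top-k expert routing: every token is scored against all experts, but only the k highest-scoring experts actually run. A toy sketch of this routing (illustrative only, not the DeepSeekMoE implementation):

```python
import math

def moe_forward(x, experts, gate_weights, k=2):
    """Toy top-k MoE layer: score every expert with a linear gate,
    run only the k best, and mix their outputs by softmax weights."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in gate_weights]
    topk = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    exp_scores = [math.exp(scores[i]) for i in topk]
    z = sum(exp_scores)
    out = [0.0] * len(x)
    for i, e in zip(topk, exp_scores):
        y = experts[i](x)              # only the k selected experts execute
        out = [o + (e / z) * y_j for o, y_j in zip(out, y)]
    return out
```

With 256 experts and 8 active as in the spec above, only a small fraction of expert parameters runs per token, which is how a 671B-parameter model keeps per-token compute near the 37B-active level.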



Evaluation Benchmarks

Overall Rank: #18

Benchmark Scores and Ranks

| Category               | Benchmark         | Score | Rank |
|------------------------|-------------------|-------|------|
| Professional Knowledge | MMLU Pro          | 0.85  | 🥇 1 |
|                        |                   | 0.85  | 6    |
|                        |                   | 0.73  | 8    |
|                        |                   | 0.77  | 10   |
| Graduate-Level QA      | GPQA              | 0.82  | 11   |
| Web Development        | WebDev Arena      | 1420  | 12   |
| Agentic Coding         | LiveBench Agentic | 0.40  | 18   |
|                        |                   | 0.65  | 39   |

Rankings

Overall Rank: #18

Coding Rank: #56

GPU Requirements

[Interactive VRAM calculator: choose a weight quantization method and a context size (1k–125k tokens, default 1024) to view the required VRAM and recommended GPUs.]
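Absent the interactive calculator, weight memory can be approximated from parameter count and quantization width. A rough illustrative estimate only (the 1.2 runtime-overhead factor is an assumption, and KV cache for the chosen context size is excluded):

```python
def vram_estimate_gb(total_params_b, bits_per_weight, overhead=1.2):
    """Back-of-envelope VRAM for holding quantized weights only.
    total_params_b: parameter count in billions; bits_per_weight: e.g.
    16 for FP16, 8 or 4 for common quantizations. The overhead factor
    is an assumed fudge for runtime buffers, not a measured value."""
    bytes_total = total_params_b * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# 671B total parameters at 4-bit quantization:
# vram_estimate_gb(671, 4) ≈ 402.6 GB
```

Note that for an MoE model all 671B weights must still be resident even though only 37B are active per token, so quantization, not sparsity, is what reduces the memory footprint.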