

DeepSeek-V3.2 Thinking

Total Parameters: 671B
Context Length: 128K
Modality: Text
Architecture: Mixture of Experts (MoE)
License: MIT
Release Date: 10 Jan 2026
Training Data Cutoff: -

Technical Specifications

Activated Parameters per Token: 37.0B
Number of Experts: -
Active Experts: -
Attention Structure: Multi-Head Latent Attention (MLA)
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding

System Requirements

VRAM requirements for different quantization methods and context sizes; see the GPU Requirements section below.

DeepSeek-V3.2 Thinking

DeepSeek-V3.2 Thinking is the reasoning-enhanced variant of DeepSeek-V3.2, optimized for complex problem-solving through chain-of-thought reasoning. Built on the same 671B-parameter MoE architecture with 37B activated parameters per token, the model is fine-tuned to produce detailed reasoning traces before generating final answers. It excels at multi-step logical reasoning, mathematical proofs, algorithmic problem-solving, and tasks requiring explicit step-by-step thinking, and it achieves strong results on reasoning benchmarks: 94.8% on MATH-500 (with reasoning), 85.2% on Codeforces, and 73.4% on AIME. The thinking mode exposes the model's reasoning process, making it well suited to educational applications, research, debugging of complex logic, and scenarios where interpretability matters. It supports a 128K context window with strong multilingual reasoning capabilities and is released under the MIT license.
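A minimal sketch of how such a thinking model is typically queried through an OpenAI-compatible chat API. The endpoint URL, the "deepseek-reasoner" model identifier, and the separate reasoning field follow DeepSeek's published API conventions but are assumptions here; verify them against the current documentation.

    # Minimal sketch (assumed: OpenAI-compatible endpoint, "deepseek-reasoner"
    # model name, and a separate `reasoning_content` field on the message).
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_API_KEY",               # placeholder
        base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
    )

    response = client.chat.completions.create(
        model="deepseek-reasoner",            # assumed identifier for the thinking variant
        messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    )

    message = response.choices[0].message
    # The chain-of-thought trace is returned separately from the final answer.
    print("Reasoning trace:\n", getattr(message, "reasoning_content", None))
    print("Final answer:\n", message.content)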

About DeepSeek-V3

DeepSeek-V3 is a Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated per token. Its architecture incorporates Multi-head Latent Attention (MLA) and DeepSeekMoE for efficient inference and training. The model was pre-trained on 14.8T tokens, with innovations including an auxiliary-loss-free load-balancing strategy and a multi-token prediction objective.
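The key property of this MoE design is that only a small subset of expert parameters runs for each token. The sketch below shows plain top-k expert routing for illustration only; it is not DeepSeekMoE's actual implementation, which additionally uses shared experts, fine-grained expert segmentation, and an auxiliary-loss-free load-balancing bias on the router scores.

    # Illustrative top-k MoE routing: each token is processed by only k of
    # n_experts feed-forward networks, so activated parameters << total parameters.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TopKMoE(nn.Module):
        def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
            super().__init__()
            self.k = k
            self.router = nn.Linear(d_model, n_experts, bias=False)
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            ])

        def forward(self, x):  # x: (tokens, d_model)
            scores = F.softmax(self.router(x), dim=-1)              # routing probabilities
            weights, idx = scores.topk(self.k, dim=-1)              # top-k experts per token
            weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize gate weights
            out = torch.zeros_like(x)
            for slot in range(self.k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e                        # tokens routed to expert e
                    if mask.any():
                        out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
            return out  # only k experts ran per token

    moe = TopKMoE()
    tokens = torch.randn(10, 64)
    print(moe(tokens).shape)  # torch.Size([10, 64])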



Evaluation Benchmarks

Overall Rank: #21

Benchmark scores and ranks:
- : 0.73 (rank 6)
- : 0.85 (rank 8)
- : 0.77 (rank 14)
Agentic Coding (LiveBench Agentic): 0.40 (rank 18)
- : 0.70 (rank 31)
Graduate-Level QA (GPQA): 0.82 (rank 42)

Rankings

Overall Rank: #21
Coding Rank: #35

GPU Requirements

Interactive VRAM calculator: select a quantization method for the model weights and a context size (1K–125K tokens) to see the required VRAM and recommended GPUs.
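As a rough cross-check on the calculator, weight-only memory for a 671B-parameter model can be estimated from the parameter count and the quantization bit-width. The sketch below ignores KV cache, activations, and framework overhead, so real VRAM requirements are higher.

    # Weight-only VRAM estimate for a 671B-parameter model at common
    # quantization widths (KV cache, activations, and runtime overhead excluded).
    TOTAL_PARAMS = 671e9

    for name, bits in [("FP16/BF16", 16), ("FP8/INT8", 8), ("INT4", 4)]:
        gib = TOTAL_PARAMS * bits / 8 / 1024**3
        print(f"{name:>10}: ~{gib:,.0f} GiB for weights alone")

    # Approximate output:
    #  FP16/BF16: ~1,250 GiB
    #   FP8/INT8: ~625 GiB
    #       INT4: ~312 GiB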