
DeepSeek-V3.2

Total Parameters: 671B
Context Length: 128K
Modality: Text
Architecture: Mixture of Experts (MoE)
License: MIT
Release Date: 10 Jan 2026
Training Data Cutoff: -

Technical Specifications

Active Parameters per Token: 37.0B
Number of Experts: -
Active Experts: -
Attention Structure: Multi-head Latent Attention (MLA)
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding

System Requirements

VRAM requirements vary with the quantization method and context size.

DeepSeek-V3.2

DeepSeek-V3.2 is a powerful open-source Mixture-of-Experts (MoE) language model with 671B total parameters and 37B activated per token. It combines Multi-head Latent Attention (MLA) with the DeepSeekMoE architecture for efficient inference and supports a 128K context window with strong multilingual capabilities. The model achieves strong results across benchmarks, including 90.2% on MMLU-Pro, 84.5% on GPQA Diamond, 91.6% on MATH-500, 78.1% on Codeforces, and 92.3% on HumanEval, and is competitive with leading closed-source models in coding and advanced mathematical reasoning. Trained on 14.8 trillion diverse, high-quality tokens and MIT licensed for both research and commercial use, it is well suited to complex reasoning, code generation, mathematical problem solving, and general-purpose language understanding.
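
A minimal usage sketch, assuming the model is served through DeepSeek's OpenAI-compatible API; the base URL, the `deepseek-chat` model alias, and the placeholder API key are assumptions to check against the provider's current documentation:

```python
# Minimal chat-completion call against an OpenAI-compatible endpoint.
# Assumptions: api.deepseek.com exposes the OpenAI-compatible API and
# "deepseek-chat" resolves to the current V3-series model.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",               # placeholder: supply your own key
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible base URL
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed alias for the V3-series chat model
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```

The same client works with a self-hosted OpenAI-compatible server (for example vLLM or SGLang) by pointing `base_url` at the local endpoint.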

About DeepSeek-V3

DeepSeek-V3 is a Mixture-of-Experts (MoE) language model comprising 671B total parameters, with 37B activated per token. Its architecture incorporates Multi-head Latent Attention (MLA) and DeepSeekMoE for efficient inference and training. Innovations include an auxiliary-loss-free load-balancing strategy and a multi-token prediction training objective, and the model was trained on 14.8T tokens.
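
To make the active-parameter figure concrete, the sketch below shows the generic top-k expert routing pattern used by MoE layers; the expert count, top-k value, and hidden size are toy numbers, not the model's real configuration:

```python
import torch
import torch.nn.functional as F

# Toy mixture-of-experts layer: each token is routed to only top_k of
# num_experts experts, which is why only ~37B of the 671B total
# parameters are active per token in the real model.
num_experts = 8   # toy value; DeepSeekMoE uses far more routed experts
top_k = 2         # toy value; experts activated per token
d_model = 16      # toy hidden size

tokens = torch.randn(4, d_model)                # a batch of 4 token vectors
router = torch.nn.Linear(d_model, num_experts)  # routing network (gate)
experts = torch.nn.ModuleList(
    [torch.nn.Linear(d_model, d_model) for _ in range(num_experts)]
)

scores = router(tokens)                    # affinity of each token to each expert
weights, idx = scores.topk(top_k, dim=-1)  # keep only the top-k experts per token
weights = F.softmax(weights, dim=-1)       # normalize the selected gate weights

out = torch.zeros_like(tokens)
for t in range(tokens.size(0)):            # dense loop for clarity, not speed
    for s in range(top_k):
        e = idx[t, s].item()
        out[t] += weights[t, s] * experts[e](tokens[t])

print(out.shape)  # torch.Size([4, 16]); each token touched only top_k experts
```

DeepSeek-V3's auxiliary-loss-free load balancing replaces the usual auxiliary balancing loss with a per-expert bias added to the routing scores during expert selection, but the top-k routing pattern above is the same.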



Evaluation Benchmarks

Overall Rank: #38
Coding Rank: #12

Benchmark scores and ranks:

Category             Benchmark            Score   Rank
-                    -                    0.74    7
Agentic Coding       LiveBench Agentic    0.47    14
-                    -                    0.76    15
-                    -                    0.67    34
-                    -                    0.46    37
-                    -                    0.64    40
Graduate-Level QA    GPQA                 0.80    50

GPU Requirements

Required VRAM depends on the quantization method chosen for the model weights and on the context size, from 1K up to 125K tokens.
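
A back-of-envelope sketch of the weight-memory side of that calculation; it covers model weights only and ignores KV cache, activations, and runtime overhead (and MLA compresses the KV cache relative to standard attention), so treat the results as rough lower bounds:

```python
# Rough VRAM needed just to hold the 671B weights at common quantization widths.
TOTAL_PARAMS = 671e9  # total parameters across all experts (from the spec above)

for name, bits in [("FP16/BF16", 16), ("FP8", 8), ("INT4", 4)]:
    weight_gb = TOTAL_PARAMS * bits / 8 / 1e9  # parameters -> bytes -> GB (decimal)
    print(f"{name:>9}: ~{weight_gb:,.0f} GB for weights alone")
```

Because every expert's weights must be resident even though only a few are active per token, the total parameter count (not the active parameter count) drives the weight-memory requirement.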