
DeepSeek-V3 671B

Total Parameters: 671B
Context Length: 131,072 tokens (128K)
Modality: Text
Architecture: Mixture of Experts (MoE)
License: DeepSeek Model License
Release Date: 27 Dec 2024
Knowledge Cutoff: -

Technical Specifications

Active Parameters per Token: 37.0B
Number of Experts: 257 (256 routed + 1 shared)
Active Experts per Token: 9 (8 routed + 1 shared)
Attention Structure: Multi-head Latent Attention (MLA)
Hidden Dimension Size: 7168
Number of Layers: 61
Attention Heads: 128
Key-Value Heads: 128
Activation Function: -
Normalization: RMS Normalization
Positional Embedding: RoPE
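
For reference, the hyperparameters above can be collected into a single configuration object. The sketch below is illustrative only: the field names are assumptions, not DeepSeek's actual configuration keys, while the values are taken directly from the table.

```python
# Illustrative configuration object built from the specification table above.
# Field names are assumptions, not DeepSeek's actual configuration keys.
from dataclasses import dataclass


@dataclass
class DeepSeekV3Config:
    total_params: int = 671_000_000_000      # 671B parameters in total
    active_params: int = 37_000_000_000      # 37B parameters activated per token
    context_length: int = 131_072            # 128K tokens
    hidden_size: int = 7168
    num_layers: int = 61
    num_attention_heads: int = 128
    num_key_value_heads: int = 128
    num_routed_experts: int = 256
    num_shared_experts: int = 1
    top_k_routed_experts: int = 8            # routed experts selected per token
    normalization: str = "RMSNorm"
    positional_embedding: str = "RoPE"


cfg = DeepSeekV3Config()
print(f"Experts consulted per token: {cfg.top_k_routed_experts + cfg.num_shared_experts}")
```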

System Requirements

VRAM requirements for different quantization methods and context sizes.

DeepSeek-V3 671B

DeepSeek-V3 is a large-scale Mixture-of-Experts (MoE) language model, comprising a total of 671 billion parameters with 37 billion parameters activated per token during inference. This design prioritizes efficient inference and cost-effective training. The model was pre-trained on an extensive dataset of 14.8 trillion diverse and high-quality tokens. Subsequent training phases involved Supervised Fine-Tuning and Reinforcement Learning to further enhance its capabilities. DeepSeek-V3 represents an evolution in large language model design, building on previous architectural foundations while introducing novel advancements for efficiency.
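
To make the 671B-total / 37B-active split concrete, the back-of-the-envelope sketch below treats the model as two pools: parameters that always run (attention, embeddings, dense and shared-expert layers) and the routed-expert pool, of which each token touches only 8 of 256 experts. The two-pool decomposition is a simplification for illustration; only the 671B and 37B figures come from the model card.

```python
# Back-of-the-envelope arithmetic for the active/total parameter split in an MoE.
# Only the 671B total and 37B active figures come from the model card; the
# two-pool decomposition below is a simplification for illustration.
TOTAL_PARAMS = 671e9
ACTIVE_PARAMS = 37e9
NUM_ROUTED_EXPERTS = 256
TOP_K = 8

# Fraction of the routed-expert pool a single token actually uses.
routed_fraction = TOP_K / NUM_ROUTED_EXPERTS          # 8/256 = 1/32

# Model:  TOTAL  = always_active + routed_pool
#         ACTIVE = always_active + routed_fraction * routed_pool
# Solving the two equations for the two unknowns:
routed_pool = (TOTAL_PARAMS - ACTIVE_PARAMS) / (1 - routed_fraction)
always_active = TOTAL_PARAMS - routed_pool

print(f"Routed-expert pool:    ~{routed_pool / 1e9:.0f}B parameters")
print(f"Always-active portion: ~{always_active / 1e9:.0f}B parameters")
print(f"Used per token:        ~{(always_active + routed_fraction * routed_pool) / 1e9:.0f}B parameters")
```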

The architectural core of DeepSeek-V3 integrates several innovations. It utilizes Multi-head Latent Attention (MLA), a mechanism designed to optimize attention operations by compressing key-value pairs into a low-dimensional latent space, thereby reducing memory consumption during inference. The Mixture-of-Experts component, termed DeepSeekMoE, employs 256 routed experts and 1 shared expert, with each token dynamically interacting with 8 specialized experts plus the single shared expert. A notable innovation in this MoE architecture is an auxiliary-loss-free strategy for load balancing, which aims to distribute computational load across experts without the performance degradation typically associated with auxiliary loss functions. Additionally, DeepSeek-V3 incorporates a Multi-Token Prediction (MTP) training objective, which densifies training signals and is observed to enhance overall model performance by training the model to predict multiple future tokens simultaneously. Training further leverages FP8 mixed precision, demonstrating its feasibility and effectiveness at an extremely large scale. The model employs Rotary Positional Embedding (RoPE) for handling positional information and RMSNorm for normalization within its layers.
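
The routing behaviour described above can be sketched in a few lines. The code below is a simplified illustration, not DeepSeek's implementation: it scores a token against 256 routed experts with a sigmoid affinity, adds a per-expert bias only when choosing the top 8 (the auxiliary-loss-free balancing idea), computes gate weights from the unbiased scores, and nudges the bias after each batch based on observed expert load. The bias step size and update rule are assumptions.

```python
# Simplified sketch of DeepSeekMoE-style routing with auxiliary-loss-free load
# balancing (not DeepSeek's implementation). A per-expert bias, adjusted from
# observed load rather than from an auxiliary loss, influences *which* experts
# are selected, while the gate weights are computed from the unbiased scores.
import numpy as np

NUM_ROUTED, TOP_K, HIDDEN = 256, 8, 7168
BIAS_STEP = 1e-3                               # assumed bias update speed

rng = np.random.default_rng(0)
expert_centroids = rng.standard_normal((NUM_ROUTED, HIDDEN)) / np.sqrt(HIDDEN)
load_bias = np.zeros(NUM_ROUTED)               # updated outside the gradient path


def route(token: np.ndarray):
    """Return (indices, gate weights) of the TOP_K routed experts for one token."""
    affinity = 1.0 / (1.0 + np.exp(-(expert_centroids @ token)))  # sigmoid affinity scores
    selected = np.argsort(affinity + load_bias)[-TOP_K:]          # bias affects selection only
    gates = affinity[selected] / affinity[selected].sum()         # weights use raw scores
    return selected, gates


def update_bias(expert_counts: np.ndarray) -> None:
    """Nudge the bias down for overloaded experts and up for underused ones."""
    load_bias[:] += BIAS_STEP * np.sign(expert_counts.mean() - expert_counts)


# Toy batch: route 1024 random tokens, then rebalance for the next batch.
counts = np.zeros(NUM_ROUTED)
for _ in range(1024):
    experts, weights = route(rng.standard_normal(HIDDEN))
    counts[experts] += 1
update_bias(counts)

# A full layer would combine the shared expert's output with the gate-weighted
# outputs of the selected routed experts (the expert FFNs are omitted here).
```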

DeepSeek-V3 is engineered to support a broad spectrum of general language tasks, exhibiting capabilities in areas such as mathematical problem-solving, advanced code development, and complex reasoning. Its design allows for the processing of extended contexts, supporting a context length of up to 128K tokens. This enables the model to handle long documents and complex multi-turn conversations effectively. The model's efficiency in both training and inference makes it suitable for applications requiring substantial computational capacity while maintaining resource optimization.

About DeepSeek-V3

DeepSeek-V3 is a Mixture-of-Experts (MoE) language model comprising 671B parameters with 37B activated per token. Its architecture incorporates Multi-head Latent Attention and DeepSeekMoE for efficient inference and training. Innovations include an auxiliary-loss-free load balancing strategy and a multi-token prediction objective, trained on 14.8T tokens.



Evaluation Benchmarks

Rankings apply to local LLMs.

Overall Rank: #4

| Category | Benchmark | Score | Rank |
| --- | --- | --- | --- |
| — | — | 0.98 | 🥇 1 |
| — | — | 0.73 | 🥉 3 |
| — | — | 0.81 | 🥉 3 |
| Professional Knowledge | MMLU Pro | 0.81 | 🥉 3 |
| — | — | 0.69 | 4 |
| — | — | 0.95 | 4 |
| Web Development | WebDev Arena | 1206.69 | 4 |
| Agentic Coding | LiveBench Agentic | 0.15 | 5 |
| — | — | 0.44 | 6 |
| Graduate-Level QA | GPQA | 0.68 | 6 |
| — | — | 0.71 | 10 |
| — | — | 0.64 | 10 |
| General Knowledge | MMLU | 0.68 | 12 |
| — | — | 0.44 | 15 |

Rankings

Overall Rank: #4
Coding Rank: #4

GPU Requirements

Use the full calculator to select a quantization method for the model weights and a context size (from 1K to 128K tokens) and see the required VRAM along with a recommended GPU.
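
As a rough guide to what such a calculator computes, the sketch below estimates serving VRAM as quantized weight storage plus a KV cache that grows with context length, plus a small overhead factor. This is a common rule of thumb, not the calculator used on this page; the compressed per-token KV width assumed for MLA (576 values per layer) and the 10% overhead are assumptions.

```python
# Rough VRAM estimate for serving: quantized weights + KV cache + overhead.
# This is a common rule of thumb, not the calculator used on this page.
BITS_PER_PARAM = {"FP16": 16, "FP8": 8, "INT4": 4}

TOTAL_PARAMS = 671e9
NUM_LAYERS = 61
KV_WIDTH_PER_TOKEN = 576      # assumed compressed (MLA) KV entries per token per layer
KV_BYTES_PER_ENTRY = 2        # 16-bit cache entries
OVERHEAD = 1.10               # assumed 10% for activations and runtime buffers


def estimate_vram_gb(quant: str, context_tokens: int) -> float:
    weight_bytes = TOTAL_PARAMS * BITS_PER_PARAM[quant] / 8
    kv_cache_bytes = context_tokens * NUM_LAYERS * KV_WIDTH_PER_TOKEN * KV_BYTES_PER_ENTRY
    return (weight_bytes + kv_cache_bytes) * OVERHEAD / 1e9


for quant in BITS_PER_PARAM:
    for ctx in (1_024, 65_536, 131_072):
        print(f"{quant:>4} weights @ {ctx:>7,} tokens: ~{estimate_vram_gb(quant, ctx):,.0f} GB")
```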