
DeepSeek-V3 671B

Total Parameters: 671B
Context Length: 131,072 tokens (128K)
Modality: Text
Architecture: Mixture of Experts (MoE)
License: DeepSeek Model License
Release Date: 27 Dec 2024
Knowledge Cutoff: -

Technical Specifications

Activated Parameters per Token: 37.0B
Number of Experts: 257 (256 routed + 1 shared)
Active Experts per Token: 9 (8 routed + 1 shared)
Attention Structure: Multi-head Latent Attention (MLA)
Hidden Dimension Size: 7168
Number of Layers: 61
Attention Heads: 128
Key-Value Heads: 128
Activation Function: -
Normalization: RMS Normalization (RMSNorm)
Position Embedding: RoPE
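For convenience, the specifications above can be collected into a single config object. The sketch below is illustrative only; the field names are hypothetical and do not mirror the model's actual configuration files.

```python
# Illustrative sketch: the spec sheet above as a Python dataclass.
# Field names are hypothetical, not taken from any shipped config format.
from dataclasses import dataclass

@dataclass
class DeepSeekV3Spec:
    total_params: str = "671B"
    active_params_per_token: str = "37B"
    context_length: int = 131_072
    num_layers: int = 61
    hidden_size: int = 7168
    attention_heads: int = 128
    key_value_heads: int = 128
    routed_experts: int = 256
    shared_experts: int = 1
    active_routed_experts_per_token: int = 8
    attention: str = "Multi-head Latent Attention (MLA)"
    normalization: str = "RMSNorm"
    position_embedding: str = "RoPE"

print(DeepSeekV3Spec())
```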


DeepSeek-V3 671B

DeepSeek-V3 is a large-scale Mixture-of-Experts (MoE) language model, comprising a total of 671 billion parameters with 37 billion parameters activated per token during inference. This design prioritizes efficient inference and cost-effective training. The model was pre-trained on an extensive dataset of 14.8 trillion diverse and high-quality tokens. Subsequent training phases involved Supervised Fine-Tuning and Reinforcement Learning to further enhance its capabilities. DeepSeek-V3 represents an evolution in large language model design, building on previous architectural foundations while introducing novel advancements for efficiency.

The architectural core of DeepSeek-V3 integrates several innovations. It utilizes Multi-head Latent Attention (MLA), a mechanism designed to optimize attention operations by compressing key-value pairs into a low-dimensional latent space, thereby reducing memory consumption during inference. The Mixture-of-Experts component, termed DeepSeekMoE, employs 256 routed experts and 1 shared expert, with each token dynamically interacting with 8 specialized experts plus the single shared expert. A notable innovation in this MoE architecture is an auxiliary-loss-free strategy for load balancing, which aims to distribute computational load across experts without the performance degradation typically associated with auxiliary loss functions. Additionally, DeepSeek-V3 incorporates a Multi-Token Prediction (MTP) training objective, which densifies training signals and is observed to enhance overall model performance by training the model to predict multiple future tokens simultaneously. Training further leverages FP8 mixed precision, demonstrating its feasibility and effectiveness at an extremely large scale. The model employs Rotary Positional Embedding (RoPE) for handling positional information and RMSNorm for normalization within its layers.
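The routing scheme described above can be made concrete with a small sketch. The toy PyTorch module below, with deliberately shrunk dimensions (the real model uses a 7168-wide hidden state, 256 routed experts, and top-8 routing), applies a shared expert to every token plus a weighted sum over the top-k routed experts, with a per-expert selection bias in the spirit of the auxiliary-loss-free load balancing. It is a sketch under these assumptions, not the released implementation.

```python
# Toy-scale sketch of DeepSeekMoE-style routing (not the released code).
# DeepSeek-V3 uses hidden size 7168, 256 routed experts + 1 shared expert,
# and 8 routed experts per token; sizes here are shrunk so the sketch runs.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN, INNER = 64, 128     # real model: 7168 hidden (INNER is a placeholder)
N_ROUTED, TOP_K = 16, 4     # real model: 256 routed experts, top-8 per token

class Expert(nn.Module):
    def __init__(self):
        super().__init__()
        self.up = nn.Linear(HIDDEN, INNER, bias=False)
        self.down = nn.Linear(INNER, HIDDEN, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.up(x)))

class MoELayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.routed = nn.ModuleList(Expert() for _ in range(N_ROUTED))
        self.shared = Expert()                       # applied to every token
        self.gate = nn.Linear(HIDDEN, N_ROUTED, bias=False)
        # Per-expert bias used only when picking the top-k; steering it toward
        # under-used experts is the gist of auxiliary-loss-free load balancing.
        self.register_buffer("balance_bias", torch.zeros(N_ROUTED))

    def forward(self, x):                            # x: [tokens, HIDDEN]
        scores = torch.sigmoid(self.gate(x))         # token-to-expert affinity
        topk = torch.topk(scores + self.balance_bias, TOP_K, dim=-1).indices
        routed_out = torch.zeros_like(x)
        for t in range(x.size(0)):                   # naive loop for clarity
            sel = topk[t]
            w = scores[t, sel] / scores[t, sel].sum()  # normalized gate weights
            for e, g in zip(sel.tolist(), w):
                routed_out[t] = routed_out[t] + g * self.routed[e](x[t])
        return self.shared(x) + routed_out           # shared + selected experts

tokens = torch.randn(3, HIDDEN)
print(MoELayer()(tokens).shape)                      # torch.Size([3, 64])
```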

DeepSeek-V3 is engineered to support a broad spectrum of general language tasks, exhibiting capabilities in areas such as mathematical problem-solving, advanced code development, and complex reasoning. Its design allows for the processing of extended contexts, supporting a context length of up to 128K tokens. This enables the model to handle long documents and complex multi-turn conversations effectively. The model's efficiency in both training and inference makes it suitable for applications requiring substantial computational capacity while maintaining resource optimization.

About DeepSeek-V3

DeepSeek-V3 is a Mixture-of-Experts (MoE) language model comprising 671B parameters with 37B activated per token. Its architecture incorporates Multi-head Latent Attention and DeepSeekMoE for efficient inference and training. Innovations include an auxiliary-loss-free load balancing strategy and a multi-token prediction objective, trained on 14.8T tokens.



Evaluation Benchmarks

Rankings are relative to other local LLMs.

Rank: #4

Benchmark (Category)               | Score   | Rank
-                                  | 0.98    | πŸ₯‡ 1
-                                  | 0.73    | πŸ₯‰ 3
-                                  | 0.81    | πŸ₯‰ 3
MMLU Pro (Professional Knowledge)  | 0.81    | πŸ₯‰ 3
-                                  | 0.69    | ⭐ 4
-                                  | 0.95    | 4
WebDev Arena (Web Development)     | 1206.69 | 4
LiveBench Agentic (Agentic Coding) | 0.15    | 5
-                                  | 0.44    | 6
GPQA (Graduate-Level QA)           | 0.68    | 6
-                                  | 0.71    | 10
-                                  | 0.64    | 10
MMLU (General Knowledge)           | 0.68    | 12
-                                  | 0.44    | 15

Rankings

Overall Rank: #4
Coding Rank: #4

GPU Requirements

Interactive calculator: VRAM required for different weight quantization methods at context sizes from 1K to 128K tokens, with recommended GPUs.
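As a rough guide, weight memory alone can be approximated as the parameter count times the bytes per parameter of the chosen quantization; the KV cache and runtime overhead, which grow with context length and batch size, come on top. A minimal back-of-envelope sketch:

```python
# Back-of-envelope weight-memory estimate for DeepSeek-V3 (671B parameters).
# This covers weights only; KV cache, activations, and framework overhead
# are extra and depend on context length and batch size.
TOTAL_PARAMS = 671e9

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "int8":      1.0,
    "4-bit":     0.5,
}

for name, b in BYTES_PER_PARAM.items():
    gib = TOTAL_PARAMS * b / 1024**3
    print(f"{name:>9}: ~{gib:,.0f} GiB for weights alone")
# fp16/bf16: ~1,250 GiB, int8: ~625 GiB, 4-bit: ~312 GiB (approximate)
```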