
DeepSeek-R1 14B

Parameters

14B

Context Length

131,072 tokens (128K)

Modality

Text

Architecture

Dense

License

MIT License

Release Date

27 Dec 2024

Knowledge Cutoff

Jul 2024

Technical Specifications

Attention Structure

Multi-Head Attention

Hidden Dimension Size

5120

Number of Layers

40

Attention Heads

80

Key-Value Heads

80

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

RoPE
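For reference, the specification table above can be summarized as a plain Python dictionary. This is an illustrative sketch only; the key names are assumptions and do not match the official configuration fields shipped with the model.

# Illustrative summary of the specification table above.
# Key names are assumptions and do not match the model's official config.json.
deepseek_r1_14b_spec = {
    "parameters": 14_000_000_000,   # 14B dense parameters
    "context_length": 131_072,      # maximum context in tokens
    "hidden_size": 5120,
    "num_layers": 40,
    "num_attention_heads": 80,
    "num_key_value_heads": 80,      # equal to the attention head count
    "activation": "SwiGLU",
    "normalization": "RMSNorm",
    "position_embedding": "RoPE",
    "license": "MIT",
}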

System Requirements

VRAM requirements for different quantization methods and context sizes

DeepSeek-R1 14B

DeepSeek-R1-Distill-Qwen-14B is a dense large language model in the DeepSeek-R1 series, engineered for advanced reasoning. It is distilled from the 671B-parameter DeepSeek-R1, a Mixture-of-Experts model, onto the Qwen 2.5 14B architecture. The goal of the distillation is to transfer the larger model's reasoning skills, particularly in mathematics and coding, into a more compact and computationally efficient dense model.

The architecture of DeepSeek-R1-Distill-Qwen-14B is a standard transformer. It uses Rotary Position Embeddings (RoPE) for positional encoding, SwiGLU as its activation function, and RMSNorm for normalization. The attention mechanism includes QKV bias, characteristic of the Qwen 2.5 series from which it is derived. Unlike its larger DeepSeek-R1 progenitor, this variant is dense rather than sparse: every parameter participates in every forward pass instead of tokens being routed to a subset of experts.
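To make two of these components concrete, the following PyTorch sketch implements RMSNorm and a SwiGLU feed-forward block under stated assumptions: the hidden size of 5120 comes from the specifications above, while the feed-forward width of 13,824 and all class and variable names are illustrative rather than taken from the model's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    # Root-mean-square normalization: rescale by 1/RMS(x); no mean subtraction, no bias.
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * x * rms

class SwiGLUFeedForward(nn.Module):
    # SwiGLU feed-forward block: down_proj(silu(gate_proj(x)) * up_proj(x)).
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.up_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.down_proj = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

# Hidden size 5120 from the table above; 13824 is an assumed feed-forward width.
block = nn.Sequential(RMSNorm(5120), SwiGLUFeedForward(5120, 13824))
print(block(torch.randn(1, 8, 5120)).shape)  # torch.Size([1, 8, 5120])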

The model supports a context length of up to 131,072 tokens, allowing it to process long inputs. It applies to a wide range of natural language processing tasks, including text generation, data analysis, and code synthesis. Its DeepSeek-R1 heritage makes it well suited to complex reasoning tasks such as mathematical problem solving and programming. It supports both few-shot and zero-shot prompting and is optimized for local deployment, so it can be integrated into applications directly or served behind an API.
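A minimal local-inference sketch with Hugging Face Transformers is shown below. The Hub model ID is the one DeepSeek publishes for this distillation; the precision, device placement, prompt, and sampling settings are illustrative assumptions, not recommended defaults.

# Minimal local-inference sketch (assumes transformers, torch, and accelerate are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # place layers across available GPUs
)

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))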

About DeepSeek-R1

DeepSeek-R1 is a model family developed for logical reasoning tasks. The flagship model uses a Mixture-of-Experts architecture with Multi-Head Latent Attention for computational efficiency and scalability, and is trained with reinforcement learning, with some variants incorporating cold-start data; the distilled variants, such as this 14B model, are dense.



Evaluation Benchmarks

Rankings are relative to other local LLMs. No evaluation benchmarks are currently available for DeepSeek-R1 14B, so it has no overall or coding rank.

GPU Requirements

The full calculator lets you choose a quantization method for the model weights and a context size from 1k to 128k tokens (1,024 tokens by default), then reports the required VRAM and recommended GPUs.
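As a back-of-the-envelope guide, the weight memory alone scales with parameter count times bytes per parameter; the sketch below illustrates this for a 14B-parameter model. Actual VRAM usage is higher because the KV cache grows with context length and frameworks add activation and allocator overhead, so treat these numbers as lower bounds rather than the values an exact calculator would report.

# Rough lower-bound estimate of VRAM needed for the model weights only.
# KV cache (which grows with context size) and runtime overhead are not included.
def weight_vram_gib(num_params: float, bits_per_param: float) -> float:
    return num_params * bits_per_param / 8 / 1024**3

PARAMS_14B = 14e9
for name, bits in [("FP16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{name}: ~{weight_vram_gib(PARAMS_14B, bits):.1f} GiB of weights")
# FP16: ~26.1 GiB, 8-bit: ~13.0 GiB, 4-bit: ~6.5 GiB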
