DeepSeek-R1 1.5B

Parameters

1.5B

Context Length

131,072 tokens (128K)

Modality

Text

Architecture

Dense

License

MIT

Release Date

20 Jan 2025

Knowledge Cutoff

-

Technical Specifications

Attention Structure

Grouped Query Attention

Hidden Dimension Size

1536

Number of Layers

28

Attention Heads

12

Key-Value Heads

2

Activation Function

SwiGLU

Normalization

RMSNorm

Position Embedding

RoPE


DeepSeek-R1 1.5B

DeepSeek-R1 is a family of reasoning-focused large language models developed by DeepSeek AI. The DeepSeek-R1-Distill-Qwen-1.5B variant is a compact model within this family, engineered to distill the reasoning capabilities of the larger DeepSeek-R1 models into a more parameter-efficient architecture. It is fine-tuned on extensive reasoning data generated by the higher-capacity DeepSeek-R1 models, with the goal of providing advanced language understanding and reasoning in a form factor suited to deployment under constrained computational resources.

The DeepSeek-R1-Distill-Qwen-1.5B model is built on a Transformer architecture, deriving its foundational structure from the Qwen2.5-Math-1.5B base. This architecture integrates several components for efficient operation: Rotary Position Embedding (RoPE) for encoding token positions, the SwiGLU activation function, and RMSNorm for stable training. While the flagship DeepSeek-R1 models employ a Mixture-of-Experts (MoE) design, the 1.5B distilled variant uses a dense architecture. Its attention mechanism is Grouped Query Attention (GQA), which shares each key and value projection across a group of query heads, reducing KV-cache size and memory bandwidth requirements during inference.
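To make the GQA pattern concrete, below is a minimal PyTorch sketch of grouped-query attention. This is an illustrative example rather than the model's actual implementation; the head counts mirror the configuration listed above, and the causal mask is omitted for brevity.

```python
import torch

# Illustrative GQA sketch (not the model's actual implementation).
# Head counts mirror the configuration above: 12 query heads share 2 KV heads.
n_q_heads, n_kv_heads, head_dim = 12, 2, 128
group_size = n_q_heads // n_kv_heads  # 6 query heads per KV head

def gqa(q, k, v):
    # q: (batch, seq, n_q_heads, head_dim); k, v: (batch, seq, n_kv_heads, head_dim)
    # Expand each KV head so it is shared by its group of query heads.
    k = k.repeat_interleave(group_size, dim=2)
    v = v.repeat_interleave(group_size, dim=2)
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))    # (batch, heads, seq, dim)
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5  # causal mask omitted
    attn = torch.softmax(scores, dim=-1)
    return (attn @ v).transpose(1, 2)                   # (batch, seq, heads, dim)

q = torch.randn(1, 16, n_q_heads, head_dim)
k = torch.randn(1, 16, n_kv_heads, head_dim)
v = torch.randn(1, 16, n_kv_heads, head_dim)
print(gqa(q, k, v).shape)  # torch.Size([1, 16, 12, 128])
```

Because only 2 of the 12 head slots need K and V tensors stored, the KV cache shrinks by a factor of 6 relative to standard multi-head attention.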

The model is designed for tasks demanding logical inference and step-by-step problem-solving, particularly mathematical problem-solving, code comprehension, and general text-based reasoning. Its compact parameter count makes it suitable for deployment on consumer-grade hardware or edge devices, enabling local execution without extensive computational infrastructure and broadening access for researchers and developers who want to integrate reasoning capabilities into resource-sensitive applications.
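For local experimentation, the published checkpoint can be loaded through the Hugging Face transformers library. The snippet below is a minimal sketch, assuming transformers and accelerate are installed; the prompt and generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place weights on GPU if available (needs accelerate)
)

messages = [{"role": "user", "content": "What is 17 * 24? Reason step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so allow generous headroom.
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```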

About DeepSeek-R1

DeepSeek-R1 is a model family developed for logical reasoning tasks. The flagship models incorporate a Mixture-of-Experts architecture for computational efficiency and scalability, together with Multi-head Latent Attention (MLA). Training relies heavily on reinforcement learning, with some variants integrating cold-start data before the RL stage.



Evaluation Benchmarks

Rankings are computed across local LLMs.

No evaluation benchmarks are currently available for DeepSeek-R1 1.5B.

Rankings

Overall Rank

-

Coding Rank

-

GPU Requirements

VRAM requirements depend on the weight quantization method chosen for the model and on the context size, which can range from 1K tokens up to the full 128K (131,072-token) window.
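As a rough guide, a back-of-the-envelope estimate can be derived from the parameter count and the GQA KV-cache geometry listed above. The sketch below counts only weight memory and KV cache, ignoring activations and framework overhead, so actual usage will be somewhat higher.

```python
# Back-of-the-envelope VRAM estimate for DeepSeek-R1 1.5B.
# Assumptions: weights + KV cache only; activations and runtime overhead ignored.
N_PARAMS = 1.5e9   # total parameters
N_LAYERS = 28      # transformer layers
N_KV_HEADS = 2     # key-value heads (GQA)
HEAD_DIM = 128     # per-head dimension

def vram_gib(bytes_per_weight: float, context: int, kv_bytes: int = 2) -> float:
    weights = N_PARAMS * bytes_per_weight
    # KV cache: one K and one V tensor per layer, per token, 16-bit by default.
    kv_cache = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context * kv_bytes
    return (weights + kv_cache) / 1024 ** 3

for label, bpw in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    for ctx in (1024, 65536, 131072):
        print(f"{label} @ {ctx:>6} tokens: ~{vram_gib(bpw, ctx):.1f} GiB")
```

Under these assumptions the FP16 weights alone occupy roughly 2.8 GiB, and a full 131,072-token KV cache adds about 3.5 GiB on top of any weight quantization level.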