
Qwen2-0.5B

Parameters

0.5B

Context Length

32,768 tokens

Modality

Text

Architecture

Dense

License

Apache 2.0

Release Date

7 Jun 2024

Knowledge Cutoff

-

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

896

Number of Layers

24

Attention Heads

14

Key-Value Heads

2

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

Rotary Position Embedding (RoPE)
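
The architecture values above map directly onto a standard Transformer decoder configuration. The sketch below is illustrative only, assuming the Hugging Face transformers Qwen2Config class; values not listed in the table (vocabulary size, feed-forward width, RoPE base, RMSNorm epsilon, embedding tying) are assumptions rather than figures taken from this page.

```python
# Illustrative sketch: expressing the specification table above as a
# Hugging Face transformers Qwen2Config. Values marked "assumed" are not
# listed in the table and may differ from the released checkpoint.
from transformers import Qwen2Config, Qwen2ForCausalLM

config = Qwen2Config(
    vocab_size=151_936,              # assumed Qwen2 tokenizer vocabulary size
    hidden_size=896,                 # hidden dimension size
    num_hidden_layers=24,            # number of layers
    num_attention_heads=14,          # query heads
    num_key_value_heads=2,           # KV heads (grouped-query attention)
    intermediate_size=4_864,         # assumed SwiGLU feed-forward width
    hidden_act="silu",               # SwiGLU uses a SiLU-gated feed-forward
    max_position_embeddings=32_768,  # context length from the table
    rms_norm_eps=1e-6,               # assumed RMSNorm epsilon
    rope_theta=1_000_000.0,          # assumed RoPE base frequency
    tie_word_embeddings=True,        # assumed for the 0.5B variant
)

# Randomly initialized model with the same shapes as Qwen2-0.5B.
model = Qwen2ForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")
```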

System Requirements

VRAM requirements for different quantization methods and context sizes

Qwen2-0.5B

Qwen2-0.5B is a compact yet capable member of the Qwen2 series of large language models developed by the Qwen team at Alibaba. Its small footprint makes it suitable for deployment in environments with constrained computational resources. As a base (non-instruct) language model, it provides foundational language capabilities and is intended primarily as a starting point for further specialization through post-training, such as supervised fine-tuning or reinforcement learning from human feedback, while still handling general natural language processing tasks efficiently.
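
As a concrete illustration of that workflow, the snippet below loads the base checkpoint from the Hugging Face Hub and runs a plain text completion before any fine-tuning; the bfloat16 dtype and device placement are assumptions about a typical setup, not requirements.

```python
# Minimal sketch: loading the base model as a starting point for further
# post-training or plain next-token completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 0.5B weights near 1 GB
    device_map="auto",
)

# As a base (non-instruct) model it performs plain continuation,
# so prompt it with text to complete rather than a chat template.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```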

About Qwen2

The Qwen2 model family from Alibaba comprises Transformer-based large language models in both dense and Mixture-of-Experts (MoE) variants, designed for a broad range of language tasks. Technical features include Grouped-Query Attention, which reduces the key-value cache memory footprint during inference, and support for context lengths of up to 131,072 tokens.
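
To make the memory effect of Grouped-Query Attention concrete, the back-of-the-envelope sketch below compares the key-value cache size of Qwen2-0.5B's grouped layout against a hypothetical full multi-head layout at a 32K context; the 64-dimensional heads and fp16 cache dtype are assumptions.

```python
# Back-of-the-envelope sketch of why grouped-query attention shrinks the
# KV cache. Layer and head counts follow the specification table above;
# head dimension (64) and fp16 cache values are assumptions.
def kv_cache_bytes(tokens, layers=24, kv_heads=2, head_dim=64, bytes_per_value=2):
    # Factor of 2 accounts for storing both keys and values at every layer.
    return 2 * layers * kv_heads * head_dim * bytes_per_value * tokens

ctx = 32_768
gqa = kv_cache_bytes(ctx, kv_heads=2)   # grouped-query attention (2 KV heads)
mha = kv_cache_bytes(ctx, kv_heads=14)  # hypothetical one-KV-head-per-query-head layout
print(f"GQA KV cache at 32K tokens: {gqa / 2**20:.0f} MiB")  # ~384 MiB
print(f"MHA KV cache at 32K tokens: {mha / 2**20:.0f} MiB")  # ~2688 MiB
```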



Evaluation Benchmarks

Rankings are relative to other local LLMs.

No evaluation benchmark results are available for Qwen2-0.5B.

Rankings

Overall Rank

-

Coding Rank

-

GPU Requirements

[Interactive VRAM calculator: choose a weight quantization method and a context size (1k, 16k, or 32k tokens) to see the VRAM required and recommended GPUs.]
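
In place of the interactive calculator, a rough way to reason about the numbers: weight memory is approximately the parameter count times the bits per weight, and the KV cache grows linearly with context length. The sketch below encodes that estimate; it ignores activation memory and framework overhead, and the head dimension and cache dtype are assumptions, so treat the results as a lower bound rather than the calculator's exact figures.

```python
# Rough VRAM estimator (sketch only): quantized weight memory plus KV cache.
# Ignores activations and framework overhead, so real usage will be higher.
def estimate_vram_gib(params_b=0.5, bits_per_weight=16, context_tokens=1_024,
                      layers=24, kv_heads=2, head_dim=64, cache_bytes=2):
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    kv_bytes = 2 * layers * kv_heads * head_dim * cache_bytes * context_tokens
    return (weight_bytes + kv_bytes) / 2**30

for bits in (16, 8, 4):                  # FP16, INT8, INT4 weight formats
    for ctx in (1_024, 16_384, 32_768):  # the context sizes from the calculator
        vram = estimate_vram_gib(bits_per_weight=bits, context_tokens=ctx)
        print(f"{bits:>2}-bit weights, {ctx:>6} tokens: ~{vram:.2f} GiB")
```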
