Parameters: 0.5B
Context Length: 32,768 tokens
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 7 Jun 2024
Knowledge Cutoff: -
Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 896
Number of Layers: 24
Attention Heads: 16
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMS Normalization (RMSNorm)
Position Embedding: RoPE (Rotary Position Embedding)
The Qwen2-0.5B model is a compact yet capable entry in the Qwen2 series of large language models, developed by the Qwen team at Alibaba. It delivers foundational language processing capability and is suited to deployment in environments with constrained computational resources, handling a range of natural language processing tasks efficiently. As a base (non-instruction-tuned) language model, its primary purpose is to serve as a robust starting point for further specialization through post-training, such as supervised fine-tuning or reinforcement learning from human feedback.
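As an illustration of typical usage, here is a minimal sketch, assuming the Hugging Face transformers library and the public Qwen/Qwen2-0.5B checkpoint, of loading the base model for plain text completion or as the starting point for fine-tuning:

```python
# Minimal sketch: load Qwen2-0.5B with Hugging Face transformers.
# Assumes the public "Qwen/Qwen2-0.5B" checkpoint; adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly 1 GB of weights at bf16
    device_map="auto",
)

# The base model is not instruction-tuned, so prompt it as a plain completion model.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```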
The Alibaba Qwen2 model family comprises large language models built on the Transformer architecture. It includes both dense and Mixture-of-Experts (MoE) variants designed for diverse language tasks. Shared technical features include Grouped-Query Attention, which reduces the KV-cache memory footprint during inference, and support for extended context lengths of up to 131,072 tokens.
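To make the Grouped-Query Attention point concrete, the following is an illustrative PyTorch sketch, with placeholder head counts rather than the exact Qwen2 configuration, of how several query heads share each key/value head:

```python
# Illustrative sketch of grouped-query attention (GQA): several query heads share
# each key/value head, shrinking the KV cache relative to full multi-head attention.
# Head counts below are placeholders, not the exact Qwen2-0.5B configuration.
import torch
import torch.nn.functional as F

batch, seq_len, head_dim = 1, 8, 64
n_q_heads, n_kv_heads = 16, 8           # example values
group_size = n_q_heads // n_kv_heads    # query heads per shared KV head

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Expand K/V so each group of query heads attends to the same key/value head.
k = k.repeat_interleave(group_size, dim=1)
v = v.repeat_interleave(group_size, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 16, 8, 64])
```

In this example only 8 of the 16 head slots need cached keys and values, halving the KV cache; the same mechanism underlies the inference-memory savings noted above.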
Ranking applies to Local LLMs. No evaluation benchmarks are available for Qwen2-0.5B, so no Overall Rank or Coding Rank is reported.
VRAM requirements for different quantization methods and context sizes
The full calculator lets you choose the quantization method for the model weights and the context size (for example, 1,024 tokens).
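As a rough illustration of what such a calculator estimates, here is a back-of-the-envelope sketch; the formula, overhead factor, and the 4-bit figure are simplifying assumptions rather than the site's exact method, and the architectural values are taken from the table above:

```python
# Back-of-the-envelope VRAM estimate for a dense decoder-only model.
# The formula and overhead factor are simplifying assumptions, not the exact
# method used by the calculator on this page.

def estimate_vram_gb(
    n_params: float,          # total parameters, e.g. 0.5e9
    bits_per_weight: float,   # 16 for fp16/bf16, 8 for int8, ~4.5 for 4-bit schemes
    n_layers: int,            # transformer layers
    n_kv_heads: int,          # key/value heads (GQA)
    head_dim: int,            # per-head dimension
    context_tokens: int,      # context size in tokens
    kv_bits: float = 16.0,    # KV-cache precision
    overhead: float = 1.2,    # activations, buffers, fragmentation (assumed)
) -> float:
    weight_bytes = n_params * bits_per_weight / 8
    # KV cache: 2 (K and V) * layers * kv_heads * head_dim * context * bytes per element
    kv_bytes = 2 * n_layers * n_kv_heads * head_dim * context_tokens * kv_bits / 8
    return (weight_bytes + kv_bytes) * overhead / 1e9

# Example: 0.5B weights at ~4.5 bits with a 1,024-token context.
# Layers, KV heads, and head_dim (896 hidden / 16 heads = 56) follow the table above.
print(round(estimate_vram_gb(0.5e9, 4.5, 24, 8, 56, 1024), 2), "GB")  # ~0.4 GB
```

The dominant term at this scale is the quantized weights; the KV cache only becomes significant at much longer context sizes.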