Qwen2.5-0.5B

Parameters: 500M

Context Length: 32,768 tokens

Modality: Text

Architecture: Dense

License: Apache 2.0

Release Date: 19 Sept 2024

Knowledge Cutoff: -

Technical Specifications

Attention Structure: Grouped-Query Attention

Hidden Dimension Size: 896

Number of Layers: 24

Attention Heads: 14

Key-Value Heads: 2

Activation Function: SwiGLU

Normalization: RMS Normalization

Position Embedding: RoPE
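
These architecture fields can be read straight from the published checkpoint's configuration. The sketch below is a minimal example using the Hugging Face transformers library; the model ID "Qwen/Qwen2.5-0.5B" is an assumption based on the usual Qwen naming and may differ from wherever you obtain the weights.

```python
# Minimal sketch: inspect the architecture fields of the checkpoint configuration.
# Assumes the Hugging Face transformers library and the model ID "Qwen/Qwen2.5-0.5B".
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-0.5B")

print("hidden size:        ", config.hidden_size)
print("layers:             ", config.num_hidden_layers)
print("query heads:        ", config.num_attention_heads)
print("key-value heads:    ", config.num_key_value_heads)
print("max context length: ", config.max_position_embeddings)
```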

System Requirements

VRAM requirements depend on the weight quantization method and the context size.
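
As a rough guide, weight memory is approximately the parameter count multiplied by the bytes per parameter of the chosen quantization. The sketch below uses the ~0.49B parameter figure quoted on this page; the bytes-per-parameter values are approximate, and real deployments also need memory for activations, the KV cache, and runtime overhead.

```python
# Back-of-the-envelope VRAM estimate for the model weights alone under common
# quantization levels. Figures are approximations, not measured requirements.
num_params = 0.49e9  # approximate parameter count quoted on this page

bytes_per_param = {
    "fp16/bf16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

for name, bpp in bytes_per_param.items():
    gib = num_params * bpp / 2**30
    print(f"{name:>9}: ~{gib:.2f} GiB for weights alone")
```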

Qwen2.5-0.5B

Qwen2.5-0.5B is a foundational large language model developed by the Qwen team at Alibaba Cloud. It is part of the Qwen2.5 series, which improves on earlier Qwen generations in knowledge, coding proficiency, and mathematical reasoning. This variant, with approximately 0.49 billion parameters, is a compact base model, intended primarily as a starting point for subsequent fine-tuning toward specialized applications. Its architecture is engineered to handle language tasks efficiently across multiple languages.

Architecturally, Qwen2.5-0.5B is a dense, decoder-only Transformer model. It incorporates Rotary Position Embedding (RoPE) for effective positional encoding, SwiGLU as its activation function, and RMSNorm for normalization. The attention mechanism utilizes Grouped Query Attention (GQA), specifically configured with 14 query heads and 2 key-value heads for this model size. The model is structured with 24 layers, contributing to its depth and capacity for learning intricate patterns in language data.
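
Grouped Query Attention reduces the key-value cache because the 14 query heads share only 2 key-value heads (7 query heads per KV head). The sketch below compares that footprint against a hypothetical layout with one KV head per query head; the head dimension of 64 is an assumption, as it is not stated on this page.

```python
# Rough KV-cache size comparison: GQA as configured here vs. a hypothetical
# layout where every query head keeps its own keys and values.
num_layers = 24
num_query_heads = 14
num_kv_heads = 2          # GQA: 7 query heads share each key-value head
head_dim = 64             # assumption; not stated on this page
context_len = 32_768
bytes_per_elem = 2        # fp16/bf16

def kv_cache_bytes(kv_heads: int) -> int:
    # 2x accounts for storing both keys and values at every layer and position
    return 2 * num_layers * kv_heads * head_dim * context_len * bytes_per_elem

mha = kv_cache_bytes(num_query_heads)  # one KV head per query head
gqa = kv_cache_bytes(num_kv_heads)     # the configuration described above
print(f"One KV head per query head: {mha / 2**20:.0f} MiB")
print(f"GQA cache:                  {gqa / 2**20:.0f} MiB ({mha / gqa:.0f}x smaller)")
```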

As a causal language model, Qwen2.5-0.5B is suitable for a range of downstream applications following post-training processes such as supervised fine-tuning or reinforcement learning from human feedback. Its capabilities include instruction following, generating extended text sequences, and processing structured data formats like JSON. The model supports a full context length of 32,768 tokens, with the broader Qwen2.5 series capable of handling contexts up to 128,000 tokens and generating outputs up to 8,000 tokens. It offers multilingual support, encompassing over 29 languages.
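
For illustration, a minimal generation sketch with the transformers library follows; it assumes the Hugging Face model ID "Qwen/Qwen2.5-0.5B". Because this is the base checkpoint rather than an instruct variant, it continues a prompt instead of following chat-style instructions.

```python
# Minimal text-continuation sketch for the base checkpoint.
# Assumes the transformers library and the model ID "Qwen/Qwen2.5-0.5B".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Rotary position embeddings work by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```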

About Qwen2.5

Qwen2.5 by Alibaba is a family of dense, decoder-only language models available in various sizes, with some variants utilizing Mixture-of-Experts. These models are pretrained on large-scale datasets, supporting extended context lengths and multilingual communication. The family includes specialized models for coding, mathematics, and multimodal tasks, such as vision and audio processing.


Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for Qwen2.5-0.5B.

Ranking: -
Coding Ranking: -
