
Qwen2-0.5B

Parameters: 0.5B
Context Length: 32,768 tokens
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 7 Jun 2024
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 896
Number of Layers: 24
Attention Heads: 16
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE
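The specifications above can be expressed as a Hugging Face transformers Qwen2Config, shown below as a minimal sketch. Values not listed in the table (vocabulary size, intermediate size, RoPE base, and so on) are omitted here and would fall back to library defaults, which may not match the actual checkpoint.

```python
# Minimal sketch: the listed specifications as a transformers Qwen2Config.
# Omitted fields (vocab_size, intermediate_size, rope_theta, ...) fall back
# to library defaults and are not guaranteed to match the real model.
from transformers import Qwen2Config

config = Qwen2Config(
    hidden_size=896,                # Hidden Dimension Size
    num_hidden_layers=24,           # Number of Layers
    num_attention_heads=16,         # Attention Heads
    num_key_value_heads=8,          # Key-Value Heads (Grouped-Query Attention)
    hidden_act="silu",              # transformers' name for the SwiGLU gate activation
    max_position_embeddings=32768,  # Context Length
)
print(config)
```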

System Requirements

VRAM requirements for different quantization methods and context sizes

Qwen2-0.5B

The Qwen2-0.5B model is a compact yet capable entry in the Qwen2 series of large language models developed by the Qwen team at Alibaba. It is engineered to deliver foundational language processing capabilities efficiently, making it suitable for deployment in environments with constrained computational resources. As a base language model, its primary purpose is to serve as a robust starting point for further specialization through post-training methods such as supervised fine-tuning or reinforcement learning from human feedback.
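As a rough illustration of how the base model might be used for plain text completion, here is a minimal sketch with Hugging Face transformers; it assumes the checkpoint is published on the Hub under the ID Qwen/Qwen2-0.5B.

```python
# Minimal sketch of text completion with the base (non-chat) model.
# Assumes the checkpoint is available as "Qwen/Qwen2-0.5B" on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Base model: plain completion, no chat template.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```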

About Qwen2

The Alibaba Qwen2 model family comprises large language models built on the Transformer architecture. It includes both dense and Mixture-of-Experts (MoE) variants designed for diverse language tasks. Technical features include Grouped-Query Attention, which reduces the memory footprint of the key-value cache during inference, and support for extended context lengths of up to 131,072 tokens on some variants.
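As a back-of-the-envelope illustration of how Grouped-Query Attention trims the inference memory footprint, the sketch below compares KV-cache sizes using this model's listed dimensions; the kv_cache_bytes helper and the FP16 assumption are illustrative, not taken from the Qwen2 codebase.

```python
# Back-of-the-envelope KV-cache size for Qwen2-0.5B's listed dimensions,
# comparing hypothetical full multi-head attention (16 KV heads) with the
# model's grouped-query attention (8 KV heads). FP16 (2 bytes/value) assumed.
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, context_len, bytes_per_val=2):
    # 2 tensors (key and value) per layer, each [context_len, num_kv_heads * head_dim]
    return 2 * num_layers * context_len * num_kv_heads * head_dim * bytes_per_val

head_dim = 896 // 16            # hidden size / attention heads = 56
ctx = 32_768                    # full context length

mha = kv_cache_bytes(24, 16, head_dim, ctx)   # one KV head per query head
gqa = kv_cache_bytes(24, 8, head_dim, ctx)    # GQA: 8 KV heads serve 16 query heads

print(f"MHA cache: {mha / 2**20:.0f} MiB, GQA cache: {gqa / 2**20:.0f} MiB")
# GQA halves the cache here, since the KV-head count is halved.
```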



Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for Qwen2-0.5B.

Rankings

Overall Rank: -
Coding Rank: -

GPU Requirements

Required VRAM and the recommended GPU depend on the quantization method selected for the model weights and on the context size (1k to 32k tokens); see the full calculator.
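In place of the interactive calculator, a rough lower-bound estimate of weight memory under common quantization levels can be sketched as follows; the bytes-per-parameter figures are standard assumptions, and real usage adds KV cache, activations, and runtime overhead on top.

```python
# Rough, illustrative estimate of model-weight VRAM for a 0.5B-parameter model
# under common quantization levels. Real usage adds KV cache, activations, and
# framework overhead, so treat these numbers as lower bounds.
PARAMS = 0.5e9

bytes_per_param = {
    "FP16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

for name, b in bytes_per_param.items():
    gib = PARAMS * b / 2**30
    print(f"{name}: ~{gib:.2f} GiB for weights alone")
```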