
Qwen2.5-3B

Parameters: 3B
Context Length: 32,768 tokens
Modality: Text
Architecture: Dense
License: Qwen Research License Agreement
Release Date: September 19, 2024
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 2048
Number of Layers: 36
Attention Heads: 16
Key-Value Heads: 2
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE

System Requirements

VRAM requirements vary with the quantization method applied to the weights and with the context size in use.
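
As a rough rule of thumb, total VRAM is dominated by the quantized model weights plus the KV cache, which grows linearly with context length. The sketch below illustrates the arithmetic; the bytes-per-parameter values and the 20% runtime overhead factor are illustrative assumptions, not measured figures.

```python
# Rough VRAM estimate for Qwen2.5-3B: quantized weights + KV cache.
# The quantization sizes and overhead factor are illustrative assumptions.
PARAMS = 3.09e9        # total parameter count
LAYERS = 36            # transformer layers
KV_HEADS = 2           # key-value heads (GQA)
HEAD_DIM = 128         # hidden size / attention heads = 2048 / 16
BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "Q4": 0.5}

def vram_gb(quant: str, context_tokens: int, overhead: float = 1.2) -> float:
    weights = PARAMS * BYTES_PER_PARAM[quant]
    # KV cache: key + value tensors per layer, FP16 (2 bytes) per element.
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * context_tokens
    return (weights + kv_cache) * overhead / 1024**3

for quant in BYTES_PER_PARAM:
    for ctx in (1024, 16_384, 32_768):
        print(f"{quant:>4} @ {ctx:>6} tokens: ~{vram_gb(quant, ctx):.1f} GB")
```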

Qwen2.5-3B

Qwen2.5-3B is a foundational large language model developed by Alibaba Cloud, forming a part of the broader Qwen2.5 series. This model is primarily designed for advanced natural language processing tasks, serving as a robust base model that can be further fine-tuned for specific applications. Its core purpose is to process and generate human-like text, with capabilities extended to more complex domains such as programming and mathematical problem-solving through specialized variants.

The architectural design of Qwen2.5-3B is based on the Transformer framework, integrating several key innovations for enhanced performance and efficiency. It incorporates Rotary Position Embedding (RoPE) for effective handling of sequence positions, SwiGLU as its activation function for improved non-linearity, and RMSNorm for stable normalization across layers. The model employs Grouped-Query Attention (GQA), specifically configured with 16 query heads and 2 key-value heads, which optimizes inference efficiency by reducing the memory footprint of key and value caches during sequence generation. Comprising 36 layers and a total of 3.09 billion parameters, this dense architecture is engineered for a balance of capability and computational feasibility.
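The practical effect of GQA is easiest to see in the per-token KV-cache size. A minimal sketch, using the head configuration stated above and assuming FP16 cache storage:

```python
# Per-token KV-cache size: GQA (2 KV heads) vs. standard multi-head
# attention (one KV head per query head). FP16 storage is assumed.
layers = 36
query_heads = 16
kv_heads = 2
head_dim = 2048 // query_heads  # 128, given the 2048 hidden dimension
bytes_fp16 = 2

def kv_bytes_per_token(n_kv_heads: int) -> int:
    # Factor of 2 covers the separate key and value tensors.
    return 2 * layers * n_kv_heads * head_dim * bytes_fp16

mha = kv_bytes_per_token(query_heads)  # baseline: 16 KV heads
gqa = kv_bytes_per_token(kv_heads)     # grouped-query attention: 2 KV heads
print(f"MHA: {mha // 1024} KiB/token, GQA: {gqa // 1024} KiB/token, "
      f"{mha // gqa}x reduction")
```

With 2 key-value heads serving 16 query heads, the cache shrinks by a factor of 8, which is what makes long-context generation practical on modest hardware.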

Qwen2.5-3B supports a substantial context length of up to 32,768 tokens, enabling the processing of extensive textual inputs while maintaining coherence. For certain applications or instruction-tuned versions, it can support contexts up to 128,000 tokens. The model demonstrates proficiency in instruction following and the generation of structured outputs, such as JSON. It offers broad multilingual support, encompassing over 29 languages, making it suitable for global applications requiring diverse language understanding and generation capabilities. Its design focuses on providing a capable foundation for various text-based AI applications.
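A minimal way to exercise the model is through the Hugging Face transformers library. The sketch below assumes the base checkpoint is published under the ID Qwen/Qwen2.5-3B and that a recent transformers release (with Qwen2 support) plus accelerate are installed; since this is the base model rather than the instruction-tuned variant, plain text completion is the appropriate usage.

```python
# Minimal text-completion sketch for Qwen2.5-3B (base model).
# Assumes the checkpoint ID "Qwen/Qwen2.5-3B" and recent transformers/accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native precision
    device_map="auto",   # place weights on available GPU(s)
)

prompt = "The three main components of a Transformer block are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```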

About Qwen2.5

Qwen2.5 by Alibaba is a family of dense, decoder-only language models available in various sizes, with some variants utilizing Mixture-of-Experts. These models are pretrained on large-scale datasets, supporting extended context lengths and multilingual communication. The family includes specialized models for coding, mathematics, and multimodal tasks, such as vision and audio processing.



Evaluation Benchmarks

Rankings apply to local LLMs.

Overall Rank: #39
Benchmark Score: 0.39
Coding Rank: #37

GPU Requirements

Required VRAM and recommended GPUs depend on the quantization method selected for the model weights and on the context size (1k to 32k tokens); see the full calculator for exact estimates.
