
Qwen2.5-72B

Parameters

72B

Context Length

131,072

Modality

Text

Architecture

Dense

License

Qwen License

Release Date

19 Sept 2024

Knowledge Cutoff

Jan 2025

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

12288

Number of Layers

80

Attention Heads

128

Key-Value Heads

8

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

RoPE
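The benefit of grouped-query attention at this scale can be sketched with back-of-envelope arithmetic on the figures above. This is a hypothetical estimate only: the head dimension is assumed to be hidden size ÷ attention heads, and fp16 caching with no paging or quantization is assumed.

```python
# Back-of-envelope KV-cache size using the specs listed above (fp16, no paging).
# head_dim is an assumption derived as hidden_size / attention_heads.
layers, kv_heads, attn_heads = 80, 8, 128
hidden = 12288
head_dim = hidden // attn_heads          # 96
bytes_per_value = 2                      # fp16

def kv_cache_bytes(tokens, n_kv_heads):
    # 2 tensors (K and V) per layer, per KV head, per token
    return 2 * layers * n_kv_heads * head_dim * bytes_per_value * tokens

ctx = 131_072
gqa = kv_cache_bytes(ctx, kv_heads) / 2**30     # grouped-query: 8 KV heads
mha = kv_cache_bytes(ctx, attn_heads) / 2**30   # hypothetical full multi-head
print(f"GQA: {gqa:.0f} GiB, MHA would be: {mha:.0f} GiB")
# GQA: 30 GiB, MHA would be: 480 GiB
```

Under these assumptions, sharing 8 key-value heads across 128 query heads cuts the full-context KV cache by 16×, which is what makes the 131,072-token window practical.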

System Requirements

VRAM requirements for different quantization methods and context sizes

Qwen2.5-72B

Qwen2.5-72B is a core component of the Qwen2.5 series of large language models developed by Alibaba. This model is built upon a Transformer architecture and operates as a causal language model. Its design incorporates Rotary Position Embeddings (RoPE), SwiGLU as the activation function, and RMSNorm for normalization, complemented by an attention mechanism that includes QKV bias. These architectural choices provide a robust foundation for general-purpose language processing tasks.
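Two of these components, RMSNorm and SwiGLU, can be illustrated with a minimal NumPy sketch. This uses toy dimensions and random weights purely for shape and behavior; it is not the production implementation.

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale by the root-mean-square of the features
    # (no mean subtraction and no bias, unlike LayerNorm)
    rms = np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def swiglu(x, W_gate, W_up, W_down):
    # SwiGLU feed-forward: a SiLU-gated linear unit
    silu = lambda z: z / (1.0 + np.exp(-z))
    return (silu(x @ W_gate) * (x @ W_up)) @ W_down

# Toy dimensions; the real model uses the hidden size listed above
d, d_ff = 8, 16
rng = np.random.default_rng(0)
x = rng.standard_normal((2, d))
out = swiglu(rms_norm(x, np.ones(d)),
             rng.standard_normal((d, d_ff)),
             rng.standard_normal((d, d_ff)),
             rng.standard_normal((d_ff, d)))
print(out.shape)  # (2, 8)
```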

Compared with its predecessor Qwen2, Qwen2.5-72B handles complex knowledge more capably, excelling in areas such as coding and mathematics. It also follows instructions more reliably, adapting better to diverse user prompts and conditional scenarios, and its design targets practical applications that demand high fidelity in output generation.

This model is engineered for extensive text processing, supporting context lengths up to 131,072 tokens and generating outputs up to 8,192 tokens. It is proficient in generating long-form content, understanding structured data formats like tables, and producing structured outputs such as JSON. Additionally, Qwen2.5-72B provides multilingual support across more than 29 languages, making it suitable for a wide array of content generation, coding assistance, and advanced artificial intelligence applications like chatbots and virtual assistants.

About Qwen2.5

Qwen2.5 by Alibaba is a family of decoder-only language models available in various sizes; most are dense, while some variants use Mixture-of-Experts. These models are pretrained on large-scale datasets and support extended context lengths and multilingual communication. The family includes specialized models for coding, mathematics, and multimodal tasks such as vision and audio processing.



Evaluation Benchmarks

Rankings apply to local LLMs.

Overall rank: #22

Professional Knowledge (MMLU Pro): 0.71, rank 9
Graduate-Level QA (GPQA): 0.49, rank 16
Agentic Coding (LiveBench Agentic): 0.03, rank 17
General Knowledge (MMLU): 0.49, rank 24

Other benchmark scores (names not shown in the source): 0.65 (rank 4), 0.65 (rank 7), 0.89 (rank 7), 0.94 (rank 7), 0.74 (rank 7), 0.57 (rank 14), 0.52 (rank 18), 0.52 (rank 19), 0.34 (rank 21)

Rankings

Overall rank: #22
Coding rank: #7

GPU Requirements

Required VRAM depends on the chosen quantization method for the model weights and on the context size (from 1K up to 128K tokens).
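As a rough guide, the VRAM needed for the weights alone can be estimated from the parameter count and bits per weight. This is a hypothetical sketch using nominal bit widths; it ignores the KV cache, activations, and framework overhead, all of which add substantially on top.

```python
# Rough weights-only VRAM estimate for a 72B-parameter model.
# Excludes KV cache, activations, and runtime overhead.
PARAMS = 72e9

def weight_vram_gib(bits_per_weight):
    return PARAMS * bits_per_weight / 8 / 2**30

for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: ~{weight_vram_gib(bits):.0f} GiB")  # FP16 comes to ~134 GiB
```

Even 4-bit weights land above the capacity of a single 24 GB consumer GPU, which is why multi-GPU or high-memory accelerator setups are typically recommended for this model.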