
GLM-5

Total Parameters: 744B
Context Length: 204.8K
Modality: Multimodal
Architecture: Mixture of Experts (MoE)
License: MIT
Release Date: 12 Feb 2026
Training Data Cutoff: Dec 2025

Technical Specifications

Active Parameters per Token: 40.0B
Number of Experts: 256
Active Experts: 8
Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: 80
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: RMS Normalization
Position Embedding: Absolute Position Embedding

GLM-5

GLM-5 is a flagship multimodal foundation model developed by Z.ai, designed for complex systems engineering and long-horizon agentic workflows. Utilizing a Mixture-of-Experts (MoE) architecture, the model scales to 744 billion total parameters with approximately 40 billion parameters activated per token. This design facilitates high-capacity reasoning and specialized knowledge retrieval while maintaining the computational efficiency required for large-scale deployment. The model is trained on a massive 28.5 trillion token corpus, emphasizing high-quality code, technical documentation, and reasoning-dense data to support professional-grade software development and autonomous problem-solving.
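The 256-expert, 8-active routing pattern described above can be sketched in a few lines. This is a toy illustration of top-k MoE routing under the spec-table figures, not Z.ai's actual implementation; the gating logits and expert functions are stand-ins.

```python
import math

NUM_EXPERTS = 256   # total routed experts (from the spec table)
TOP_K = 8           # experts activated per token (from the spec table)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_logits, top_k=TOP_K):
    """Pick the top-k experts for one token and renormalize their gate weights."""
    ranked = sorted(range(len(gate_logits)),
                    key=lambda i: gate_logits[i], reverse=True)
    chosen = ranked[:top_k]
    weights = softmax([gate_logits[i] for i in chosen])
    return list(zip(chosen, weights))

def moe_forward(token_vec, gate_logits, experts):
    """Weighted sum of the outputs of only the selected experts."""
    out = [0.0] * len(token_vec)
    for idx, w in route(gate_logits):
        expert_out = experts[idx](token_vec)
        out = [o + w * e for o, e in zip(out, expert_out)]
    return out
```

Because only 8 of the 256 experts run per token, per-token compute tracks the ~40B activated parameters rather than the full 744B, which is what makes this capacity/efficiency trade-off work.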

Technically, GLM-5 introduces several architectural innovations, most notably the integration of DeepSeek Sparse Attention (DSA). This mechanism optimizes the standard attention block by dynamically allocating computational resources, which significantly reduces the memory and compute overhead associated with processing long sequences. Additionally, the model leverages an asynchronous reinforcement learning infrastructure known as 'slime' during post-training. This framework decouples generation from training to improve iteration throughput, allowing the model to learn effectively from complex, multi-step interactions and dynamic environments.
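As a rough illustration of the sparse-attention idea, each query can restrict softmax attention to its k highest-scoring keys. This is a naive top-k sketch, not the actual DeepSeek Sparse Attention algorithm:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sparse_attention(query, keys, values, k=2):
    """Attend only to the top-k keys by scaled score, instead of all keys.

    Toy sketch: real sparse-attention kernels (including DSA) use a cheaper
    selection/indexing stage before full scoring, which this version omits.
    """
    scale = 1.0 / math.sqrt(len(query))
    scores = [dot(query, key) * scale for key in keys]
    top = sorted(range(len(keys)), key=lambda i: scores[i], reverse=True)[:k]
    # softmax over the selected subset only
    m = max(scores[i] for i in top)
    exps = {i: math.exp(scores[i] - m) for i in top}
    z = sum(exps.values())
    out = [0.0] * len(values[0])
    for i in top:
        w = exps[i] / z
        out = [o + w * v for o, v in zip(out, values[i])]
    return out
```

Note that this naive form still scores every key before selecting, so it only shrinks the mixing step; production kernels avoid the full scoring pass, which is where the long-context memory and compute savings actually come from.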

Optimized for long-context stability, GLM-5 supports a context window of up to 204,800 tokens and is capable of generating up to 128,000 tokens in a single output. Its operational capabilities include advanced tool-use, real-time streaming, and structured output across frontend, backend, and data processing tasks. The model is released with open weights under the MIT License, enabling researchers and developers to perform local serving, fine-tuning, and integration into diverse agentic frameworks without vendor lock-in.
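Since the weights are open and such models are commonly served behind OpenAI-compatible endpoints, a tool-use request might be assembled as below. The model identifier, tool schema, and field values are illustrative assumptions, not documented GLM-5 API details:

```python
import json

def build_tool_call_request(user_message):
    """Assemble a hypothetical OpenAI-compatible chat payload with one tool."""
    return {
        "model": "glm-5",                 # placeholder model identifier
        "stream": True,                   # real-time streaming, as described above
        "max_tokens": 128000,             # up to 128K generated tokens
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "run_tests",      # hypothetical tool the agent may invoke
                "description": "Run the project's test suite and report failures.",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }],
    }

payload = build_tool_call_request("Fix the failing unit test in utils/date.py")
print(json.dumps(payload, indent=2))
```

The same payload shape works for structured-output and multi-step agentic loops: the serving layer returns tool-call deltas over the stream, and the client feeds tool results back as new messages.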

About GLM 5

GLM 5 is the fifth generation of General Language Models developed by Z.ai. It represents a significant leap in multimodal foundation capabilities, featuring advanced reasoning and long-horizon agentic behavior across diverse systems engineering tasks.


Other GLM 5 Models
  • No related models

Evaluation Benchmarks

Overall Rank: #16

Benchmark scores and rankings:
  • Agentic Coding (LiveBench Agentic): score 0.55, rank #3 🥉
  • Web Development (WebDev Arena): score 1455, rank #6
  • (unnamed benchmark): score 0.83, rank #10
  • (unnamed benchmark): score 0.69, rank #15

Rankings
  Overall Rank: #16
  Coding Rank: #15

Model Transparency
  Overall Score: B+ (79 / 100)

GPU Requirements

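A back-of-envelope estimate of the VRAM needed just to hold the 744B weights at common quantization levels (ignoring KV cache, activations, and runtime overhead, which add substantially more):

```python
def weight_vram_gb(total_params_billions, bits_per_param):
    """Approximate memory for model weights alone, in gigabytes (10^9 bytes)."""
    return total_params_billions * 1e9 * bits_per_param / 8 / 1e9

# GLM-5's 744B total parameters at a few common precisions
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weight_vram_gb(744, bits):,.0f} GB")
```

Even at 4-bit quantization the weights alone need roughly 372 GB (744B x 0.5 bytes), so multi-GPU serving is unavoidable regardless of context length.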

GLM-5: Specifications and GPU VRAM Requirements