
GLM-4

Parameters: 32B
Context Length: 128K
Modality: Text
Architecture: Dense
License: Custom Commercial License with Restrictions
Release Date: 15 Jan 2024
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding
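The spec table publishes only two architectural details: multi-head attention and an absolute position embedding. The PyTorch sketch below shows how those two pieces typically fit together in a decoder block. The hidden size, head count, and maximum length used here are placeholders, since the real values for GLM-4 32B are not listed on this page.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    """Multi-head self-attention with a learned absolute position
    embedding. All dimensions are illustrative placeholders, not
    GLM-4 32B's actual (unpublished) configuration."""

    def __init__(self, d_model: int = 4096, n_heads: int = 32,
                 max_len: int = 131072):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Absolute position embedding: one learned vector per position,
        # added to the hidden states before attention.
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        x = x + self.pos_emb(torch.arange(t, device=x.device))
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim) so each head attends
        # over its own slice of the hidden dimension.
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(y.transpose(1, 2).reshape(b, t, d))
```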

System Requirements

VRAM requirements for different quantization methods and context sizes
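A back-of-the-envelope estimate is straightforward: weight memory is roughly the parameter count times the bytes per weight implied by the quantization, plus runtime overhead and a KV cache that grows with context length. The sketch below covers the weights-only part; the overhead factor is an assumption, and the KV cache is omitted because sizing it requires the unpublished layer and key-value head counts.

```python
def estimate_weight_vram_gb(n_params_billion: float = 32.0,
                            bits_per_weight: int = 4,
                            overhead: float = 1.2) -> float:
    """Weights-only VRAM estimate.

    bits_per_weight: 16 for FP16/BF16, 8 for INT8, 4 for 4-bit quants.
    overhead: assumed multiplier for runtime buffers; the KV cache,
    which grows linearly with context length (up to 128K tokens here),
    is excluded because it depends on unpublished layer/head counts.
    """
    # 1e9 params * (bits / 8) bytes, expressed in GB, cancels to this:
    weight_gb = n_params_billion * bits_per_weight / 8
    return weight_gb * overhead

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{estimate_weight_vram_gb(bits_per_weight=bits):.0f} GB")
```

For a 32B dense model this gives roughly 77 GB at 16-bit, 38 GB at 8-bit, and 19 GB at 4-bit, before any context-dependent KV cache.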

GLM-4

The GLM-4 32B model is a member of the GLM-4 series of language models developed by Z.ai. It is a foundation model built for language understanding and generation across a wide range of applications, and it serves as the base for further specialized models within the GLM-4 family.

GLM-4 32B is pre-trained on approximately 15 trillion tokens of high-quality data, including a significant share of synthetic reasoning data. Post-training combines human preference alignment, rejection sampling, and reinforcement learning to refine instruction following, code generation, and function calling, strengthening the atomic capabilities that complex agent-based applications depend on. The architecture supports a context length of up to 128,000 tokens.
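The page names rejection sampling among the post-training techniques without describing Z.ai's implementation. As a minimal, generic illustration of the idea: sample several completions per prompt, score them with a reward model, and keep only the best-scoring ones for a further round of supervised fine-tuning. The `generate` and `reward` callables below are hypothetical stand-ins, not anything from the GLM-4 codebase.

```python
from typing import Callable, List, Tuple

def rejection_sample_sft_pairs(
    prompts: List[str],
    generate: Callable[[str, int], List[str]],  # hypothetical model sampler
    reward: Callable[[str, str], float],        # hypothetical reward model
    n_candidates: int = 8,
    min_reward: float = 0.0,
) -> List[Tuple[str, str]]:
    """Sample n_candidates completions per prompt, keep only the
    highest-reward one, and drop prompts whose best completion still
    scores below min_reward. The surviving (prompt, completion) pairs
    feed a further round of supervised fine-tuning."""
    accepted: List[Tuple[str, str]] = []
    for prompt in prompts:
        candidates = generate(prompt, n_candidates)
        scored = [(reward(prompt, c), c) for c in candidates]
        best_score, best = max(scored)
        if best_score >= min_reward:
            accepted.append((prompt, best))
    return accepted
```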

Practical use cases include engineering code generation, artifact creation, robust function calling, search-based question answering, and report generation. Development emphasizes reliable performance in scenarios that demand complex linguistic processing and logical inference, making the model suitable for systems that require sophisticated natural language processing and agentic behavior; a minimal loading sketch follows below.
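For readers who want to try the model, a typical loading path uses Hugging Face transformers, as sketched here. The repository id is an assumption based on the naming of public GLM-4 releases; check the official release for the exact id, license terms, and required transformers version.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: repo id follows the public GLM-4 naming; verify before use.
model_id = "THUDM/GLM-4-32B-0414"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # BF16 weights alone occupy roughly 64 GB
    device_map="auto",    # shard across available GPUs
)

messages = [{"role": "user",
             "content": "Write a Python function that validates an ISO-8601 date string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```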

About the GLM Family

General Language Models from Z.ai



Evaluation Benchmarks

Rankings apply to local LLMs. No evaluation benchmarks are available for GLM-4.

Rank: -
Coding Rank: -
