
GLM-4

Parameters: 32B

Context Length: 128K

Modality: Text

Architecture: Dense

License: Custom Commercial License with Restrictions

Release Date: 15 Jan 2024

Knowledge Cutoff: -

Technical Specifications

Attention Structure: Multi-Head Attention

Hidden Dimension Size: -

Number of Layers: -

Attention Heads: -

Key-Value Heads: -

Activation Function: -

Normalization: -

Position Embedding: Absolute Position Embedding

System Requirements

VRAM requirements for different quantization methods and context sizes

GLM-4

The GLM-4 32B parameter model is a member of the GLM-4 series of language models developed by Z.ai. This foundation model is built for language understanding and generation across a variety of applications, serves as the base for further specialized models within the GLM-4 family, and is broadly applicable to text-based tasks.

From a technical perspective, GLM-4 32B was pre-trained on a large, high-quality corpus of approximately 15 trillion tokens, including a significant portion of synthetic reasoning data. The post-training phase applies human preference alignment, rejection sampling, and reinforcement learning to refine the model's instruction following, code generation, and function calling, strengthening the core capabilities needed for complex agent-based applications. The architecture supports a context length of up to 128,000 tokens.
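As a rough illustration of how such a checkpoint might be loaded and prompted, here is a minimal inference sketch using Hugging Face transformers. The repository name is an assumption (this page does not list one), and depending on the checkpoint, trust_remote_code=True may also be required.

```python
# Minimal inference sketch. The model ID below is an assumed/hypothetical
# checkpoint name, not a value taken from this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/GLM-4-32B-0414"  # assumption: adjust to the actual checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to reduce VRAM use
    device_map="auto",           # spread layers across available GPUs
)

messages = [{"role": "user", "content": "Summarize the GLM-4 series in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```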

The model targets a range of practical use cases, including engineering code generation, artifact creation, function calling, search-based question answering, and report generation. Its development emphasizes reliable performance in scenarios that demand intricate linguistic processing and logical inference, making it suitable for integration into systems that require sophisticated natural language processing and agentic behavior.
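The function calling ability mentioned above can be exercised through any OpenAI-compatible serving stack (for example, vLLM). The sketch below is illustrative only: the base URL, served model name, and the weather tool are assumptions, not details from this page.

```python
# Hedged sketch of function calling against a locally served GLM-4 model
# exposed through an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # assumed local server

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4",  # assumed served model name
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```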

About GLM Family

General Language Models from Z.ai



Evaluation Benchmarks

No evaluation benchmarks for GLM-4 are available. The rankings below are relative to local LLMs.

Rankings

Overall Rank: -

Coding Rank: -

GPU Requirements

An interactive calculator (see the Full Calculator) lets you choose a quantization method for the model weights and a context size from 1k up to 125k tokens, then reports the VRAM required and recommended GPUs.
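For a rough sense of what such a calculator reports, the sketch below estimates VRAM for the 32B model from parameter count, weight precision, and context size. The KV-cache cost per token and the overhead factor are assumptions, since layer and head counts are not listed on this page.

```python
# Back-of-the-envelope VRAM estimate for a 32B dense model.
# The kv_bytes_per_token and overhead values are rough assumptions,
# not figures from this page.
def estimate_vram_gb(params_b=32, bits_per_weight=4, context_tokens=8192,
                     kv_bytes_per_token=0.5e6, overhead=1.15):
    """Return an approximate VRAM requirement in GB.

    params_b           -- parameter count in billions
    bits_per_weight    -- 16 (FP16/BF16), 8 (INT8), 4 (4-bit quantization), ...
    kv_bytes_per_token -- assumed KV-cache cost per token (depends on layer and
                          head counts, which this page does not list)
    overhead           -- activations, buffers, and framework overhead
    """
    weight_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    kv_gb = context_tokens * kv_bytes_per_token / 1e9
    return (weight_gb + kv_gb) * overhead

for bits in (16, 8, 4):
    print(f"{bits}-bit weights, 8K context: ~{estimate_vram_gb(bits_per_weight=bits):.0f} GB")
```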
