
GLM-4-9B-Chat-1M

Parameters: 9B
Context Length: 1M (1,048,576 tokens)
Modality: Text
Architecture: Dense
License: MIT License
Release Date: 30 Jun 2024
Knowledge Cutoff: Jan 2024

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 4096
Number of Layers: 40
Attention Heads: 32
Key-Value Heads: 2
Activation Function: SwiGLU
Normalization: RMS Normalization
Position Embedding: Rotary Position Embedding (RoPE) with YaRN scaling
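
The head counts above imply grouped-query attention: 32 query heads share only 2 key-value heads. A quick sanity check, using only numbers from the table:

```python
# Attention geometry derived purely from the specification table above.
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 2

head_dim = hidden_size // num_attention_heads            # 128 dims per head
group_size = num_attention_heads // num_key_value_heads  # 16 query heads per KV head

# Relative to full multi-head attention, the K/V projections and the KV cache
# shrink by the grouping factor, which is what makes 1M-token contexts tractable.
print(f"head_dim={head_dim}, queries per KV head={group_size}, "
      f"KV cache reduction={num_attention_heads // num_key_value_heads}x")
```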

GLM-4-9B-Chat-1M

GLM-4-9B-Chat-1M is a specialized large language model within the GLM-4 family, developed by Zhipu AI to address the complexities of ultra-long sequence processing. This model variant is distinguished by its massive context window of 1,048,576 tokens, allowing it to ingest and reason over entire libraries of technical documentation, legal contracts, or multi-hour conversation transcripts. As a chat-optimized model, it is fine-tuned to follow complex instructions and engage in nuanced human-machine interactions while supporting integrated tool use such as web browsing and code execution.

Technically, the model uses a dense transformer architecture with 40 layers and a hidden dimensionality of 4096. To achieve its million-token context capacity, it employs a positional encoding scheme combining Rotary Position Embeddings (RoPE) with the YaRN (Yet another RoPE extensioN) scaling method. This configuration enables the model to maintain high retrieval accuracy across its entire context window, a capability typically verified through needle-in-a-haystack evaluations. The architecture further incorporates RMSNorm for stable layer normalization and a SwiGLU-gated feed-forward network to improve the network's expressive power.
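
To make the feed-forward block concrete, here is a minimal PyTorch sketch of a SwiGLU gate; the intermediate width of 13696 is an assumed illustrative value, not a figure taken from this page:

```python
# Minimal sketch of a SwiGLU feed-forward block as described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: SiLU-gated linear unit, silu(W_gate x) * (W_up x)
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

# hidden_size matches the table; intermediate_size is an assumption for illustration.
ffn = SwiGLUFeedForward(hidden_size=4096, intermediate_size=13696)
print(ffn(torch.randn(1, 8, 4096)).shape)  # torch.Size([1, 8, 4096])
```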

Operational flexibility is a core attribute of GLM-4-9B-Chat-1M: the weights are openly released under the permissive license listed above (MIT), with the accompanying code available under Apache 2.0. The model is compatible with the Hugging Face Transformers library and vLLM, facilitating deployment in environments ranging from local research workstations to production inference servers. Its multilingual capabilities extend to 26 languages, making it a versatile asset for global applications requiring deep semantic understanding and long-form document synthesis.
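
A minimal loading sketch with Hugging Face Transformers follows. The hub id THUDM/glm-4-9b-chat-1m is an assumption here and should be checked against the official model card; trust_remote_code is typically required because the repository ships custom modeling code:

```python
# Hedged usage sketch: loading the chat model with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat-1m"  # assumed hub id; verify before running
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 9B weights near 18 GB
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize the attached contract."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```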


Evaluation Benchmarks

No evaluation benchmarks are currently available for GLM-4-9B-Chat-1M.


GPU Requirements

VRAM requirements depend on the chosen weight quantization and on the active context size, which can range from about 1K tokens up to the full ~1M-token window.
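
A rough estimate can be computed from the specification table above. The sketch below counts only fp16 weights and KV cache, ignoring activations, framework overhead, and quantization, so treat the figures as illustrative lower bounds; it also shows why having only 2 KV heads matters at million-token contexts:

```python
# Back-of-the-envelope VRAM estimate for GLM-4-9B-Chat-1M.
# Weight count and attention geometry come from the specification table;
# byte widths are standard for fp16/bf16 inference.
params = 9e9                 # 9B parameters
bytes_per_weight = 2         # fp16 / bf16 weights
num_layers = 40
num_kv_heads = 2
head_dim = 4096 // 32        # hidden size / attention heads = 128
bytes_per_kv = 2             # fp16 KV cache

def vram_gib(context_tokens: int) -> float:
    weights = params * bytes_per_weight
    # KV cache: keys + values, per layer, per KV head, per cached token.
    kv_cache = 2 * num_layers * num_kv_heads * head_dim * context_tokens * bytes_per_kv
    return (weights + kv_cache) / 2**30

for ctx in (8_192, 131_072, 1_048_576):
    print(f"{ctx:>9} tokens -> ~{vram_gib(ctx):.1f} GiB (weights + KV cache)")
```

At the full 1,048,576-token window this works out to roughly 40 GiB of KV cache on top of ~17 GiB of weights; with 32 KV heads instead of 2, the cache alone would be about 16 times larger.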
