
GLM-130B

Parameters

130B

Context Length

2,048

Modality

Text

Architecture

Dense

License

Apache 2.0

Release Date

4 Aug 2022

Knowledge Cutoff

Jul 2022

Technical Specifications

Attention Structure

Multi-Head Attention

Hidden Dimension Size

12288

Number of Layers

70

Attention Heads

-

Key-Value Heads

-

Activation Function

GELU

Normalization

Deep Normalization

Position Embedding

Rotary Position Embedding (RoPE)


GLM-130B

GLM-130B is a bidirectional dense model with 130 billion parameters, developed for both English and Chinese language processing. It is pre-trained with the General Language Model (GLM) algorithm, which uses an autoregressive blank-infilling objective: random contiguous spans of text are masked, and the model then predicts the masked spans autoregressively. This objective underpins its performance across a range of natural language processing tasks, including text comprehension, generation, and translation.
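
To make the blank-infilling idea concrete, the following is a minimal toy sketch, not the original training code: span sampling, span lengths, and the special-token names ([MASK], [sop], [eop]) are illustrative assumptions.

```python
import random

def glm_blank_infilling(tokens, mask_ratio=0.15, max_span_len=3):
    """Toy sketch of GLM-style autoregressive blank infilling.

    Contiguous spans are removed from the input and replaced by [MASK]
    (Part A); the removed spans become autoregressive prediction targets
    (Part B), each opened with [sop] and closed with [eop]. The sampling
    scheme and token names are assumptions for illustration only.
    """
    n_to_mask = max(1, int(len(tokens) * mask_ratio))
    spans, covered, masked = [], set(), 0
    for _ in range(100):  # bounded attempts keep the toy loop finite
        if masked >= n_to_mask:
            break
        length = max(1, min(max_span_len, n_to_mask - masked))
        start = random.randrange(0, len(tokens) - length + 1)
        span = set(range(start, start + length))
        if covered.isdisjoint(span):
            spans.append((start, length))
            covered |= span
            masked += length
    spans.sort()

    # Part A: the corrupted sequence, each span collapsed to one [MASK].
    part_a, i = [], 0
    for start, length in spans:
        part_a += tokens[i:start] + ["[MASK]"]
        i = start + length
    part_a += tokens[i:]

    # Part B: the masked spans, generated autoregressively after Part A.
    part_b = []
    for start, length in spans:
        part_b += ["[sop]"] + tokens[start:start + length] + ["[eop]"]
    return part_a, part_b

tokens = "GLM 130B is pretrained with an autoregressive blank infilling objective".split()
part_a, part_b = glm_blank_infilling(tokens)
print(part_a)
print(part_b)
```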

The architectural design of GLM-130B incorporates specific choices to improve training stability and inference efficiency at this scale. It uses Rotary Positional Encoding (RoPE) for positional information and combines the Gated Linear Unit (GLU) with the Gaussian Error Linear Unit (GeLU) activation (GeGLU) in its feed-forward networks (FFNs). It also employs DeepNorm, a Post-Layer Normalization (Post-LN) scheme that has been shown to stabilize the training of very deep Transformers.
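
To illustrate these two components, here is a minimal PyTorch-style sketch, not the released implementation: the class names, dimensions, and the alpha value in the usage example are assumptions (in DeepNorm, alpha is set as a function of model depth rather than fixed by hand).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeGLUFeedForward(nn.Module):
    """Feed-forward block using a GeLU-gated linear unit (GeGLU)."""

    def __init__(self, hidden_size: int, ffn_size: int):
        super().__init__()
        # Two projections: one passed through GeLU as a gate,
        # one kept linear, combined by elementwise multiplication.
        self.w_gate = nn.Linear(hidden_size, ffn_size)
        self.w_up = nn.Linear(hidden_size, ffn_size)
        self.w_down = nn.Linear(ffn_size, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.gelu(self.w_gate(x)) * self.w_up(x))


class DeepNormResidual(nn.Module):
    """Post-LN residual in the DeepNorm style:
    LayerNorm(alpha * x + sublayer(x)).
    Here alpha is a plain constructor argument; the DeepNorm recipe
    derives it from the number of layers.
    """

    def __init__(self, hidden_size: int, sublayer: nn.Module, alpha: float):
        super().__init__()
        self.sublayer = sublayer
        self.alpha = alpha
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(self.alpha * x + self.sublayer(x))


# Toy usage with small, illustrative sizes (not GLM-130B's real dimensions).
block = DeepNormResidual(hidden_size=64, sublayer=GeGLUFeedForward(64, 256), alpha=2.0)
out = block(torch.randn(2, 8, 64))
print(out.shape)  # torch.Size([2, 8, 64])
```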

GLM-130B supports fast inference, making it suitable for real-time, large-scale language processing tasks. It is designed to run inference on a single server with 8 × A100 (40 GB) or 8 × V100 (32 GB) GPUs. Further optimizations, such as INT4 quantization, enable efficient inference on more accessible hardware, including a single server with 4 × RTX 3090 (24 GB) GPUs, with minimal performance degradation. The model was trained on over 400 billion text tokens, split evenly between English and Chinese data.
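
As a rough illustration of why INT4 quantization brings the model within reach of smaller servers, the sketch below estimates weight-only memory for 130B parameters at a few common precisions. It ignores activations, KV cache, and framework overhead, so the figures are lower bounds rather than exact requirements.

```python
PARAMS = 130e9  # GLM-130B parameter count

BYTES_PER_PARAM = {
    "FP16/BF16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision:>9}: ~{gib:,.0f} GiB of weights")

# FP16/BF16: ~242 GiB -> spread across 8 x A100 40 GB
# INT8:      ~121 GiB -> spread across 8 x V100 32 GB
# INT4:       ~61 GiB -> fits within 4 x RTX 3090 24 GB (96 GB total)
```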

About GLM Family

General Language Models from Z.ai



Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks for GLM-130B available.

Rankings

Overall Rank

-

Coding Rank

-
