
GLM-130B

Parameters

130B

Context Length

2,048 tokens

Modality

Text

Architecture

Dense

License

Apache 2.0

Release Date

4 Aug 2022

Knowledge Cutoff

Jul 2022

Technical Specifications

Attention Structure

Multi-Head Attention

Hidden Dimension

12,288

Layers

70

Attention Heads

-

Key-Value Heads

-

Activation Function

GeGLU (GELU-gated GLU)

Normalization

DeepNorm (Post-LN)

Position Embedding

Rotary Position Embedding (RoPE)

System Requirements

VRAM requirements for different quantization methods and context sizes

GLM-130B

GLM-130B is a bilingual dense model with 130 billion parameters, developed for both English and Chinese language processing. It is pre-trained with the General Language Model (GLM) algorithm, which uses an autoregressive blank-infilling objective: random contiguous spans of text are masked, and the model learns to predict the masked spans autoregressively. This objective supports a range of natural language processing tasks, including text comprehension, generation, and translation.
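The blank-infilling objective can be sketched in a few lines. This is a toy illustration, not GLM's actual implementation; the function name and the `[MASK]`/`[EOS]` token strings are placeholders:

```python
def blank_infill(tokens, spans, mask_token="[MASK]"):
    """Toy sketch of autoregressive blank infilling: replace each
    (start, length) span with a mask token and collect the masked
    spans as generation targets."""
    corrupted, targets = [], []
    i = 0
    for start, length in sorted(spans):
        corrupted.extend(tokens[i:start])
        corrupted.append(mask_token)
        # Each masked span is later predicted token-by-token, ending in [EOS].
        targets.append(tokens[start:start + length] + ["[EOS]"])
        i = start + length
    corrupted.extend(tokens[i:])
    return corrupted, targets

tokens = ["GLM", "is", "trained", "with", "blank", "infilling"]
corrupted, targets = blank_infill(tokens, [(2, 2)])
# corrupted == ['GLM', 'is', '[MASK]', 'blank', 'infilling']
# targets   == [['trained', 'with', '[EOS]']]
```

During pre-training the model conditions on the corrupted sequence and generates each target span autoregressively, combining bidirectional context encoding with autoregressive generation.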

The architecture of GLM-130B incorporates several design choices aimed at training stability and inference efficiency at this scale. It uses Rotary Positional Encoding (RoPE) for positional information and combines the Gated Linear Unit (GLU) with the GELU activation (GeGLU) in its feed-forward networks (FFNs). For layer normalization it employs DeepNorm, a Post-Layer Normalization (Post-LN) scheme that has been shown to stabilize the training of large language models.
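As a concrete illustration of the GeGLU feed-forward block, here is a minimal NumPy sketch. The dimensions and random weights are arbitrary placeholders; this is not GLM-130B's actual code:

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def geglu_ffn(x, w_gate, w_up, w_down):
    """GeGLU feed-forward block: a GELU-activated gate multiplied
    elementwise with a linear projection, then projected back down.
    Shapes: x (d_model,), w_gate/w_up (d_model, d_ff), w_down (d_ff, d_model)."""
    return (gelu(x @ w_gate) * (x @ w_up)) @ w_down

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32  # illustrative sizes; GLM-130B's hidden size is 12,288
x = rng.standard_normal(d_model)
out = geglu_ffn(
    x,
    rng.standard_normal((d_model, d_ff)),
    rng.standard_normal((d_model, d_ff)),
    rng.standard_normal((d_ff, d_model)),
)
assert out.shape == (d_model,)
```

The gating path lets the network modulate each FFN channel, which in practice improves quality over a plain GELU MLP at the same parameter count.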

GLM-130B supports fast inference, making it suitable for large-scale, low-latency language processing. It is designed for inference on a single server with 8× A100 (40 GB) or 8× V100 (32 GB) GPUs. Further optimizations, such as INT4 quantization, allow efficient inference on more accessible hardware, including a single server with 4× RTX 3090 (24 GB) GPUs, with minimal performance degradation. The model was trained on over 400 billion text tokens, split evenly between English and Chinese.
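A back-of-the-envelope check shows why INT4 makes the 4× RTX 3090 configuration feasible. This counts weights only (the function name is ours; activations, KV cache, and framework overhead are ignored):

```python
def vram_gib(n_params, bits_per_weight):
    """Weight-only VRAM estimate in GiB; ignores activations,
    KV cache, and framework overhead."""
    return n_params * bits_per_weight / 8 / 2**30

PARAMS = 130e9
for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {vram_gib(PARAMS, bits):.0f} GiB")
# FP16 weights need ~242 GiB (fits in 8 x 40 GB = 320 GB);
# INT4 weights need ~61 GiB (fits in 4 x 24 GB = 96 GB).
```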

About the GLM Family

General Language Models from Z.ai


Other GLM Family Models

Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for GLM-130B.

