Parameters: 130B
Context Length: 2,048 tokens (2K)
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 4 Aug 2022
Knowledge Cutoff: Jul 2022
Attention Structure: Multi-Head Attention
Hidden Dimension Size: 12,288
Number of Layers: 70
Attention Heads: -
Key-Value Heads: -
Activation Function: GELU (within GeGLU feed-forward layers)
Normalization: DeepNorm (Post-Layer Normalization)
Position Embedding: Rotary Position Embedding (RoPE)
VRAM requirements by quantization method and context size
GLM-130B is a bilingual (English and Chinese) bidirectional dense model with 130 billion parameters. It is pre-trained with the General Language Model (GLM) algorithm, whose autoregressive blank-infilling objective masks random contiguous spans of text and then predicts the masked segments autoregressively. This objective supports a range of natural language processing tasks, including text comprehension, generation, and translation.
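As a rough illustration of the blank-infilling objective described above, the sketch below builds a single training example in the GLM style: sampled spans are replaced by [MASK] in the input (Part A) and appended as generation targets (Part B). The sentinel names [MASK], [sop], and [eop], the span-sampling policy, and the omission of GLM's span shuffling and 2D positional encoding are simplifications for illustration, not the exact training code.

```python
import random

MASK, SOP, EOP = "[MASK]", "[sop]", "[eop]"  # sentinel tokens following the GLM convention

def blank_infilling_example(tokens, num_spans=2, max_span_len=4, seed=0):
    """Build one simplified autoregressive blank-infilling example.

    Part A: the input with each sampled span replaced by a single [MASK].
    Part B: the masked spans, each framed by [sop] ... [eop], which the model
    must generate autoregressively while attending bidirectionally to Part A.
    """
    rng = random.Random(seed)
    tokens = list(tokens)
    spans, used = [], set()
    while len(spans) < num_spans:
        length = rng.randint(1, max_span_len)
        start = rng.randint(0, max(0, len(tokens) - length))
        positions = set(range(start, start + length))
        if positions & used:
            continue  # keep sampled spans non-overlapping
        used |= positions
        spans.append((start, start + length))
    spans.sort()

    # Part A: replace each span with a single [MASK] token.
    part_a, cursor = [], 0
    for start, end in spans:
        part_a += tokens[cursor:start] + [MASK]
        cursor = end
    part_a += tokens[cursor:]

    # Part B: the spans to be filled in, generated token by token.
    part_b = []
    for start, end in spans:
        part_b += [SOP] + tokens[start:end] + [EOP]
    return part_a, part_b

a, b = blank_infilling_example("the quick brown fox jumps over the lazy dog".split())
print(a)  # corrupted input with [MASK] placeholders
print(b)  # targets to be predicted autoregressively
```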
The architecture of GLM-130B incorporates several choices aimed at training stability and inference efficiency at this scale. It uses Rotary Positional Encoding (RoPE) for positional information and a GeGLU feed-forward network, which combines the Gated Linear Unit (GLU) with the Gaussian Error Linear Unit (GeLU) activation. For layer normalization it employs DeepNorm, a Post-Layer Normalization (Post-LN) scheme that has been shown to stabilize the training of very deep transformers.
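The sketch below illustrates the three components named above: a GeGLU feed-forward layer, a rotate-half rotary-position helper, and a DeepNorm residual wrapper of the form LayerNorm(α·x + f(x)). It is a minimal PyTorch illustration under those assumptions, not GLM-130B's implementation; the module names, toy dimensions, and the α value in the usage lines are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeGLU(nn.Module):
    """Feed-forward layer with a GeLU-gated linear unit (GeGLU)."""
    def __init__(self, d_model, d_ffn):
        super().__init__()
        self.proj = nn.Linear(d_model, 2 * d_ffn)  # value and gate in one projection
        self.out = nn.Linear(d_ffn, d_model)

    def forward(self, x):
        value, gate = self.proj(x).chunk(2, dim=-1)
        return self.out(value * F.gelu(gate))

def apply_rope(x):
    """Rotary position embedding (rotate-half variant) for q/k of shape
    (batch, seq, heads, head_dim) with an even head_dim."""
    b, s, h, d = x.shape
    half = d // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, dtype=torch.float32) / half))
    angles = torch.arange(s, dtype=torch.float32)[:, None] * freqs[None, :]  # (seq, half)
    cos = angles.cos()[None, :, None, :]
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

class DeepNormBlock(nn.Module):
    """One sub-layer wrapped in DeepNorm: x <- LayerNorm(alpha * x + f(x))."""
    def __init__(self, d_model, sublayer, alpha):
        super().__init__()
        self.alpha = alpha          # DeepNorm derives alpha from the layer count
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(self.alpha * x + self.sublayer(x))

# Toy dimensions for illustration; GLM-130B itself uses a hidden size of 12,288 and 70 layers.
block = DeepNormBlock(d_model=256, sublayer=GeGLU(256, 1024), alpha=2.0)
x = torch.randn(2, 16, 256)
print(block(x).shape)                                # torch.Size([2, 16, 256])
print(apply_rope(torch.randn(2, 16, 8, 64)).shape)   # rotary-encoded q/k keep their shape
```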
GLM-130B supports fast inference, making it suitable for real-time large-scale language processing tasks. It is designed to run inference on a single A100 (40G × 8) or V100 (32G × 8) server. Further optimizations, such as INT4 quantization, allow efficient inference on more accessible hardware, including a single server with four RTX 3090 (24G) GPUs, with minimal performance degradation. The model was trained on over 400 billion text tokens, split evenly between English and Chinese data.
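As a back-of-the-envelope check on the hardware figures above, the snippet below estimates the weight-only memory of a 130-billion-parameter model at FP16, INT8, and INT4 precision, using the usual bytes-per-parameter for each format. Activations, the KV cache, and framework overhead are ignored, so these totals are lower bounds rather than exact VRAM requirements.

```python
# Approximate weight memory for a 130B-parameter model at different precisions.
PARAMS = 130e9
BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision}: ~{gib:.0f} GiB of weights")

# FP16 -> ~242 GiB, which fits across 8 x A100 40G (320 GiB) or 8 x V100 32G (256 GiB)
# INT8 -> ~121 GiB
# INT4 -> ~61 GiB, which is why 4 x RTX 3090 24G (96 GiB total) can host the model
```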
General Language Models from Z.ai
Rankings apply to local LLMs.
No evaluation benchmarks are available for GLM-130B.