
ChatGLM-6B

Parameters: 6B
Context Length: 2,048 tokens
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 14 Mar 2023
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Multi-Head Attention
Hidden Dimension Size: 4096
Number of Layers: 28
Attention Heads: 32
Key-Value Heads: 32
Activation Function: GELU
Normalization: Layer Normalization
Position Embedding: Absolute Position Embedding
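As a sanity check, the hyperparameters above roughly reproduce the model's quoted 6.2B parameter count. A minimal back-of-the-envelope sketch; the vocabulary size is an assumption (it is not listed in the table), and bias and LayerNorm parameters are ignored:

```python
# Back-of-the-envelope parameter count from the specs above.
hidden = 4096
layers = 28
vocab_size = 130_528  # assumed, not listed in the table above

attention = 4 * hidden**2        # Q, K, V and output projections
mlp = 2 * (4 * hidden) * hidden  # up-projection to 4*hidden and back down
per_layer = attention + mlp      # ~201M parameters per transformer block

embedding = vocab_size * hidden  # token embeddings (counted once)
total = layers * per_layer + embedding
print(f"~{total / 1e9:.2f}B parameters")  # ~6.17B, close to the quoted 6.2B
```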


ChatGLM-6B

ChatGLM-6B is an open-source, bilingual (Chinese and English) dialogue language model developed by Tsinghua University's KEG Lab and Zhipu AI. It is built upon the General Language Model (GLM) architecture. The model's primary objective is to facilitate conversational AI tasks, with a specific optimization for Chinese question answering and dialogue. A key design consideration for ChatGLM-6B was its accessibility for local deployment on consumer-grade hardware, enabling operation with as little as 6GB of GPU memory when utilizing INT4 quantization.
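A minimal local-deployment sketch, following the pattern documented in the THUDM/chatglm-6b repository (the quantize() method is supplied by the model's custom code rather than by the transformers library itself):

```python
from transformers import AutoModel, AutoTokenizer

# trust_remote_code=True pulls in ChatGLM's custom modeling code,
# which provides the quantize() helper used below.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# FP16 -> INT4 weight quantization, so the model fits in roughly 6 GB of VRAM.
model = model.half().quantize(4).cuda().eval()

response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```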

The model employs a Transformer-based architecture, deriving its foundational design from the GLM framework. During its pre-training phase, ChatGLM-6B incorporated a hybrid objective function. The training regimen involved a substantial corpus of approximately 1 trillion tokens of Chinese and English text. Furthermore, the development process integrated techniques such as supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback to align the model's outputs with human preferences. The underlying GLM architecture supports a 2D positional encoding scheme.
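To illustrate the 2D scheme: in GLM pre-training, the input is a corrupted context (Part A) followed by the masked spans to be generated (Part B), and each token receives two position ids. A toy sketch of the idea from the GLM paper, not ChatGLM-6B's actual implementation:

```python
def glm_2d_positions(len_a: int, mask_pos: int, len_b: int):
    """Toy 2D position ids for one masked span, per the GLM paper."""
    # First dimension: position within Part A; every Part B token reuses
    # the position of the [MASK] token it fills in.
    pos_ids = list(range(len_a)) + [mask_pos] * len_b
    # Second dimension: 0 for Part A tokens, 1..len_b inside the span.
    block_ids = [0] * len_a + list(range(1, len_b + 1))
    return pos_ids, block_ids

pos, block = glm_2d_positions(len_a=6, mask_pos=3, len_b=3)
print(pos)    # [0, 1, 2, 3, 4, 5, 3, 3, 3]
print(block)  # [0, 0, 0, 0, 0, 0, 1, 2, 3]
```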

Despite its relatively compact size of 6.2 billion parameters, ChatGLM-6B demonstrates capabilities in generating coherent and contextually relevant responses. Its architecture emphasizes computational efficiency, allowing for deployment and inference on common GPU configurations, which broadens its applicability for researchers and developers. The model is suitable for a range of natural language processing tasks, including but not limited to machine translation, general question answering systems, and the construction of interactive chatbot applications, particularly in bilingual contexts involving Chinese and English.
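Continuing from the loading sketch above, multi-turn dialogue works by passing the returned history back into the next call:

```python
# Multi-turn use: the history returned by chat() carries earlier turns,
# which is what lets the second query refer back to the first.
history = []
for query in ["What is the capital of France?",
              "And what language is spoken there?"]:
    response, history = model.chat(tokenizer, query, history=history)
    print(f">> {query}\n{response}\n")
```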

About ChatGLM

The ChatGLM series of models is developed by Z.ai (Zhipu AI) and based on the GLM architecture.



Evaluation Benchmarks

Rankings apply to local LLMs. No evaluation benchmarks are available for ChatGLM-6B, so the model is currently unranked both overall and for coding.

GPU Requirements

Required VRAM and the recommended GPU vary with the quantization method chosen for the model weights and with the context size (1K to 2K tokens).
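For a rough sense of the numbers: the weights-only footprint scales with bytes per parameter, and the KV cache grows linearly with context length. A back-of-the-envelope sketch (real usage is higher once activations and framework overhead are included, which is why INT4 inference needs about 6 GB rather than 3 GB):

```python
PARAMS = 6.2e9  # parameter count from the spec table

# Weights-only footprint per quantization level.
for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1024**3
    print(f"{name}: ~{gb:.1f} GB of weights")
# FP16: ~11.5 GB, INT8: ~5.8 GB, INT4: ~2.9 GB

# KV cache in FP16: 2 tensors (K and V) * layers * hidden dim * 2 bytes.
kv_bytes_per_token = 2 * 28 * 4096 * 2
print(f"KV cache at 2,048 tokens: ~{kv_bytes_per_token * 2048 / 1024**3:.2f} GB")
# ~0.88 GB on top of the weights
```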