Parameters: 6B
Context Length: 32K (32,768 tokens)
Modality: Text
Architecture: Dense
License: Custom License (ChatGLM2-6B License)
Release Date: 25 Jun 2023
Knowledge Cutoff: -
Attention Structure: Multi-Query Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Rotary Position Embedding (RoPE)
ChatGLM2-6B represents the second iteration of the open-source bilingual (Chinese-English) ChatGLM model series, developed by THUDM. This model is engineered to provide robust conversational capabilities while maintaining a low computational footprint, enabling deployment on consumer-grade hardware. It serves as a foundational component for various natural language processing applications requiring fluent dialogue generation and question-answering in both Chinese and English contexts.
The architectural foundation of ChatGLM2-6B is rooted in the General Language Model (GLM) framework. A notable enhancement in this iteration is the integration of a hybrid objective function during pre-training, coupled with extensive training on 1.4 trillion bilingual tokens and human preference alignment. For efficient inference and long-context handling, the model incorporates FlashAttention, which allows the base model's context length to be extended from 2K to 32K tokens, with an 8K context length used during the dialogue training phase. Furthermore, the adoption of Multi-Query Attention significantly improves inference speed and reduces GPU memory consumption, facilitating longer conversational turns within memory constraints.
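To illustrate why Multi-Query Attention reduces memory use during generation, the sketch below compares KV-cache sizes for standard multi-head attention and a pure multi-query variant (one shared key-value head). The layer count, head count, and head dimension are illustrative assumptions, not published ChatGLM2-6B hyperparameters, and the calculation ignores weights and activations.

```python
# Rough KV-cache size comparison: Multi-Head vs. Multi-Query Attention.
# The model dimensions below are illustrative assumptions, not official
# ChatGLM2-6B hyperparameters.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, dtype_bytes=2):
    """Bytes needed to cache keys and values for one sequence in fp16."""
    # 2 tensors (K and V) per layer, each of shape [seq_len, num_kv_heads, head_dim].
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes

layers, heads, head_dim, ctx = 28, 32, 128, 32768  # assumed values

mha = kv_cache_bytes(layers, heads, head_dim, ctx)  # every head keeps its own K/V
mqa = kv_cache_bytes(layers, 1, head_dim, ctx)      # one K/V head shared by all queries

print(f"MHA KV cache: {mha / 2**30:.1f} GiB")
print(f"MQA KV cache: {mqa / 2**30:.2f} GiB")
```

Under these assumed dimensions the cache shrinks by roughly the number of attention heads, which is why long 32K-token conversations remain feasible on consumer GPUs.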
ChatGLM2-6B is designed for diverse applications, including open-ended conversational agents, intelligent assistants, and systems requiring cross-lingual understanding and generation. Its optimized architecture allows for efficient execution on platforms with limited resources, such as consumer graphics cards, with INT4 quantization enabling deployment with as little as 6GB of GPU memory.
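For reference, loading the model on a consumer GPU typically follows the Hugging Face transformers pattern below, mirroring the usage described in the upstream THUDM repository. Treat this as a hedged sketch: the quantize(4) call and chat() interface come from the custom ChatGLM2-6B model code and may differ between model revisions.

```python
# Minimal sketch: loading ChatGLM2-6B with INT4 quantization via transformers.
# API details follow the upstream THUDM README and may vary across revisions.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# .quantize(4) applies INT4 weight quantization, targeting roughly 6 GB of VRAM.
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = model.quantize(4).half().cuda()
model = model.eval()

# Single-turn chat; `history` carries previous dialogue turns for multi-turn use.
response, history = model.chat(tokenizer, "Hello, please introduce yourself.", history=[])
print(response)
```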
ChatGLM series models from Z.ai, based on the GLM architecture.
No evaluation benchmarks are available for ChatGLM2-6B.