Parameters: 6B
Context Length: 32K (32,768 tokens)
Modality: Text
Architecture: Dense
License: Custom License (ChatGLM2-6B License)
Release Date: 25 Jun 2023
Knowledge Cutoff: -
Attention Structure: Multi-Query Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Rotary Position Embedding (RoPE)
ChatGLM2-6B represents the second iteration of the open-source bilingual (Chinese-English) ChatGLM model series, developed by THUDM. This model is engineered to provide robust conversational capabilities while maintaining a low computational footprint, enabling deployment on consumer-grade hardware. It serves as a foundational component for various natural language processing applications requiring fluent dialogue generation and question-answering in both Chinese and English contexts.
The architectural foundation of ChatGLM2-6B is the General Language Model (GLM) framework. This iteration introduces a hybrid objective function during pre-training, extends training to 1.4 trillion bilingual tokens, and adds alignment with human preferences. For efficient inference and longer contexts, the model incorporates FlashAttention, which allows the maximum context length to grow from 2K to 32K tokens; an 8K context length was used during the dialogue training phase. The model also adopts Multi-Query Attention, which improves inference speed and reduces GPU memory consumption, permitting longer conversational turns within the same memory budget.
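The memory saving from Multi-Query Attention comes from sharing a single key/value head across all query heads, so the KV cache scales with the head dimension rather than with the number of heads. The PyTorch sketch below illustrates the idea with made-up dimensions; it is not the actual ChatGLM2-6B implementation or configuration.

```python
import math
import torch

batch, seq_len, n_heads, head_dim = 1, 128, 32, 128  # illustrative sizes only

# Query tensor: one set of activations per attention head.
q = torch.randn(batch, n_heads, seq_len, head_dim)

# Multi-query attention: a single key/value head is shared by all query heads,
# so the KV cache holds 1 head instead of n_heads (32x smaller in this example).
k = torch.randn(batch, 1, seq_len, head_dim)
v = torch.randn(batch, 1, seq_len, head_dim)

# Broadcasting the shared K/V across the query-head dimension reproduces
# ordinary per-head scaled dot-product attention.
scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)  # (batch, n_heads, seq, seq)
weights = scores.softmax(dim=-1)
out = weights @ v                                        # (batch, n_heads, seq, head_dim)
print(out.shape)  # torch.Size([1, 32, 128, 128])

# KV-cache size per token: standard multi-head attention stores
# 2 * n_heads * head_dim values, MQA stores 2 * head_dim.
```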
ChatGLM2-6B is designed for diverse applications, including open-ended conversational agents, intelligent assistants, and systems requiring cross-lingual understanding and generation. Its optimized architecture allows efficient execution on resource-constrained platforms such as consumer graphics cards, and INT4 quantization enables deployment with as little as 6 GB of GPU memory.
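The THUDM/chatglm2-6b repository documents loading the model through Hugging Face Transformers with `trust_remote_code=True`, which pulls in the custom GLM modeling code (including its `chat()` helper and `quantize()` method). The snippet below is a sketch based on that documented pattern, not a guaranteed, version-stable interface.

```python
from transformers import AutoTokenizer, AutoModel

# trust_remote_code=True loads THUDM's custom modeling code from the model repo.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()

# For GPUs with roughly 6 GB of memory, the repository documents INT4 quantization instead:
#   model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda()

model = model.eval()

# chat() returns the reply plus an updated history list for multi-turn dialogue.
response, history = model.chat(tokenizer, "Hello! What can you do?", history=[])
print(response)
response, history = model.chat(tokenizer, "Summarize that in one sentence.", history=history)
print(response)
```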
The ChatGLM series of models comes from Z.ai and is based on the GLM architecture.
No evaluation benchmarks are available for ChatGLM2-6B.