ChatGLM2-6B

Parameters

6B

Context Length

32K (32,768 tokens)

Modality

Text

Architecture

Dense

License

Custom License (ChatGLM2-6B License)

Release Date

25 Jun 2023

Knowledge Cutoff

-

Technical Specifications

Attention Structure

Multi-Query Attention

Hidden Dimension Size

-

Number of Layers

-

Attention Heads

-

Key-Value Heads

-

Activation Function

-

Normalization

-

Position Embedding

Rotary Position Embedding (RoPE)

System Requirements

VRAM requirements for different quantization methods and context sizes

ChatGLM2-6B

ChatGLM2-6B represents the second iteration of the open-source bilingual (Chinese-English) ChatGLM model series, developed by THUDM. This model is engineered to provide robust conversational capabilities while maintaining a low computational footprint, enabling deployment on consumer-grade hardware. It serves as a foundational component for various natural language processing applications requiring fluent dialogue generation and question-answering in both Chinese and English contexts.

The architectural foundation of ChatGLM2-6B is rooted in the General Language Model (GLM) framework. A notable enhancement in this iteration is the integration of a hybrid objective function during pre-training, coupled with extensive training on 1.4 trillion bilingual tokens and human preference alignment. For efficiency in inference and context handling, the model incorporates FlashAttention technology, which expands its maximum context length from 2K to 32K tokens, with 8K context length used during the dialogue training phase. Furthermore, the adoption of Multi-Query Attention significantly improves inference speed and reduces GPU memory consumption, facilitating longer conversational turns within memory constraints.
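
To make the memory argument concrete, the sketch below compares the key/value cache footprint of standard multi-head attention against multi-query attention at the full 32K context. The layer count, head count, and head dimension are illustrative placeholders only (the official values are not listed in the specification above), so the absolute numbers are indicative rather than exact.

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_value=2):
    """Size of the key/value cache: K and V tensors for every layer, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value

# Illustrative placeholder hyperparameters -- NOT the published ChatGLM2-6B configuration.
n_layers, n_heads, head_dim = 28, 32, 128
seq_len = 32_768  # maximum context length of ChatGLM2-6B

mha = kv_cache_bytes(seq_len, n_layers, n_kv_heads=n_heads, head_dim=head_dim)
mqa = kv_cache_bytes(seq_len, n_layers, n_kv_heads=1, head_dim=head_dim)

print(f"Multi-head KV cache:  {mha / 2**30:5.1f} GiB")  # one K/V head per query head
print(f"Multi-query KV cache: {mqa / 2**30:5.2f} GiB")  # a single shared K/V head
```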

ChatGLM2-6B is designed for diverse applications, including open-ended conversational agents, intelligent assistants, and systems requiring cross-lingual understanding and generation. Its optimized architecture allows efficient execution on platforms with limited resources, such as consumer graphics cards, with INT4 quantization enabling deployment with as little as 6 GB of GPU memory.
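
As a concrete illustration of the low-footprint deployment path, the snippet below follows the usage pattern documented in the THUDM/ChatGLM2-6B repository: the model is loaded through Hugging Face Transformers with trust_remote_code enabled, quantized to INT4 with the repository's built-in quantize() method, and queried through its chat() interface. These method names come from the model's custom remote code and may differ between revisions, so treat this as a sketch rather than a fixed API.

```python
from transformers import AutoModel, AutoTokenizer

# trust_remote_code pulls in THUDM's custom GLM modeling code from the Hub.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# quantize(4) applies the repository's INT4 weight quantization, bringing the
# footprint down to roughly the 6 GB figure cited above.
model = model.quantize(4).cuda().eval()

# chat() is the custom conversational entry point; `history` carries prior turns.
response, history = model.chat(tokenizer, "What can you help me with?", history=[])
print(response)
```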

About ChatGLM

The ChatGLM model series from Z.ai, based on the GLM architecture.


Evaluation Benchmarks

Rankings are relative to other local LLMs.

No evaluation benchmarks for ChatGLM2-6B available.

Rankings

Overall Rank

-

Coding Rank

-

GPU Requirements

Estimated VRAM depends on the chosen weight quantization method and the context size (selectable from 1K to 32K tokens); see the full calculator for per-configuration figures and recommended GPUs.
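
A back-of-the-envelope way to reproduce such an estimate is to add the quantized weight footprint to a per-token key/value-cache cost for the chosen context size. The per-1K-token cache cost below is an assumed figure, since the layer and head configuration is not listed above, and the result is a lower bound: real usage adds activations, framework overhead, and fragmentation (which is why INT4 deployment is quoted at roughly 6 GB rather than the ~3 GiB of weights alone).

```python
PARAM_COUNT = 6.2e9  # approximate parameter count of ChatGLM2-6B

def estimate_vram_gib(bits_per_weight, context_tokens, kv_gib_per_1k=0.014):
    """Rough VRAM lower bound: quantized weights plus key/value cache.

    kv_gib_per_1k is an assumed per-1K-token cache cost; the true value depends
    on the model's layer/head configuration, which is not listed above.
    """
    weights_gib = PARAM_COUNT * bits_per_weight / 8 / 2**30
    kv_gib = context_tokens / 1024 * kv_gib_per_1k
    return weights_gib + kv_gib

for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    for ctx in (1_024, 32_768):
        est = estimate_vram_gib(bits, ctx)
        print(f"{label:5s} @ {ctx:6d} tokens: ~{est:5.1f} GiB (lower bound)")
```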
