
ChatGLM3-6B

Parameters: 6B
Context Length: 8K (8,192 tokens)
Modality: Text
Architecture: Dense
License: -
Release Date: 27 Oct 2023
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding

System Requirements

VRAM requirements vary with the weight quantization method and the context size used at inference time.
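As a rough illustration of why quantization matters, the memory needed for the weights of a 6B-parameter dense model can be estimated from the bytes per parameter alone. This sketch ignores the KV cache and runtime overhead (the layer and head counts are not listed on this page); the bytes-per-parameter figures are standard values, not taken from this page:

```python
def weight_vram_gib(n_params: float, bytes_per_param: float) -> float:
    """Estimate VRAM needed for model weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3

N = 6e9  # ChatGLM3-6B parameter count

# Common quantization levels and their per-parameter storage cost.
for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: {weight_vram_gib(N, bpp):.1f} GiB")
# fp16: ~11.2 GiB, int8: ~5.6 GiB, int4: ~2.8 GiB (weights only)
```

Actual VRAM use is higher in practice, since the KV cache grows linearly with context length and batch size on top of the weight memory.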

ChatGLM3-6B is a further improved ChatGLM model with better performance and support for function calling.
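Function calling in ChatGLM3 works by describing available tools to the model, which can then emit a structured call for the runtime to execute. A minimal sketch of one tool definition, assuming the JSON-Schema-style format used by the official ChatGLM3 demos; the tool name `get_weather` and its fields are illustrative, not taken from this page:

```python
import json

# Hypothetical tool description; the exact field names expected by
# ChatGLM3's tool prompts are an assumption based on common practice.
get_weather_tool = {
    "name": "get_weather",  # illustrative tool name
    "description": "Query the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# Tools are typically passed to the model as a list of such definitions.
tools = [get_weather_tool]
print(json.dumps(tools, indent=2))
```

At inference time the runtime matches the model's emitted call against these definitions, runs the named function, and feeds the result back into the conversation.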

About ChatGLM

The ChatGLM series consists of models from Z.ai based on the GLM architecture.



Evaluation Benchmarks

No evaluation benchmarks are available for ChatGLM3-6B.

Rankings

Overall Rank: -
Coding Rank: -

GPU Requirements

The VRAM required depends on the chosen weight quantization method and the context size (1K, 4K, or 8K tokens).

ChatGLM3-6B: Specifications and GPU VRAM Requirements