Parameters: 6B
Context Length: 8K (8,192 tokens)
Modality: Text
Architecture: Dense
License: -
Release Date: 27 Oct 2023
Knowledge Cutoff: -
Attention Structure: Multi-Query Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Rotary Position Embedding (RoPE)
VRAM requirements vary with the quantization method and the context size, as sketched below.
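As a rough illustration, the following sketch estimates VRAM as weight memory plus KV-cache memory for a given bit width and context length. The ChatGLM3-6B shape values used as defaults (28 layers, 2 key-value groups, head dimension 128) are assumptions taken from the public checkpoint config, not from this page, and the estimate ignores activations and framework overhead.

```python
# Rough VRAM estimate for a quantized dense decoder model.
# Default shapes are assumed for ChatGLM3-6B (from the public
# checkpoint config, not from this page).

def estimate_vram_gib(
    n_params: float = 6.2e9,   # total weight count (~6B)
    bits_per_weight: int = 4,  # e.g. 16 (fp16), 8 (int8), 4 (int4)
    context_len: int = 8192,   # tokens held in the KV cache
    n_layers: int = 28,        # assumed for ChatGLM3-6B
    n_kv_heads: int = 2,       # multi-query attention groups (assumed)
    head_dim: int = 128,       # assumed 4096 hidden / 32 heads
    kv_bytes: int = 2,         # fp16 KV-cache entries
) -> float:
    weights = n_params * bits_per_weight / 8
    # K and V each store context_len * n_kv_heads * head_dim values per layer.
    kv_cache = 2 * n_layers * context_len * n_kv_heads * head_dim * kv_bytes
    return (weights + kv_cache) / 1024**3

if __name__ == "__main__":
    for bits in (16, 8, 4):
        label = "fp16" if bits == 16 else f"int{bits}"
        print(label, f"{estimate_vram_gib(bits_per_weight=bits):.1f} GiB")
```

With these assumed shapes, the KV cache at the full 8K context stays under 0.3 GiB thanks to the two key-value groups, so weight quantization dominates the total.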
ChatGLM3 further improves on earlier ChatGLM releases, with better performance and support for function calling. The ChatGLM series of models comes from Z.ai and is based on the GLM architecture.
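For reference, a minimal inference sketch using the Hugging Face checkpoint follows. The repo id THUDM/chatglm3-6b and the bundled chat() helper come from the model's own repository code (loaded via trust_remote_code=True), not from this page, so treat the exact API as an assumption.

```python
# Minimal chat sketch for ChatGLM3-6B via Hugging Face Transformers.
# Assumes the THUDM/chatglm3-6b checkpoint and its custom chat() helper,
# which is enabled by trust_remote_code=True.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm3-6b", trust_remote_code=True
)
model = (
    AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
    .half()   # fp16 weights
    .cuda()   # assumes a CUDA-capable GPU
    .eval()
)

# The bundled chat() helper returns the reply and the updated history.
response, history = model.chat(tokenizer, "What is GLM?", history=[])
print(response)
```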
Rankings are relative to other local LLMs. No evaluation benchmark results are available for ChatGLM3-6B.
Overall Rank: -
Coding Rank: -