
ChatGLM3-6B-32K

Parameters: 6B
Context Length: 32,768 tokens
Modality: Text
Architecture: Dense
License: ChatGLM3-6B Model License
Release Date: 27 Oct 2023
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -

Position Embedding: Rotary Position Embedding (RoPE)

System Requirements

VRAM requirements vary with the quantization method and the context size; a rough estimator is sketched below.
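As a rough guide, the following sketch estimates VRAM for the model weights plus the attention KV cache at different quantization levels and context lengths. The layer count, key/value head count, and head dimension are assumptions taken from the publicly released ChatGLM3-6B configuration (this card leaves them blank), so treat the output as a back-of-the-envelope figure, not a measurement.

```python
# Back-of-the-envelope VRAM estimate for ChatGLM3-6B-32K.
# The architecture numbers below are ASSUMPTIONS based on the published
# ChatGLM3-6B config, not values from this card; adjust if they differ.

PARAMS = 6.24e9   # total parameters (approx.)
NUM_LAYERS = 28   # assumed transformer layers
KV_HEADS = 2      # assumed key/value heads (multi-query groups)
HEAD_DIM = 128    # assumed per-head dimension

BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def estimate_vram_gib(quant: str, context_len: int) -> float:
    """Weights + KV cache only; ignores activations and framework overhead."""
    weights = PARAMS * BYTES_PER_WEIGHT[quant]
    # K and V caches, assumed to be kept in fp16 (2 bytes) regardless of
    # how the weights are quantized.
    kv_cache = 2 * NUM_LAYERS * KV_HEADS * HEAD_DIM * context_len * 2
    return (weights + kv_cache) / 1024**3

for quant in ("fp16", "int8", "int4"):
    for ctx in (1024, 16384, 32768):
        print(f"{quant:>5} @ {ctx:>5} tokens: ~{estimate_vram_gib(quant, ctx):.1f} GiB")
```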

ChatGLM3-6B-32K

ChatGLM3-6B-32K is an advanced large language model developed jointly by Zhipu AI and Tsinghua University's KEG Lab. This variant builds upon the foundation of ChatGLM3-6B, specifically enhancing its capabilities for processing and understanding long textual contexts. The model is engineered to effectively manage input sequences up to 32,768 tokens in length, a significant extension compared to the 8,000-token context of its predecessor.

ChatGLM3-6B-32K is built on the transformer architecture, the standard paradigm for large language models. The key changes in this variant are an updated position encoding and a training methodology designed specifically for long-text scenarios: the model is trained with the full 32K context length during the conversation stage, which allows it to maintain coherence and accuracy across extended dialogues and documents.
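To make the position-encoding point concrete, here is a minimal, framework-agnostic sketch of rotary position embedding (RoPE), the style of encoding used across the ChatGLM line. The base frequency and the rotate-half layout below are illustrative defaults, not the model's exact configuration.

```python
# Illustrative RoPE sketch; base and pairing layout are assumptions.
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding to x of shape (seq_len, head_dim)."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)      # per-pair frequencies
    angles = np.outer(np.arange(seq_len), freqs)   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.random.randn(8, 64).astype(np.float32)  # 8 positions, head dim 64
q_rot = rope(q)
```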

ChatGLM3-6B-32K is designed for applications requiring deep understanding and generation of human-like text across extensive content. It natively supports various complex functionalities such as tool invocation (Function Call), code execution (Code Interpreter), and Agent tasks. This versatility makes it suitable for diverse use cases including long-form conversations, comprehensive text analysis of articles or documents, and the generation of detailed content based on provided prompts.
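For reference, here is a minimal usage sketch following the Hugging Face pattern published for the ChatGLM3 series. The multi-turn `chat` helper ships with the model's remote code, so `trust_remote_code=True` is required; the prompt is only a placeholder.

```python
# Minimal ChatGLM3-6B-32K inference sketch (Hugging Face Transformers).
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "THUDM/chatglm3-6b-32k"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True).half().cuda()
model = model.eval()

# Single-turn query; `history` carries dialogue state across turns.
response, history = model.chat(tokenizer, "Summarize the key points of this report: ...", history=[])
print(response)
```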

About ChatGLM

The ChatGLM series of models from Z.ai, built on the GLM architecture.



Benchmarks

Rankings apply to local LLMs. No benchmark results are available for ChatGLM3-6B-32K.

Rank: -
Coding Rank: -
