Total Parameters: 1T
Context Length: 512K
Modality: Text
Architecture: Mixture of Experts (MoE)
License: Modified MIT License
Release Date: 5 Feb 2026
Training Data Cutoff: -
Total Expert Parameters: -
Number of Experts: -
Active Experts: -
Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding
VRAM Requirements by Quantization Method and Context Size
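Weight memory can be estimated directly from parameter count and quantization bit-width. The sketch below is a back-of-envelope calculation, not official requirements; note that for an MoE model all expert weights must typically be resident in memory, even though only a fraction are active per token:

```python
def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
    """GB needed to hold the weights alone (no KV cache or activations)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# All ~1T parameters must be loaded for serving, even though only ~32B
# are active per token; figures are illustrative estimates.
for label, bits in [("FP16", 16), ("INT8", 8), ("4-bit", 4)]:
    total = weight_vram_gb(1000, bits)  # 1T total parameters
    print(f"{label}: ~{total:.0f} GB for weights")
```

On top of the weights, the KV cache grows roughly linearly with context length, so a 512K-token window adds a substantial further cost that this estimate does not cover.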
Kimi K2.5 is the latest long-context language model from Moonshot AI, released in early 2026. Built on a 1-trillion-parameter MoE architecture, it supports context windows of up to 512,000 tokens. The model demonstrates strong performance in multimodal understanding and large-scale data synthesis.
Moonshot AI's Kimi K2 is a Mixture-of-Experts model featuring one trillion total parameters, activating 32 billion per token. Designed for agentic intelligence, it utilizes a sparse architecture with 384 experts and the MuonClip optimizer for training stability, supporting a 128K token context window.
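The sparse pattern described above (routing each token to a few experts out of hundreds, so compute scales with active rather than total parameters) can be sketched as follows. The shapes and top-k softmax gating here are illustrative assumptions, not Kimi K2's actual router:

```python
import math

def moe_forward(x, gate_w, experts, top_k=2):
    """Sparse MoE layer for a single token (illustrative sketch).

    x: list[float] token hidden state; gate_w: one weight row per expert;
    experts: list of callables mapping list[float] -> list[float].
    """
    # Router: one score per expert (dot product with the token).
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in gate_w]
    # Select the top-k experts by router score.
    top = sorted(range(len(scores)), key=scores.__getitem__)[-top_k:]
    # Softmax gate weights over the selected experts only.
    exps = [math.exp(scores[i]) for i in top]
    total = sum(exps)
    gates = [e / total for e in exps]
    # Only the chosen expert FFNs run; the rest are skipped entirely.
    outs = [experts[i](x) for i in top]
    return [sum(g * o[j] for g, o in zip(gates, outs)) for j in range(len(x))]
```

With 384 experts and a small top-k, each token touches only a few expert FFNs per layer, which is how a 1T-parameter model can run with ~32B parameters active per token.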
No evaluation benchmarks are available for Kimi K2.5.