| Attribute | Value |
|---|---|
| Total Parameters | 1T |
| Active Parameters | 32B |
| Context Length | 512K |
| Modality | Text, Vision |
| Architecture | Mixture of Experts (MoE) |
| License | Modified MIT License |
| Release Date | 5 Feb 2026 |
| Knowledge Cutoff | Oct 2025 |
| Total Expert Parameters | 968.0B |
| Number of Experts | 384 |
| Active Experts | 8 |
| Attention Structure | Multi-head Latent Attention (MLA) |
| Hidden Dimension Size | 7168 |
| Number of Layers | 61 |
| Attention Heads | 64 |
| Key-Value Heads | — |
| Activation Function | SwiGLU |
| Normalization | RMS Normalization |
| Position Embedding | Absolute Position Embedding |
Kimi K2.5 is a high-capacity Mixture-of-Experts (MoE) large language model from Moonshot AI, designed for complex reasoning and multimodal tasks at scale. The model is built on a 1-trillion-parameter architecture with a sparse activation strategy: only 32 billion parameters are active per forward pass, preserving computational efficiency while retaining deep representational capacity. It distinguishes itself through native multimodal training, in which vision and language components are co-trained from the initial pre-training phase on approximately 15 trillion tokens, enabling unified processing of visual and textual information.
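To make the sparse-activation arithmetic concrete, here is a minimal sketch of top-k expert routing in PyTorch. The expert count (384) and active-expert count (8) come from the specification above; the gating scheme, expert width (`d_ff`), and per-token dispatch loop are illustrative assumptions, not Moonshot AI's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Minimal top-k routed MoE layer: only k of n_experts run per token."""

    def __init__(self, d_model=7168, d_ff=2048, n_experts=384, k=8):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                          # x: (n_tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                 # dispatch each token to its k experts
            for e in idx[:, slot].unique():
                rows = idx[:, slot] == e
                out[rows] += weights[rows, slot, None] * self.experts[int(e)](x[rows])
        return out
```

Only 8 of 384 experts fire per token, which is the mechanism by which a 1T-parameter model incurs roughly 32B parameters of compute per forward pass. With toy sizes, `SparseMoELayer(d_model=64, d_ff=128, n_experts=16, k=2)(torch.randn(5, 64))` exercises the full dispatch path.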
Technically, Kimi K2.5 integrates several architectural innovations, most notably the use of Multi-head Latent Attention (MLA) and a specialized 384-expert MoE structure. The attention mechanism is optimized for high-throughput inference and long-context performance, supporting context windows up to 256,000 tokens. The model also introduces an 'Agent Swarm' paradigm, a self-directed multi-agent orchestration system trained via Parallel Agent Reinforcement Learning (PARL). This allows the model to decompose complex objectives into independent sub-tasks executed by up to 100 parallel sub-agents, significantly reducing serial execution latency in tool-heavy workflows.
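The latency benefit of the Agent Swarm pattern comes from fanning independent sub-tasks out concurrently instead of executing them serially. The sketch below shows that orchestration shape with asyncio; the task decomposition and the `run_subagent` call are hypothetical stand-ins, not Kimi's PARL-trained planner.

```python
import asyncio

async def run_subagent(task: str) -> str:
    """Hypothetical stand-in for one sub-agent's tool-use loop."""
    await asyncio.sleep(1.0)        # pretend this is a slow tool call
    return f"result for {task!r}"

async def agent_swarm(objective: str, subtasks: list[str]) -> str:
    # Fan out: independent sub-tasks run concurrently, so wall-clock time
    # is roughly max(subtask latency) rather than the sum of all latencies.
    results = await asyncio.gather(*(run_subagent(t) for t in subtasks))
    # Fan in: a final aggregation step merges the sub-agent outputs.
    return f"{objective}: " + "; ".join(results)

if __name__ == "__main__":
    subtasks = [f"subtask {i}" for i in range(10)]   # up to 100 in K2.5's framing
    print(asyncio.run(agent_swarm("demo objective", subtasks)))
```

Ten one-second sub-tasks complete in about one second of wall-clock time here, which is the same serial-latency reduction the Agent Swarm paradigm targets in tool-heavy workflows.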
In practical application, Kimi K2.5 serves as a versatile engine for advanced coding, document synthesis, and automated reasoning. It offers four distinct operational modes (Instant, Thinking, Agent, and Agent Swarm), letting users trade response speed against reasoning depth to match the task. Its native visual coding capabilities allow direct translation of UI designs and video workflows into functional code, while its extensive context window supports analysis of large codebases and complex technical documentation. Training stability at the trillion-parameter scale is achieved through the MuonClip optimizer, which mitigates the loss spikes commonly associated with sparse architectures.
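If the four modes are surfaced through an OpenAI-compatible chat endpoint, selecting one might look like the request below. The base URL, model ID, and `mode` field are hypothetical stand-ins for illustration; consult Moonshot AI's API documentation for the actual parameter names.

```python
import requests

# Hypothetical request shape; the endpoint URL, model ID, and "mode" field
# are illustrative assumptions, not documented Moonshot AI API surface.
resp = requests.post(
    "https://api.moonshot.example/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "kimi-k2.5",
        "mode": "thinking",   # one of: instant | thinking | agent | agent-swarm
        "messages": [{"role": "user", "content": "Summarize this codebase."}],
    },
    timeout=60,
)
print(resp.json())
```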
Moonshot AI's earlier Kimi K2 is a Mixture-of-Experts model with one trillion total parameters, activating 32 billion per token. Designed for agentic intelligence, it uses a sparse architecture with 384 experts and the MuonClip optimizer for training stability, and supports a 128K-token context window.
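MuonClip pairs the Muon optimizer with a QK-clip step: when a head's maximum attention logit exceeds a threshold, its query and key projections are rescaled so the logits shrink back under it, which is what suppresses the loss spikes mentioned above. Below is a minimal sketch of that clipping step; the tensor layout, threshold value, and per-head max-logit bookkeeping are assumptions for illustration.

```python
import torch

@torch.no_grad()
def qk_clip(w_q, w_k, max_logit_per_head, tau=100.0):
    """Shrink per-head Q/K projections so max attention logits stay near tau.

    w_q, w_k:            (n_heads, d_head, d_model) projection weights
    max_logit_per_head:  (n_heads,) largest pre-softmax logit seen this step
    """
    # gamma == 1 for well-behaved heads; < 1 only when a head's logits blew up.
    gamma = tau / max_logit_per_head.clamp(min=tau)
    # Logits are bilinear in W_q and W_k, so applying sqrt(gamma) to each
    # projection rescales the logits by exactly gamma.
    scale = gamma.sqrt()[:, None, None]
    w_q.mul_(scale)
    w_k.mul_(scale)
```

For example, `qk_clip(w_q, w_k, torch.tensor([50.0, 240.0]), tau=100.0)` leaves the first head untouched and rescales the second so its maximum logit drops to roughly 100.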
| Benchmark | Score | Rank |
|---|---|---|
| LiveBench Coding | 0.78 | #8 |
| LiveBench Mathematics | 0.85 | #12 |
| WebDev Arena | 1438 | #13 |
| ProLLM StackUnseen | 0.65 | #17 |
| LiveBench Data Analysis | 0.61 | #18 |
| LiveBench Reasoning | 0.76 | #20 |
| LiveBench Agentic Coding | 0.48 | #20 |
Overall Rank: #20
Coding Rank: #19