
Kimi-VL-A3B-Thinking

Total Parameters

16B

Context Length

128K

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

MIT License

Release Date

10 Apr 2025

Knowledge Cutoff

Oct 2024

Technical Specifications

Active Parameters

2.8B

Number of Experts

64

Active Experts

6 (plus 2 shared)

Attention Structure

Multi-head Latent Attention (MLA)

Hidden Dimension Size

2048

Number of Layers

27

Attention Heads

16

Key-Value Heads

16

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

Rotary Position Embedding (RoPE)

Kimi-VL-A3B-Thinking

Kimi-VL-A3B-Thinking is an advanced vision-language model (VLM) developed by Moonshot AI, engineered to combine efficient parameter utilization with high-fidelity multimodal reasoning. Architecturally, it is built upon the Mixture-of-Experts (MoE) framework of the Moonlight LLM series, integrating a native-resolution visual encoder, MoonViT, via an MLP projector. The model is optimized for long-horizon cognitive tasks through supervised fine-tuning and reinforcement learning, allowing it to generate extended chains of thought (CoT) when processing complex visual and textual inputs.
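As a quick orientation, the sketch below shows how such a checkpoint is typically loaded and queried through the Hugging Face transformers library. The model ID moonshotai/Kimi-VL-A3B-Thinking is the published checkpoint name; the image path and prompt are hypothetical placeholders, and the exact preprocessing calls may differ slightly from the official model card.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_path = "moonshotai/Kimi-VL-A3B-Thinking"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",       # picks bf16/fp16 where available
    device_map="auto",
    trust_remote_code=True,   # custom Kimi-VL modeling code lives in the repo
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

image = Image.open("demo.png")  # hypothetical input image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the figure, then reason step by step."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

# The "Thinking" variant emits a long chain of thought before its final answer,
# so a generous max_new_tokens budget is advisable.
output_ids = model.generate(**inputs, max_new_tokens=1024)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```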

The system uses a sparse MoE design comprising 16 billion total parameters, of which only approximately 2.8 billion are activated for any single token. The language decoder follows a configuration similar to the DeepSeek-V3 architecture, featuring Multi-head Latent Attention (MLA) and a gating mechanism that routes each token to a small subset of the 64 routed experts. On the vision side, MoonViT handles diverse input resolutions and aspect ratios without downsampling, preserving the fidelity of visual data for tasks such as optical character recognition (OCR) and college-level academic analysis.
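To make the routing idea concrete, here is a deliberately simplified top-k gating sketch in PyTorch. It is illustrative only: a production DeepSeek-V3-style router also includes shared experts, gating bias terms, and load-balancing objectives that are omitted here, and the top-k value is taken from the specification table above.

```python
import torch
import torch.nn.functional as F

NUM_EXPERTS, TOP_K, HIDDEN = 64, 6, 2048  # per the spec table; simplified router

def route(tokens: torch.Tensor, gate_w: torch.Tensor):
    """Pick TOP_K of NUM_EXPERTS per token and renormalize their gate weights.

    tokens: (n_tokens, HIDDEN); gate_w: (HIDDEN, NUM_EXPERTS)
    """
    logits = tokens @ gate_w                             # (n_tokens, NUM_EXPERTS)
    probs = F.softmax(logits, dim=-1)
    topk_w, topk_idx = probs.topk(TOP_K, dim=-1)
    topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)   # weights over chosen experts
    return topk_w, topk_idx

tokens = torch.randn(4, HIDDEN)
gate_w = torch.randn(HIDDEN, NUM_EXPERTS)
weights, experts = route(tokens, gate_w)
print(experts)  # each token is dispatched to only TOP_K of the 64 experts
```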

Functionally, Kimi-VL-A3B-Thinking supports an expansive context window of 128,000 tokens, facilitating the ingestion of lengthy documents, multi-image sequences, and video content. The "Thinking" variant is tailored for scenarios requiring multi-step mathematical problem-solving, document comprehension, and autonomous agent interactions. By leveraging Flash-Attention 2 and supporting native half-precision formats, the model maintains high throughput and computational efficiency across a broad spectrum of multimodal reasoning applications.
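In practice, these throughput features map onto two standard transformers loading flags, shown in the hedged snippet below (FlashAttention-2 additionally requires the flash-attn package and a supported GPU):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "moonshotai/Kimi-VL-A3B-Thinking",
    torch_dtype=torch.bfloat16,               # native half precision
    attn_implementation="flash_attention_2",  # Flash-Attention 2 kernels
    device_map="auto",
    trust_remote_code=True,
)
```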

About Kimi-VL

Kimi-VL by Moonshot AI is an efficient, open-source Mixture-of-Experts vision-language model. It employs a native-resolution MoonViT encoder and an MoE language model that activates only 2.8 billion parameters per token. The model handles high-resolution visual inputs and processes contexts up to 128K tokens. A "Thinking" variant provides enhanced long-horizon reasoning.



Evaluation Benchmarks

No evaluation benchmarks are currently available for Kimi-VL-A3B-Thinking.


GPU Requirements

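The interactive calculator is not reproduced here, but a back-of-the-envelope estimate is straightforward. The sketch below, under clearly simplified assumptions, sizes the 16B weights at common quantization levels plus a naive full-precision KV cache; it ignores activations, the vision tower, framework overhead, and MLA's compressed KV cache (which is substantially smaller than the naive figure), so treat the output as a rough guide rather than a definitive requirement.

```python
# Rough VRAM estimate for Kimi-VL-A3B-Thinking (16B total parameters).
BYTES_PER_PARAM = {"fp16/bf16": 2.0, "int8": 1.0, "int4": 0.5}

TOTAL_PARAMS = 16e9                # all 16B weights must be resident, not just active ones
LAYERS, KV_HEADS = 27, 16          # per the spec table
HEAD_DIM = 2048 // 16              # assumed: hidden size / attention heads

def estimate_vram_gb(quant: str, context_tokens: int) -> float:
    weights = TOTAL_PARAMS * BYTES_PER_PARAM[quant]
    # Naive cache: 2 tensors (K and V) x layers x kv_heads x head_dim x 2 bytes.
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * context_tokens
    return (weights + kv_cache) / 1024**3

for quant in BYTES_PER_PARAM:
    for ctx in (1_024, 64_000, 128_000):
        print(f"{quant:9s} @ {ctx:>7,} tokens ≈ {estimate_vram_gb(quant, ctx):5.1f} GB")
```

At bf16 precision the weights alone come to roughly 30 GB, so multi-GPU setups or int8/int4 quantization are the practical options for long-context use on a single card.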
