Kimi-VL-A3B-Instruct

Total Parameters

16B

Context Length

128K

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

MIT

Release Date

10 Apr 2025

Knowledge Cutoff

-

Technical Specifications

Active Parameters

3.0B

Number of Experts

384

Active Experts

8

Attention Structure

Multi-head Latent Attention (MLA)

Hidden Dimension Size

-

Number of Layers

-

Attention Heads

-

Key-Value Heads

-

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

Absolute Position Embedding

System Requirements

VRAM requirements for different quantization methods and context sizes

Kimi-VL-A3B-Instruct

The Moonshot AI Kimi-VL-A3B-Instruct model is an efficient, open-source Mixture-of-Experts (MoE) vision-language model built for advanced multimodal reasoning and long-context understanding. This variant is instruction-tuned and optimized for conversational AI and interactive chat: it accepts single images, multiple images, videos, and long documents alongside text, and responds to complex natural-language queries and instructions. It excels at general multimodal perception, optical character recognition (OCR), long-video and long-document understanding, and agent-based interactions, which makes it well suited to document analysis, comprehensive video understanding, and interactive agent systems. Its design pairs efficient processing of high-resolution visual inputs with extensive context understanding, targeting scenarios that demand intricate visual and textual comprehension.
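Because it is instruction-tuned for chat, the model can be driven through the standard Hugging Face transformers multimodal pattern. The following is a minimal sketch, assuming the Hub ID moonshotai/Kimi-VL-A3B-Instruct and a chat-template-aware processor shipped via trust_remote_code; the exact message schema and processor arguments may differ from the model's actual remote code.

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

# Assumed Hub ID; verify against the actual model card.
model_path = "moonshotai/Kimi-VL-A3B-Instruct"

# trust_remote_code is needed because the architecture ships custom code.
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# One image plus a text instruction, in chat-message format.
image = Image.open("document_page.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Summarize the key figures in this document."},
        ],
    }
]

# Render the chat template to a prompt string, then encode text + image together.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens before decoding the reply.
reply = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(reply)
```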

Architecturally, Kimi-VL-A3B-Instruct integrates an MoE language model, a native-resolution visual encoder termed MoonViT, and an MLP projector. The model comprises 16 billion total parameters but activates only about 2.8 billion per token during inference, which accounts for its computational efficiency. The underlying MoE language model, Moonlight, was pre-trained on 5.2 trillion tokens of pure text data using an 8K context length during that phase. The architecture enables flexible, efficient contextual routing of inputs through expert sub-networks, with 8 experts selected from a total of 384 per token in the language decoder. The MoonViT encoder processes images and videos at their native resolution, preserving visual fidelity for detailed analysis.
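To make the routing concrete, here is a generic top-k expert-gating sketch of the kind used in MoE decoders. It is an illustrative simplification with a plain softmax router and assumed module shapes, not Moonshot's implementation; for Kimi-VL the decoder would use k = 8 out of 384 experts.

```python
import torch
import torch.nn.functional as F

def topk_moe_forward(x, router_weight, experts, k=8):
    """Route each token to its top-k experts and mix their outputs.

    x:             (num_tokens, hidden) token activations
    router_weight: (num_experts, hidden) router projection
    experts:       list of per-expert feed-forward modules
    """
    logits = x @ router_weight.T                 # (tokens, num_experts)
    probs = F.softmax(logits, dim=-1)
    topk_probs, topk_idx = probs.topk(k, dim=-1) # (tokens, k)
    topk_probs = topk_probs / topk_probs.sum(-1, keepdim=True)  # renormalize gates

    out = torch.zeros_like(x)
    for slot in range(k):
        for e in range(len(experts)):
            mask = topk_idx[:, slot] == e        # tokens whose slot chose expert e
            if mask.any():
                out[mask] += topk_probs[mask, slot, None] * experts[e](x[mask])
    return out
```

Because only the k selected experts execute for each token, compute scales with k rather than with the total expert count, which is how a 16B-parameter model can run with a roughly 3B-parameter activation footprint.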

MoonViT supports high-resolution visual inputs up to 1792x1792 pixels, a fourfold increase over the initial release, enabling detailed analysis of screenshots and complex graphics. A variable-length sequence attention mechanism, compatible with FlashAttention, maintains efficient training throughput across images of varying resolutions.
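The sketch below illustrates the general idea behind variable-length sequence attention for native-resolution encoders: patches from differently sized images are packed into one flat sequence, and cumulative sequence lengths (the cu_seqlens convention used by FlashAttention's variable-length kernels) mark image boundaries so attention never crosses them. The patch size and image sizes here are assumptions for illustration, not MoonViT's actual configuration.

```python
import torch

PATCH = 14  # assumed ViT patch size, for illustration only

def pack_native_resolution(images):
    """Pack patch sequences from variable-resolution images into one batch.

    images: list of (3, H, W) tensors with H and W multiples of PATCH.
    Returns the flat patch sequence plus cu_seqlens boundaries in the
    format expected by FlashAttention's variable-length kernels.
    """
    seqs, lengths = [], []
    for img in images:
        c, h, w = img.shape
        # (3, H, W) -> (num_patches, 3 * PATCH * PATCH), no resizing or padding
        patches = (
            img.unfold(1, PATCH, PATCH)   # (3, H/P, W, P)
               .unfold(2, PATCH, PATCH)   # (3, H/P, W/P, P, P)
               .permute(1, 2, 0, 3, 4)
               .reshape(-1, c * PATCH * PATCH)
        )
        seqs.append(patches)
        lengths.append(patches.shape[0])

    packed = torch.cat(seqs, dim=0)  # one flat sequence, no padding tokens
    cu_seqlens = torch.tensor([0] + lengths).cumsum(0).to(torch.int32)
    return packed, cu_seqlens

# Example: a small thumbnail and a large screenshot share one batch.
imgs = [torch.randn(3, 224, 224), torch.randn(3, 1792, 1792)]
packed, cu_seqlens = pack_native_resolution(imgs)
print(packed.shape, cu_seqlens)  # 256 + 16384 patches; boundaries [0, 256, 16640]
```

Packing avoids both resizing, which destroys fine detail such as small text, and padding, which wastes compute on dummy tokens.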

About Kimi-VL

Kimi-VL by Moonshot AI is an efficient, open-source Mixture-of-Experts vision-language model. It pairs a native-resolution MoonViT encoder with an MoE language model that activates about 2.8 billion of its 16 billion parameters. The model handles high-resolution visual inputs and processes contexts up to 128K tokens. A "Thinking" variant provides enhanced long-horizon reasoning.


Other Kimi-VL Models

Evaluation Benchmarks

Rankings are for local LLMs.

No evaluation benchmarks are available for Kimi-VL-A3B-Instruct.

Rankings

Overall Rank

-

Coding Rank

-

GPU Requirements

An interactive calculator lets you choose a quantization method for the model weights and a context size (1K to 125K tokens), then reports the VRAM required and recommended GPUs.
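As a rough substitute for the calculator, the back-of-envelope sketch below estimates VRAM from the weight precision and context length. The formulas are generic (weights = parameter count × bytes per parameter; KV cache grows linearly with context), and the layer and head dimensions are placeholders, since the specification table above does not list them; treat the output as an order-of-magnitude estimate only.

```python
# Back-of-envelope VRAM estimate for an MoE model; the architecture
# numbers below are assumptions, not published Kimi-VL values.
TOTAL_PARAMS = 16e9          # all experts reside in memory, even though
                             # only ~2.8B parameters activate per token

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "q4": 0.5}

def estimate_vram_gb(quant="q4", context_tokens=128_000,
                     n_layers=27, n_kv_heads=16, head_dim=128):
    # Weights: every parameter stored once at the quantized precision.
    weights = TOTAL_PARAMS * BYTES_PER_PARAM[quant]
    # KV cache: 2 (K and V) * layers * kv_heads * head_dim * 2 bytes (fp16)
    # per token, scaled by the context length.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * 2 * context_tokens
    overhead = 1.2  # activations, buffers, fragmentation
    return (weights + kv_cache) * overhead / 1024**3

for quant in ("fp16", "int8", "q4"):
    print(f"{quant}: ~{estimate_vram_gb(quant):.0f} GB at 128K context")
```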
