
ERNIE-4.5-VL-28B-A3B-Base

Total Parameters

28B

Context Length

131,072 tokens (128K)

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

30 Jun 2025

Knowledge Cutoff

-

Technical Specifications

Activated Parameters

3.0B

Number of Experts

130

Active Experts

14

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

-

Number of Layers

28

Attention Heads

20

Key-Value Heads

4

Activation Function

-

Normalization

-

Position Embedding

Absolute Position Embedding
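
The head counts above imply a grouping factor of 5: with grouped-query attention, the 20 query heads share 4 key-value heads, so each KV head serves a group of 20 / 4 = 5 query heads and the KV cache shrinks accordingly. A minimal sketch of that sharing follows; the head dimension is an assumption, since the hidden size is not listed above.

```python
import torch

# Spec-sheet values: 20 query heads and 4 KV heads, so each KV head
# is shared by 20 / 4 = 5 query heads (grouped-query attention).
NUM_Q_HEADS = 20
NUM_KV_HEADS = 4
HEAD_DIM = 128  # assumption: the hidden size is not published on this page

def grouped_query_attention(q, k, v):
    """q: (batch, NUM_Q_HEADS, seq, HEAD_DIM); k, v: (batch, NUM_KV_HEADS, seq, HEAD_DIM)."""
    group = NUM_Q_HEADS // NUM_KV_HEADS       # 5 query heads per KV head
    k = k.repeat_interleave(group, dim=1)     # broadcast each KV head to its group
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / HEAD_DIM ** 0.5
    return torch.softmax(scores, dim=-1) @ v

q = torch.randn(1, NUM_Q_HEADS, 16, HEAD_DIM)
k = torch.randn(1, NUM_KV_HEADS, 16, HEAD_DIM)
v = torch.randn(1, NUM_KV_HEADS, 16, HEAD_DIM)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 20, 16, 128])
```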

System Requirements

VRAM requirements for different quantization methods and context sizes
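
As a back-of-envelope check, the usual estimate is weight memory (total parameters times bytes per weight for the chosen quantization) plus a KV cache that grows linearly with context length. Note that although only about 3B parameters are activated per token, all 28B MoE parameters must reside in VRAM. The sketch below uses the 28 layers and 4 KV heads from the table above; the head dimension is an assumption, and this is not the site's exact calculator.

```python
# Rough VRAM estimate: weights + KV cache. A sketch under stated assumptions,
# not the page's exact calculator.
TOTAL_PARAMS = 28e9   # 28B total parameters; all MoE experts stay resident in VRAM
NUM_LAYERS = 28
NUM_KV_HEADS = 4
HEAD_DIM = 128        # assumption: the hidden size is not published on this page
KV_BYTES = 2          # fp16/bf16 KV cache

def vram_gib(bits_per_weight: float, context_tokens: int) -> float:
    weights = TOTAL_PARAMS * bits_per_weight / 8
    # K and V per token: 2 tensors * layers * kv_heads * head_dim * bytes
    kv_cache = 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * KV_BYTES * context_tokens
    return (weights + kv_cache) / 1024**3

for bits, name in [(16, "fp16"), (8, "int8"), (4, "int4")]:
    print(f"{name}: {vram_gib(bits, 1024):.1f} GiB @ 1k ctx, "
          f"{vram_gib(bits, 131072):.1f} GiB @ 128k ctx")
```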

ERNIE-4.5-VL-28B-A3B-Base

The ERNIE-4.5-VL-28B-A3B-Base model is part of Baidu's ERNIE 4.5 model family, engineered for advanced multimodal capabilities. This variant processes and synthesizes information across text, image, and video inputs, enabling robust understanding and generation in cross-modal scenarios. It targets applications that require comprehensive visual comprehension coupled with precise language expression, serving a broad spectrum of AI-driven tasks.
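
A hedged sketch of loading the model with Hugging Face transformers is shown below. The repository id and processor class are assumptions; consult the model card on Hugging Face or AI Studio for the exact names and usage.

```python
# Illustrative loading sketch. MODEL_ID and AutoProcessor are assumptions;
# check the published model card for the real repository id and interface.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "baidu/ERNIE-4.5-VL-28B-A3B-Base-PT"  # assumption: verify on the hub

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # ~56 GB of weights in bf16; see the VRAM notes above
    device_map="auto",
    trust_remote_code=True,      # custom ERNIE 4.5 VL code ships with the checkpoint
)
```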

Architecturally, ERNIE-4.5-VL-28B-A3B-Base employs a heterogeneous Mixture-of-Experts (MoE) design that combines modality-isolated routing with a router orthogonal loss and a multimodal token-balanced loss. These choices enable efficient cross-modal learning: parameters are shared across modalities where useful, while dedicated experts are reserved for individual modalities. The model uses FlashMask dynamic attention masking for optimized information processing and is trained with the PaddlePaddle deep learning framework, supporting efficient inference and deployment across varied hardware platforms.
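
A minimal sketch of the modality-isolated routing idea follows. It is illustrative only, not Baidu's implementation: text and vision tokens are dispatched to disjoint expert pools by separate per-modality routers, and all sizes are toy values rather than the model's actual configuration.

```python
import torch
import torch.nn as nn

class ModalityIsolatedMoE(nn.Module):
    """Toy illustration of modality-isolated routing: text and vision tokens
    are routed to disjoint expert pools by per-modality routers. Sizes are
    illustrative, not ERNIE 4.5's actual configuration."""
    def __init__(self, d_model=64, n_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.experts = nn.ModuleDict({
            m: nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
            for m in ("text", "vision")
        })
        self.routers = nn.ModuleDict(
            {m: nn.Linear(d_model, n_experts) for m in ("text", "vision")}
        )

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        logits = self.routers[modality](x)                  # (tokens, n_experts)
        weights, idx = logits.softmax(-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                         # dispatch to top-k experts
            for e, expert in enumerate(self.experts[modality]):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

moe = ModalityIsolatedMoE()
text_tokens = torch.randn(8, 64)
print(moe(text_tokens, "text").shape)  # torch.Size([8, 64])
```

The router orthogonal loss and multimodal token-balanced loss mentioned above would enter as auxiliary training objectives on the router logits; they are omitted from this sketch.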

The model supports both "thinking" and "non-thinking" modes in its vision-language capabilities. The "thinking" mode is intended to strengthen multi-step reasoning, while the "non-thinking" mode retains strong perception for visual understanding, document processing, and visual-knowledge tasks. This versatility makes ERNIE-4.5-VL-28B-A3B-Base suitable for applications that demand integrated visual and linguistic processing, such as content creation, document analysis, and sophisticated question answering.
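
How the mode switch is exposed depends on the released tooling. The snippet below, continuing from the loading sketch above, is purely illustrative: the `enable_thinking` argument is a hypothetical flag borrowed from similar open chat models, not a documented ERNIE API, and base checkpoints may not ship a chat template at all.

```python
# Hypothetical sketch: enable_thinking is an assumption, not a documented
# ERNIE 4.5 argument; check the model card for the real interface.
messages = [{"role": "user", "content": "Describe the chart in this image."}]
prompt = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,  # hypothetical: request the reasoning ("thinking") mode
)
```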

About ERNIE 4.5

The Baidu ERNIE 4.5 family consists of ten large-scale multimodal models. They utilize a heterogeneous Mixture-of-Experts (MoE) architecture, which enables parameter sharing across modalities while also employing dedicated parameters for specific modalities, supporting efficient language and multimodal processing.



Evaluation Benchmarks

No evaluation benchmarks are available for ERNIE-4.5-VL-28B-A3B-Base; the model is therefore unranked overall and for coding (rankings cover local LLMs only).
