
ERNIE-4.5-VL-424B-A47B-Base

Total Parameters

424B

Context Length

131,072 tokens (128K)

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

30 Jun 2025

Knowledge Cutoff

Jun 2025

Technical Specifications

Active Parameters per Token

47.0B

Number of Experts

128

Active Experts

16

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

-

Number of Layers

54

Attention Heads

64

Key-Value Heads

8

Activation Function

-

Normalization

-

Position Embedding

Rotary Position Embedding (RoPE)


ERNIE-4.5-VL-424B-A47B-Base

ERNIE-4.5-VL-424B-A47B-Base is a large-scale multimodal model developed by Baidu. Part of the broader ERNIE 4.5 family, it is a Mixture-of-Experts (MoE) model engineered for comprehensive multimodal understanding and generation, integrating text and vision capabilities. It is designed for applications that require deep comprehension of textual and visual information, including content analysis, cross-modal reasoning, and multimodal conversation. The model supports both thinking and non-thinking inference modes, providing flexibility across real-world applications.
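The snippet below is a minimal sketch of loading the model for multimodal inference with Hugging Face Transformers. The repository id, the processor interface, and the need for trust_remote_code are assumptions rather than details confirmed on this page; consult the official model card for the exact identifiers and inference code.

```python
# Minimal loading sketch (assumptions: repo id, processor interface).
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "baidu/ERNIE-4.5-VL-424B-A47B-Base-PT"  # assumed repository id

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the repo ships custom multimodal MoE code
    device_map="auto",       # shard the 424B checkpoint across available GPUs
    torch_dtype="auto",
)

image = Image.open("example.jpg")
inputs = processor(text="Describe this image.", images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```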

At its architectural core, ERNIE-4.5-VL-424B-A47B-Base employs a heterogeneous Mixture-of-Experts (MoE) structure, featuring 424 billion total parameters with 47 billion parameters actively engaged per token. The model is built with 54 layers. Its self-attention mechanisms utilize 64 query heads and 8 key-value heads, indicating a Grouped-Query Attention (GQA) structure. A key innovation lies in its multimodal heterogeneous MoE pre-training, where text and visual modalities are jointly processed. This design incorporates modality-isolated routing, router orthogonal loss, and multimodal token-balanced loss to ensure that neither modality compromises the learning of the other, thereby enabling effective representation and mutual reinforcement across different data types. The multimodal stage extends capabilities to images and videos by introducing additional parameters, including a Vision Transformer (ViT) for image feature extraction, an adapter for feature transformation, and visual experts for multimodal understanding.
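To make modality-isolated routing concrete, the toy sketch below masks each token's router logits so that text tokens can only select text experts and vision tokens can only select visual experts; with the mask applied, a top-k over the softmaxed logits never mixes modalities. The expert counts and the even text/vision split here are illustrative assumptions, not the production configuration, which routes over 128 experts with 16 active per token.

```python
# Toy modality-isolated MoE routing, not the official implementation.
import torch

d_model, num_experts, top_k = 32, 8, 2                   # illustrative sizes
text_allowed = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0], dtype=torch.bool)
vision_allowed = ~text_allowed                           # experts 4-7 are visual

router = torch.nn.Linear(d_model, num_experts, bias=False)

def route(tokens: torch.Tensor, is_vision: torch.Tensor):
    """tokens: [n, d_model]; is_vision: [n] bool. Returns top-k weights and ids."""
    logits = router(tokens)                               # [n, num_experts]
    allowed = torch.zeros(tokens.size(0), num_experts, dtype=torch.bool)
    allowed[is_vision] = vision_allowed                   # vision -> visual experts
    allowed[~is_vision] = text_allowed                    # text -> text experts
    logits = logits.masked_fill(~allowed, float("-inf"))
    return torch.topk(torch.softmax(logits, dim=-1), top_k, dim=-1)

tokens = torch.randn(6, d_model)
is_vision = torch.tensor([0, 0, 1, 1, 0, 1], dtype=torch.bool)
weights, expert_ids = route(tokens, is_vision)
print(expert_ids)  # vision rows pick only experts 4-7, text rows only 0-3
```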

For enhanced performance and deployment efficiency, the model is trained using the PaddlePaddle deep learning framework, leveraging a scaling-efficient infrastructure that includes heterogeneous hybrid parallelism, hierarchical load balancing, and FP8 mixed-precision training. Inference is optimized through a multi-expert parallel collaboration method and a convolutional code quantization algorithm, achieving 4-bit/2-bit near-lossless quantization. This allows for deployment even with constrained computational resources, specifically enabling the largest ERNIE 4.5 model to be deployed with four 80GB GPUs for 4-bit quantization or one 141GB GPU for 2-bit quantization. The model supports an extended context length of up to 131,072 tokens, which is beneficial for tasks involving long-form content generation and complex reasoning over extensive documents or protracted conversations.
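A back-of-envelope calculation makes these deployment figures concrete: storing 424 billion parameters at b bits each takes 424e9 × b / 8 bytes for the weights alone. The sketch below evaluates this for the quantization levels discussed above; it deliberately ignores the KV cache, activations, and vision-tower overhead, which is why real deployments need headroom beyond these numbers.

```python
# Weight-only memory for a 424B-parameter checkpoint at various bit widths.
def weight_gib(params_b: float, bits: int) -> float:
    """GiB needed to store `params_b` billion parameters at `bits` bits each."""
    return params_b * 1e9 * bits / 8 / 2**30

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: {weight_gib(424, bits):7.1f} GiB")
# 16-bit ~ 789.7 GiB, 8-bit ~ 394.9 GiB
# 4-bit  ~ 197.4 GiB -> fits in four 80 GB GPUs (320 GB total)
# 2-bit  ~  98.7 GiB -> fits on a single 141 GB GPU
```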

About ERNIE 4.5

The Baidu ERNIE 4.5 family consists of ten large-scale multimodal models. They utilize a heterogeneous Mixture-of-Experts (MoE) architecture, which enables parameter sharing across modalities while also employing dedicated parameters for specific modalities, supporting efficient language and multimodal processing.



Evaluation Benchmarks


No evaluation benchmarks are available for ERNIE-4.5-VL-424B-A47B-Base.

