
ERNIE-4.5-VL-424B-A47B-Base

Total Parameters

424B

Context Length

131,072 tokens

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

30 Jun 2025

Knowledge Cutoff

Jun 2025

Technical Specifications

Active Parameters (per Token)

47.0B

Number of Experts

128

Active Experts

16

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

-

Number of Layers

54

Attention Heads

64

Key-Value Heads

8

Activation Function

-

Normalization

-

Position Embedding

Absolute Position Embedding

System Requirements

VRAM requirements for different quantization methods and context sizes
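Context size drives VRAM mainly through the key-value cache, which Grouped-Query Attention keeps small. The sketch below estimates per-token cache size from the spec-sheet figures (54 layers, 8 key-value heads); the per-head dimension of 128 and fp16 (2-byte) cache entries are assumptions, since the hidden size is not listed above.

```python
# Rough KV-cache estimate. Layers and KV heads come from the spec sheet;
# head_dim and the 2-byte (fp16) element size are ASSUMED for illustration.
layers, kv_heads = 54, 8
head_dim, bytes_per_elem = 128, 2  # assumed

# Factor of 2 covers keys and values.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
cache_gb = kv_bytes_per_token * 131_072 / 1e9  # at the full context length

print(f"{kv_bytes_per_token / 1e6:.2f} MB per token")
print(f"{cache_gb:.1f} GB at full 131,072-token context")
```

With only 8 key-value heads instead of 64, the cache is one eighth the size a standard multi-head layout would need, which is what makes the long context practical.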

ERNIE-4.5-VL-424B-A47B-Base

ERNIE-4.5-VL-424B-A47B-Base is a significant advancement in large-scale multimodal artificial intelligence, developed by Baidu. This variant, part of the broader ERNIE 4.5 family, functions as a Mixture-of-Experts (MoE) model engineered for comprehensive multimodal understanding and generation, integrating both text and vision capabilities. It is designed to support sophisticated applications requiring deep comprehension of textual and visual information, encompassing tasks such as content analysis, cross-modal reasoning, and multimodal conversation. The model also supports both thinking and non-thinking inference modes, providing flexibility for various real-world applications.

At its architectural core, ERNIE-4.5-VL-424B-A47B-Base employs a heterogeneous Mixture-of-Experts (MoE) structure, featuring 424 billion total parameters with 47 billion parameters actively engaged per token. The model is built with 54 layers. Its self-attention mechanisms utilize 64 query heads and 8 key-value heads, indicating a Grouped-Query Attention (GQA) structure. A key innovation lies in its multimodal heterogeneous MoE pre-training, where text and visual modalities are jointly processed. This design incorporates modality-isolated routing, router orthogonal loss, and multimodal token-balanced loss to ensure that neither modality compromises the learning of the other, thereby enabling effective representation and mutual reinforcement across different data types. The multimodal stage extends capabilities to images and videos by introducing additional parameters, including a Vision Transformer (ViT) for image feature extraction, an adapter for feature transformation, and visual experts for multimodal understanding.
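The modality-isolated routing described above can be illustrated with a minimal sketch: each token is routed only among the expert pool of its own modality, so text and vision tokens never compete for the same experts. The 64/64 text-vision split and top-8 selection per token are assumptions for illustration; the spec sheet only lists 128 experts total with 16 active.

```python
import random

N_EXPERTS = 128
TEXT_EXPERTS = list(range(64))         # ASSUMED: first half serves text tokens
VISION_EXPERTS = list(range(64, 128))  # ASSUMED: second half serves vision tokens
TOP_K = 8                              # assumed top-k per modality pool

def route(token_modality, router_logits):
    """Pick the top-k experts, restricted to the token's modality pool."""
    pool = TEXT_EXPERTS if token_modality == "text" else VISION_EXPERTS
    scored = sorted(pool, key=lambda e: router_logits[e], reverse=True)
    return scored[:TOP_K]

random.seed(0)
logits = [random.random() for _ in range(N_EXPERTS)]

chosen = route("vision", logits)
# Isolation holds: a vision token never lands on a text expert,
# even if a text expert scored higher.
assert all(e in VISION_EXPERTS for e in chosen)
```

Real routers score experts with a learned gating network rather than raw logits over a fixed pool, and the router orthogonal loss and token-balanced loss mentioned above shape those scores during training; this sketch only shows the isolation constraint itself.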

For enhanced performance and deployment efficiency, the model is trained using the PaddlePaddle deep learning framework, leveraging a scaling-efficient infrastructure that includes heterogeneous hybrid parallelism, hierarchical load balancing, and FP8 mixed-precision training. Inference is optimized through a multi-expert parallel collaboration method and a convolutional code quantization algorithm, achieving 4-bit/2-bit near-lossless quantization. This allows for deployment even with constrained computational resources, specifically enabling the largest ERNIE 4.5 model to be deployed with four 80GB GPUs for 4-bit quantization or one 141GB GPU for 2-bit quantization. The model supports an extended context length of up to 131,072 tokens, which is beneficial for tasks involving long-form content generation and complex reasoning over extensive documents or protracted conversations.
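The deployment figures quoted above follow from simple weight-memory arithmetic, sketched below. This counts weight storage only and ignores activations, KV cache, and runtime overhead, so it is a lower bound rather than a sizing guide.

```python
# Back-of-the-envelope check of the quantized deployment figures.
def weight_gb(params: float, bits: int) -> float:
    """Memory to hold `params` weights at `bits` bits each, in GB."""
    return params * bits / 8 / 1e9

total_params = 424e9  # 424B total parameters

print(f"4-bit weights: {weight_gb(total_params, 4):.0f} GB")  # ~212 GB, fits 4 x 80 GB
print(f"2-bit weights: {weight_gb(total_params, 2):.0f} GB")  # ~106 GB, fits one 141 GB GPU
```

At 4 bits the weights alone take roughly 212 GB, comfortably inside four 80 GB GPUs (320 GB), and at 2 bits roughly 106 GB fits a single 141 GB GPU, consistent with the configurations stated above.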

About ERNIE 4.5

The Baidu ERNIE 4.5 family consists of ten large-scale multimodal models. They utilize a heterogeneous Mixture-of-Experts (MoE) architecture, which enables parameter sharing across modalities while also employing dedicated parameters for specific modalities, supporting efficient language and multimodal processing.



Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for ERNIE-4.5-VL-424B-A47B-Base.

Rankings

Rank

-

Coding Rank

-
