
ERNIE-4.5-VL-28B-A3B-Base

Parameters

28B

Context Length

131,072 tokens (128K)

Modality

Multimodal

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

30 Jun 2025

Knowledge Cutoff

-

Technical Specifications

Active Parameters

3.0B

Number of Experts

130

Active Experts

14

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

-

Number of Layers

28

Attention Heads

20

Key-Value Heads

4

Activation Function

-

Normalization

-

Position Embedding

Absolute Position Embedding
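The attention figures above (20 query heads sharing 4 key-value heads under Grouped-Query Attention) can be sketched as follows. The head dimension is an assumption for illustration only, since the card does not list the hidden size.

```python
import numpy as np

# Grouped-Query Attention sketch: 20 query heads share 4 KV heads,
# i.e. 5 query heads per KV group. head_dim=64 is an assumed value.
n_q_heads, n_kv_heads, head_dim = 20, 4, 64
group = n_q_heads // n_kv_heads  # 5 query heads per KV head

seq = 8
rng = np.random.default_rng(0)
q = rng.standard_normal((n_q_heads, seq, head_dim))
k = rng.standard_normal((n_kv_heads, seq, head_dim))
v = rng.standard_normal((n_kv_heads, seq, head_dim))

# Broadcast each KV head to its group of query heads.
k_rep = np.repeat(k, group, axis=0)  # (20, seq, head_dim)
v_rep = np.repeat(v, group, axis=0)

scores = q @ k_rep.transpose(0, 2, 1) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)  # softmax over keys
out = weights @ v_rep  # (20, seq, head_dim)
```

Because only 4 KV heads are cached instead of 20, the KV cache shrinks by a factor of 5 relative to full multi-head attention at the same head dimension.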

System Requirements

VRAM requirements for different quantization methods and context sizes
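As a rough illustration of how those requirements scale, the sketch below estimates VRAM from weight precision and KV-cache size using the card's figures (28B parameters, 28 layers, 4 KV heads, 131,072-token context). The head dimension and overhead factor are assumptions, and this is a back-of-the-envelope model, not the site's calculator.

```python
def vram_gb(params_b=28, bits=16, layers=28, kv_heads=4, head_dim=128,
            ctx=131_072, kv_bits=16, overhead=1.2):
    """Rough VRAM estimate in GiB: weights plus KV cache, times an overhead factor.

    head_dim and overhead are assumed values, not from the model card.
    """
    weights = params_b * 1e9 * bits / 8                         # bytes for weights
    kv = 2 * layers * kv_heads * head_dim * ctx * kv_bits / 8   # K and V caches
    return (weights + kv) * overhead / 2**30

# FP16 weights at full context vs. 4-bit quantized weights
full = vram_gb(bits=16)
quant = vram_gb(bits=4)
```

Under these assumptions FP16 weights alone need roughly 52 GiB, so quantization dominates the savings; the KV cache term grows linearly with context length.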

ERNIE-4.5-VL-28B-A3B-Base

The ERNIE-4.5-VL-28B-A3B-Base model is part of Baidu's ERNIE 4.5 model family, engineered for advanced multimodal capability. It is designed to process and synthesize information across text, images, audio, and video, enabling robust understanding and generation in cross-modal scenarios. Typical applications pair comprehensive visual comprehension with precise language expression across a broad range of AI-driven tasks.

Architecturally, ERNIE-4.5-VL-28B-A3B-Base employs a heterogeneous Mixture-of-Experts (MoE) design. The architecture incorporates modality-isolated routing, a router orthogonality loss, and a multimodal token-balanced loss. Together these mechanisms enable efficient cross-modal learning: parameters are shared across modalities where useful, while dedicated parameters are reserved for individual modalities. The model also uses "FlashMask" dynamic attention masking and is trained with the PaddlePaddle deep learning framework, which supports efficient inference and deployment across a range of hardware platforms.
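The core idea of modality-isolated routing can be sketched as follows: a token is routed only within its own modality's expert pool, so text and vision tokens never compete for the same experts. The expert counts and top-k here are illustrative assumptions, not the model's actual configuration.

```python
import numpy as np

# Illustrative pool sizes and top-k; not the model's real configuration.
N_TEXT, N_VISION, TOP_K = 8, 8, 2

def route(token_logits, modality):
    """Return the top-k expert indices inside the token's modality pool."""
    if modality == "text":
        pool = np.arange(0, N_TEXT)            # experts 0..7 (text-only)
    else:
        pool = np.arange(N_TEXT, N_TEXT + N_VISION)  # experts 8..15 (vision-only)
    pool_logits = token_logits[pool]
    return pool[np.argsort(pool_logits)[-TOP_K:]]  # highest-scoring experts

rng = np.random.default_rng(1)
logits = rng.standard_normal(N_TEXT + N_VISION)  # one router score per expert
text_experts = route(logits, "text")
vision_experts = route(logits, "vision")
```

In a full implementation the router would additionally carry the orthogonality and token-balanced losses mentioned above, which regularize the routing weights rather than change this selection step.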

The model's performance characteristics include support for both "thinking" and "non-thinking" modes within its vision-language capabilities. The "thinking" mode is intended to enhance reasoning abilities, while the "non-thinking" mode maintains strong perceptual capabilities for visual understanding, document processing, and visual knowledge tasks. This multimodal versatility makes the ERNIE-4.5-VL-28B-A3B-Base suitable for a range of applications demanding integrated visual and linguistic processing, such as content creation, document analysis, and sophisticated question-answering systems.

About ERNIE 4.5

The Baidu ERNIE 4.5 family consists of ten large-scale multimodal models. They utilize a heterogeneous Mixture-of-Experts (MoE) architecture, which enables parameter sharing across modalities while also employing dedicated parameters for specific modalities, supporting efficient language and multimodal processing.


Other ERNIE 4.5 Models

Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for ERNIE-4.5-VL-28B-A3B-Base.

Rankings

Overall Ranking

-

Coding Ranking

-
