
ERNIE-4.5-300B-A47B

Total Parameters

300B

Context Length

131,072 tokens

Modality

Text

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

30 Jun 2025

Knowledge Cutoff

-

Technical Specifications

Active Parameters

47.0B

Number of Experts

64

Active Experts

8

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

-

Number of Layers

54

Attention Heads

64

Key-Value Heads

8

Activation Function

-

Normalization

-

Position Embedding

Absolute Position Embedding


ERNIE-4.5-300B-A47B

ERNIE-4.5-300B-A47B is a foundational language model within Baidu's ERNIE 4.5 family, designed to support advanced natural language processing tasks. While the broader ERNIE 4.5 series encompasses multimodal capabilities, this specific variant focuses on text-only applications, optimizing its architecture for efficient and robust language understanding and generation. Its primary purpose is to serve as a high-performance solution for general-purpose textual analysis and creation, including complex reasoning and knowledge-intensive tasks. The model supports text generation in both English and Chinese.

The model's technical foundation is a Mixture-of-Experts (MoE) architecture, featuring a total of 300 billion parameters with 47 billion parameters actively engaged per token during inference. The overarching ERNIE 4.5 MoE design includes a novel heterogeneous structure that facilitates parameter sharing while also allowing for dedicated parameters across different modalities, optimizing for multimodal understanding without compromising text-related performance. Key architectural enhancements include concepts like Dynamic Attention Masking (FlashMask), which contributes to efficient information processing, and modality-isolated routing. The model is trained using Baidu's PaddlePaddle deep learning framework, employing advanced techniques such as intra-node expert parallelism, memory-efficient pipeline scheduling, and FP8 mixed-precision training to achieve high throughput during pre-training.
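The per-token expert selection described above can be sketched with a toy top-k router. This is a hypothetical illustration using the spec-sheet numbers (64 experts, 8 active); the hidden size, gating matrix, and softmax-over-selected-experts scheme here are assumptions for demonstration, not ERNIE 4.5's actual router, which also involves load balancing and modality-isolated routing.

```python
import numpy as np

def moe_route(token_hidden, gate_w, num_active=8):
    """Toy top-k MoE router: score all experts, keep the best num_active,
    and renormalize their gate scores so the weights sum to 1."""
    logits = token_hidden @ gate_w                 # one score per expert
    chosen = np.argsort(logits)[-num_active:]      # indices of top experts
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                       # softmax over the chosen experts
    return chosen, weights

rng = np.random.default_rng(0)
hidden = rng.standard_normal(1024)        # assumed hidden size, for illustration only
gate = rng.standard_normal((1024, 64))    # 64 experts, as in the spec above
experts, weights = moe_route(hidden, gate)
print(len(experts), float(weights.sum()))
```

Only the 8 selected experts' feed-forward weights are used for that token, which is why a 300B-parameter model can run with roughly 47B parameters active per token.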

For deployment and operational efficiency, ERNIE-4.5-300B-A47B supports highly efficient inference through methods like multi-expert parallel collaboration and convolutional code quantization, enabling near-lossless 4-bit and 2-bit quantization for diverse hardware configurations. It maintains a substantial context length of 131,072 tokens, allowing for the processing of extensive textual inputs and enabling coherent, long-form content generation. The model is also designed to be fine-tuned and deployed with developer toolkits like ERNIEKit and FastDeploy, making it accessible for a range of commercial and research applications under the Apache 2.0 license.
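A rough back-of-envelope estimate shows why the 4-bit and 2-bit quantization mentioned above matters for a 300B-parameter model. This sketch only counts weight storage with an assumed ~10% overhead; KV cache (which grows with the 131,072-token context) and activations add more on top, and it is not the site's VRAM calculator.

```python
def weight_vram_gib(total_params_b, bits_per_weight, overhead=1.1):
    """Approximate GiB of VRAM needed just to hold the model weights.

    total_params_b: parameter count in billions.
    bits_per_weight: storage precision (16 = BF16/FP16, 4/2 = quantized).
    overhead: assumed ~10% slack for buffers and fragmentation.
    """
    bytes_total = total_params_b * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1024**3

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit weights: ~{weight_vram_gib(300, bits):.0f} GiB")
```

At 16-bit precision the weights alone exceed a single accelerator's memory by a wide margin, while 4-bit quantization brings them into multi-GPU-server range.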

About ERNIE 4.5

The Baidu ERNIE 4.5 family consists of ten large-scale multimodal models. They utilize a heterogeneous Mixture-of-Experts (MoE) architecture, which enables parameter sharing across modalities while also employing dedicated parameters for specific modalities, supporting efficient language and multimodal processing.


Other ERNIE 4.5 Models

Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for ERNIE-4.5-300B-A47B.

