
ERNIE-4.5-300B-A47B

Total Parameters

300B

Context Length

131,072 tokens

Modality

Text

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

30 Jun 2025

Knowledge Cutoff

Mar 2025

Technical Specifications

Active Parameters per Token

47.0B

Number of Experts

64

Active Experts

8

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

-

Number of Layers

54

Attention Heads

64

Key-Value Heads

8

Activation Function

-

Normalization

-

Position Embedding

Rotary Position Embedding (RoPE)

ERNIE-4.5-300B-A47B

ERNIE-4.5-300B-A47B is a large-scale Mixture-of-Experts (MoE) foundation model developed by Baidu as a core component of the ERNIE 4.5 family. While the broader series encompasses multimodal capabilities, this specific variant is a text-focused model optimized for advanced natural language understanding, complex reasoning, and high-performance text generation in both English and Chinese. It serves as a high-capacity solution for knowledge-intensive tasks, balancing the expansive knowledge base of a 300-billion parameter system with the computational efficiency of sparse activation.

The technical architecture employs a novel heterogeneous MoE structure that facilitates parameter sharing while utilizing modality-isolated routing to prevent cross-modal interference during pre-training. It features 54 Transformer layers and 64 total experts, with 8 active experts per token, resulting in 47 billion active parameters during inference. The model utilizes Grouped Query Attention (GQA) with 64 query heads and 8 key-value heads to optimize memory bandwidth and throughput. Training was conducted using the PaddlePaddle deep learning framework, incorporating intra-node expert parallelism, memory-efficient pipeline scheduling, and FP8 mixed-precision training to achieve high hardware utilization.
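The sparse routing described above can be sketched in a few lines. The following is a minimal, generic top-k MoE feed-forward layer: the hidden and FFN sizes are placeholder values and the softmax gate is a common simplification, so only the 64-expert, 8-active-per-token pattern mirrors the published configuration; this is not Baidu's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    # Generic sparse MoE feed-forward block: route each token to the
    # top_k highest-scoring experts out of num_experts, then combine
    # their outputs with renormalized gate weights.
    def __init__(self, hidden_size=1024, ffn_size=4096, num_experts=64, top_k=8):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, ffn_size),
                nn.GELU(),
                nn.Linear(ffn_size, hidden_size),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x):                                # x: (num_tokens, hidden_size)
        logits = self.router(x)                          # (tokens, num_experts) router scores
        weights, idx = logits.topk(self.top_k, dim=-1)   # keep the 8 best experts per token
        weights = F.softmax(weights, dim=-1)             # renormalize the kept gate weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                   # dispatch tokens expert by expert
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                expert_out = self.experts[int(e)](x[mask])
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert_out
        return out

layer = TopKMoELayer()
y = layer(torch.randn(4, 1024))                          # 4 tokens through the sparse FFN

Because only 8 of the 64 expert FFNs run for any given token, per-token compute tracks the roughly 47 billion active parameters rather than the full 300 billion.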

Operational efficiency is enhanced through support for near-lossless 4-bit and 2-bit quantization, enabling deployment on a variety of hardware configurations including single-card and multi-GPU setups. The model maintains a substantial context window of 131,072 tokens, allowing for the processing of long-form documents and maintaining coherence across extended dialogues. For post-training, the model undergoes Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Unified Preference Optimization (UPO) to align outputs with user instructions and ensure robust performance in production environments.
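As one concrete illustration of low-bit deployment, the sketch below loads the model with generic 4-bit weight quantization via Hugging Face Transformers and bitsandbytes. The repository name is an assumption, and Baidu's documented tooling is PaddlePaddle-based (e.g., FastDeploy), so treat this as one possible route rather than the official workflow.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "baidu/ERNIE-4.5-300B-A47B-PT"   # assumed Hugging Face repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",                                           # shard across available GPUs
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),   # generic 4-bit weight loading
)

prompt = "Summarize the advantages of sparse Mixture-of-Experts models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))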

About ERNIE 4.5

The Baidu ERNIE 4.5 family consists of ten large-scale multimodal models. They share a heterogeneous Mixture-of-Experts (MoE) architecture that enables parameter sharing across modalities while reserving dedicated parameters for each modality, supporting efficient text and multimodal processing.



Evaluation Benchmarks

No evaluation benchmarks are currently available for ERNIE-4.5-300B-A47B.

Rankings

Overall Rank

-

Coding Rank

-

GPU Requirements
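A rough sense of the memory footprint follows directly from the parameter count and quantization width. The sketch below counts weight storage only, assuming all 300B parameters stay resident (only about 47B are active per token, but every expert's weights must still be held in memory); KV cache, activations, and framework overhead are excluded, so real deployments need additional headroom.

TOTAL_PARAMS = 300e9   # full parameter count; activation sparsity does not shrink weight storage

def weight_memory_gb(bits_per_param: float) -> float:
    # Memory needed just to hold the weights, in gigabytes.
    return TOTAL_PARAMS * bits_per_param / 8 / 1e9

for label, bits in [("BF16", 16), ("INT8", 8), ("INT4", 4), ("INT2", 2)]:
    print(f"{label:>5}: ~{weight_memory_gb(bits):,.0f} GB of weights")

At the 4-bit and 2-bit settings mentioned above, the weights alone come to roughly 150 GB and 75 GB respectively, which is what makes multi-GPU or single high-memory-card deployment feasible.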
