ERNIE-4.5-0.3B-Base

Parameters

360M

Context Length

131,072 tokens (128K)

Modality

Text

Architecture

Dense

License

Apache License 2.0

Release Date

30 Jun 2025

Knowledge Cutoff

-

Technical Specifications

Attention Structure

Grouped-Query Attention (GQA)

Hidden Dimension Size

1024

Number of Layers

18

Attention Heads

16

Key-Value Heads

2

Activation Function

Swish

Normalization

RMS Normalization

Position Embedding

Absolute Position Embedding

ERNIE-4.5-0.3B-Base

The ERNIE-4.5-0.3B-Base model is part of Baidu's ERNIE 4.5 family of foundation models and is designed for general-purpose text understanding and generation. This compact variant has 360 million parameters and a dense architecture, making it suitable for deployment in environments with limited computational resources or for applications that need a lightweight inference footprint. Released as open source under the Apache License 2.0, it provides a foundational language model that developers and researchers can build upon and integrate into text-centric systems.

Architecturally, ERNIE-4.5-0.3B-Base is a transformer with 18 layers; unlike several other variants in the ERNIE 4.5 series, it uses a dense design rather than a Mixture-of-Experts (MoE) structure. It has 16 attention heads for queries and 2 key-value heads, a Grouped-Query Attention (GQA) arrangement that shrinks the key-value cache and speeds up inference. The model supports a context length of up to 131,072 tokens, allowing it to process and generate coherent text over extended sequences. The hidden dimension is 1024, normalization is RMSNorm, the activation function is Swish (SiLU), and positions are encoded with an absolute position embedding.
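As a rough illustration of what the 16-query / 2-key-value head split saves at long context, the sketch below estimates the FP16 key-value cache at the full 131,072-token window and compares it with a hypothetical full multi-head cache. The head dimension of 64 is inferred here from hidden size 1024 divided by 16 query heads; verify it against the model's config.json before relying on the numbers.

```python
# KV-cache size estimate for ERNIE-4.5-0.3B-Base at full context.
# Assumes FP16 (2-byte) cache entries and head_dim = 1024 / 16 = 64,
# both inferred from the published specs above.

num_layers = 18
num_query_heads = 16
num_kv_heads = 2                      # grouped-query attention: only 2 K/V heads
head_dim = 1024 // num_query_heads    # 64 (assumed)
bytes_per_value = 2                   # FP16
context_len = 131_072

# Factor of 2 covers keys and values.
gqa_cache = 2 * num_layers * num_kv_heads * head_dim * bytes_per_value * context_len
mha_cache = gqa_cache * (num_query_heads // num_kv_heads)  # hypothetical 16 K/V heads

print(f"GQA KV cache at 128K context: {gqa_cache / 1024**3:.2f} GiB")  # ≈ 1.1 GiB
print(f"Equivalent MHA KV cache:      {mha_cache / 1024**3:.2f} GiB")  # = 9.0 GiB
```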

This model is primarily optimized for text completion and can be fine-tuned for specialized applications through various methods, including Supervised Fine-tuning (SFT), Low-Rank Adaptation (LoRA), and Direct Preference Optimization (DPO). Its compatibility with widely adopted frameworks such as Hugging Face Transformers and Baidu's FastDeploy toolkit facilitates its integration into existing development workflows. The model is designed to support both English and Chinese languages.
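As a concrete starting point, the hedged sketch below loads the model for plain text completion with Hugging Face Transformers. The Hub repository id (baidu/ERNIE-4.5-0.3B-Base-PT) and the trust_remote_code flag are assumptions; check the actual model card for the correct identifier and loading instructions for your transformers version.

```python
# Minimal text-completion sketch (assumed repo id and loading flags).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baidu/ERNIE-4.5-0.3B-Base-PT"  # assumption: verify on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,   # may be unnecessary on recent transformers versions
)

# A base model does plain completion rather than chat: give it a prefix to continue.
prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```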

About ERNIE 4.5

The Baidu ERNIE 4.5 family consists of ten large-scale multimodal models. Most of the family uses a heterogeneous Mixture-of-Experts (MoE) architecture, which enables parameter sharing across modalities while reserving dedicated parameters for specific modalities, supporting efficient language and multimodal processing; the 0.3B variants, including this model, use a dense design instead.


Evaluation Benchmarks

No evaluation benchmarks are available for ERNIE-4.5-0.3B-Base, so it currently has no overall or coding rank among local LLMs.

GPU Requirements

Exact VRAM requirements depend on the quantization method chosen for the model weights and on the context size, which can range from 1K up to the full 128K tokens.
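Since no measured figures are listed here, the sketch below gives a back-of-the-envelope estimate: quantized weight memory plus an FP16 KV cache, using the specifications above (about 360M parameters, 18 layers, 2 KV heads, an assumed head dimension of 64). Real usage will be somewhat higher once activations, the CUDA context, and framework overhead are counted.

```python
# Rough VRAM estimate: quantized weights + FP16 KV cache.
# Figures are approximations derived from the published specs, not measurements.

PARAMS = 360e6                                   # ~0.36B parameters
NUM_LAYERS, NUM_KV_HEADS, HEAD_DIM = 18, 2, 64   # head_dim assumed (1024 / 16)

def weight_bytes(bits_per_param: float) -> float:
    """Memory for the model weights at a given quantization width."""
    return PARAMS * bits_per_param / 8

def kv_cache_bytes(context_len: int, bytes_per_value: int = 2) -> float:
    """FP16 keys + values across all layers and KV heads."""
    return 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * bytes_per_value * context_len

for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    for ctx in (1_024, 65_536, 131_072):
        total_gib = (weight_bytes(bits) + kv_cache_bytes(ctx)) / 1024**3
        print(f"{name:>4} weights, {ctx:>7,}-token context: ~{total_gib:.2f} GiB")
```

Even at full precision and the maximum context, this estimate stays under roughly 2 GiB, which is why the model is practical on modest consumer GPUs.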
