Parameters: 300M
Context Length: 131,072 tokens (128K)
Modality: Text
Architecture: Dense
License: Apache License 2.0
Release Date: 30 Jun 2025
Knowledge Cutoff: -
Attention Structure: Grouped-Query Attention (GQA)
Hidden Dimension Size: 1024
Number of Layers: 18
Attention Heads: 16
Key-Value Heads: 2
Activation Function: Swish (SiLU)
Normalization: RMS Normalization
Position Embedding: Absolute Position Embedding
ERNIE-4.5-0.3B-Base is part of Baidu's ERNIE 4.5 family of foundation models and is designed for general-purpose text understanding and generation. With roughly 360 million parameters and a dense architecture, it is compact enough for deployment in resource-constrained environments and for applications that need a lightweight inference footprint. Released under the Apache License 2.0, it provides an open foundation that developers and researchers can build on and integrate into text-centric systems.
Architecturally, ERNIE-4.5-0.3B-Base is an 18-layer transformer. It uses 16 attention heads for queries and 2 key-value heads, i.e. a Grouped-Query Attention (GQA) mechanism for efficient processing, and supports a context length of up to 131,072 tokens, enabling it to process and generate coherent text over extended sequences. Unlike several other variants in the ERNIE 4.5 series, this model employs a dense architecture rather than a Mixture-of-Experts (MoE) structure. The hidden dimension size is 1024, and the model uses RMS Normalization, the Swish (SiLU) activation function, and absolute position embeddings.
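For illustration, the block below sketches how these numbers fit together in PyTorch: a hidden size of 1024 split into 16 query heads of dimension 64, only 2 key-value heads shared across them (GQA), RMS normalization, and a SiLU (Swish) feed-forward. This is not Baidu's implementation; the feed-forward width of 3072 and the overall block layout are assumptions made for the sketch.

```python
# Illustrative decoder block using the listed dimensions.
# Not Baidu's implementation; the feed-forward width (3072) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN = 1024                    # hidden dimension size
N_Q_HEADS = 16                   # attention (query) heads
N_KV_HEADS = 2                   # key-value heads -> grouped-query attention
HEAD_DIM = HIDDEN // N_Q_HEADS   # 64

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class GQAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.q_proj = nn.Linear(HIDDEN, N_Q_HEADS * HEAD_DIM, bias=False)
        self.k_proj = nn.Linear(HIDDEN, N_KV_HEADS * HEAD_DIM, bias=False)
        self.v_proj = nn.Linear(HIDDEN, N_KV_HEADS * HEAD_DIM, bias=False)
        self.o_proj = nn.Linear(N_Q_HEADS * HEAD_DIM, HIDDEN, bias=False)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, N_Q_HEADS, HEAD_DIM).transpose(1, 2)
        k = self.k_proj(x).view(b, t, N_KV_HEADS, HEAD_DIM).transpose(1, 2)
        v = self.v_proj(x).view(b, t, N_KV_HEADS, HEAD_DIM).transpose(1, 2)
        # Each of the 2 key-value heads is shared by 16 / 2 = 8 query heads.
        k = k.repeat_interleave(N_Q_HEADS // N_KV_HEADS, dim=1)
        v = v.repeat_interleave(N_Q_HEADS // N_KV_HEADS, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, N_Q_HEADS * HEAD_DIM))

class Block(nn.Module):
    def __init__(self, mlp_dim=3072):   # mlp_dim is illustrative, not documented
        super().__init__()
        self.attn_norm = RMSNorm(HIDDEN)
        self.mlp_norm = RMSNorm(HIDDEN)
        self.attn = GQAttention()
        self.up = nn.Linear(HIDDEN, mlp_dim, bias=False)
        self.down = nn.Linear(mlp_dim, HIDDEN, bias=False)

    def forward(self, x):
        x = x + self.attn(self.attn_norm(x))
        return x + self.down(F.silu(self.up(self.mlp_norm(x))))   # SiLU (Swish)

x = torch.randn(1, 8, HIDDEN)
print(Block()(x).shape)   # torch.Size([1, 8, 1024])
```

Because only 2 key-value heads per layer need to be cached instead of 16, the per-token KV cache is roughly 8x smaller than with full multi-head attention, which matters at the 131,072-token maximum context.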
This model is primarily optimized for text completion and can be fine-tuned for specialized applications through various methods, including Supervised Fine-tuning (SFT), Low-Rank Adaptation (LoRA), and Direct Preference Optimization (DPO). Its compatibility with widely adopted frameworks such as Hugging Face Transformers and Baidu's FastDeploy toolkit facilitates its integration into existing development workflows. The model is designed to support both English and Chinese languages.
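A minimal text-completion sketch with Hugging Face Transformers might look like the following; the repository ID is an assumption and should be verified on the Hugging Face Hub.

```python
# Minimal text-completion sketch with Hugging Face Transformers.
# The repository ID below is an assumption; check the actual name on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baidu/ERNIE-4.5-0.3B-Base-PT"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",        # small enough to fit comfortably in fp16/bf16
    trust_remote_code=True,
)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For parameter-efficient fine-tuning, the loaded model would typically be wrapped with a LoRA adapter (for example via the peft library's LoraConfig and get_peft_model) before running an SFT or DPO training loop.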
The broader Baidu ERNIE 4.5 family comprises ten large-scale model variants. The larger variants employ a heterogeneous Mixture-of-Experts (MoE) architecture that shares parameters across modalities while reserving dedicated parameters for individual modalities, supporting efficient language and multimodal processing; the 0.3B variants, including this one, are dense text models.
Rankings are relative to other local LLMs. No evaluation benchmarks are available for ERNIE-4.5-0.3B-Base.
Overall Rank: -
Coding Rank: -
Full Calculator: estimates VRAM requirements for a chosen weight quantization method and context size (default context: 1,024 tokens).
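As a rough, back-of-the-envelope sketch of what such an estimate involves (assuming ~360M parameters, the 18 layers and 2 key-value heads listed above, a head dimension of 64, and an fp16 KV cache; real usage also includes activations and framework overhead):

```python
# Rough VRAM estimate for model weights plus KV cache at a given weight
# quantization and context size. Parameter count, head dimension, and the
# choice of quantization levels are assumptions; actual usage varies by runtime.

PARAMS = 0.36e9          # assumed total parameters (~0.3B class)
N_LAYERS = 18
N_KV_HEADS = 2
HEAD_DIM = 1024 // 16    # hidden size / query heads = 64

BYTES_PER_PARAM = {      # common weight quantization levels
    "fp16": 2.0,
    "int8": 1.0,
    "q4": 0.5,
}

def vram_estimate_gib(quant: str, context_tokens: int, kv_bytes: float = 2.0) -> float:
    """Approximate VRAM in GiB for model weights plus the KV cache."""
    weights = PARAMS * BYTES_PER_PARAM[quant]
    # KV cache: 2 tensors (K and V) per layer, per KV head, per cached token.
    kv_cache = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context_tokens * kv_bytes
    return (weights + kv_cache) / 1024**3

for quant in BYTES_PER_PARAM:
    for ctx in (1_024, 32_768, 131_072):
        print(f"{quant:>4} @ {ctx:>7} tokens: ~{vram_estimate_gib(quant, ctx):.2f} GiB")
```

Under these assumptions the fp16 weights come to roughly 0.7 GiB, and even at the full 131,072-token context the fp16 KV cache adds only about 1.1 GiB, since just 2 key-value heads per layer are cached.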