
OLMo 3 32B Base

Parameters

32B

Context Length

65,536 tokens

Modality

Text

Architecture

Dense

License

Apache 2.0

Release Date

25 Nov 2025

Knowledge Cutoff

Dec 2024

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

5120

Number of Layers

64

Attention Heads

40

Key-Value Heads

8

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

Rotary Position Embedding (RoPE) with YaRN scaling

System Requirements

VRAM requirements depend on the weight quantization method and the context size.
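
A rough estimate follows directly from the published architecture: weight memory scales with the bits stored per parameter, and the grouped-query-attention KV cache scales with context length. The sketch below is a back-of-envelope approximation only, assuming weight-only quantization, an fp16 KV cache, and ignoring activations and runtime overhead; actual requirements vary by inference framework.

```python
# Back-of-envelope VRAM estimate for OLMo 3 32B Base.
# Assumes weight-only quantization and an fp16 KV cache; ignores
# activations, framework overhead, and memory fragmentation.
N_PARAMS = 32e9
N_LAYERS, N_KV_HEADS, HEAD_DIM = 64, 8, 128   # head_dim = 5120 / 40 heads

def vram_gb(bits_per_weight: float, context_tokens: int, kv_bytes: int = 2) -> float:
    weights = N_PARAMS * bits_per_weight / 8
    # KV cache per token: 2 (K and V) * layers * kv_heads * head_dim * bytes
    kv_cache = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * kv_bytes * context_tokens
    return (weights + kv_cache) / 1e9

for bits, label in [(16, "fp16"), (8, "int8"), (4, "int4")]:
    for ctx in (1_024, 32_768, 65_536):
        print(f"{label:>5} @ {ctx:>6} tokens: ~{vram_gb(bits, ctx):.0f} GB")
```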

OLMo 3 32B Base

The OLMo 3 32B Base model, developed by the Allen Institute for AI (Ai2), is a foundational large language model designed to advance transparency and reproducibility in AI research. This variant, with 32 billion parameters, serves as the base for more specialized models within the OLMo 3 family, including Instruct and Think variants. Its primary purpose is to provide a robust, openly accessible, and auditable platform for further pretraining, fine-tuning, and experimentation in language model development. The model's complete lifecycle, encompassing training data, code, checkpoints, logs, and evaluation methodologies, is made publicly available to foster a deeper understanding of model behavior and facilitate scientific inquiry.

Architecturally, OLMo 3 32B Base is a dense, decoder-only transformer with 64 layers and a hidden dimension of 5120. The attention mechanism uses grouped-query attention (GQA), with 40 attention heads sharing 8 key-value heads, which keeps the KV cache compact. The model also employs a hybrid attention pattern, using sliding-window attention in most layers and full-sequence attention in every fourth layer to balance local and global context processing. Rotary position embeddings (RoPE) with YaRN-style scaling extend the effective context length to 65,536 tokens. Normalization is implemented with RMSNorm, and the MLP blocks use a SwiGLU-style gated activation, which improves parameter efficiency. Training leverages Flash Attention for computational efficiency.
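
To make the grouped-query attention configuration concrete, the sketch below shows how 40 query heads share 8 key-value heads with a head dimension of 5120 / 40 = 128. It uses plain PyTorch; the variable names are illustrative and not taken from the OLMo codebase.

```python
# GQA shapes implied by the card: 40 query heads, 8 KV heads,
# so each KV head serves a group of 5 query heads.
import torch
import torch.nn.functional as F

hidden_size, n_heads, n_kv_heads = 5120, 40, 8
head_dim = hidden_size // n_heads        # 128
group_size = n_heads // n_kv_heads       # 5 query heads per KV head

batch, seq_len = 1, 16
q = torch.randn(batch, n_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Expand each KV head across its group of query heads before attention.
k_exp = k.repeat_interleave(group_size, dim=1)   # (1, 40, 16, 128)
v_exp = v.repeat_interleave(group_size, dim=1)

out = F.scaled_dot_product_attention(q, k_exp, v_exp, is_causal=True)
print(out.shape)   # torch.Size([1, 40, 16, 128])
```

Because only 8 of the 40 heads are materialized in the KV cache, cache memory per token is one fifth of what full multi-head attention would require at the same hidden size.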

Pretrained on approximately 5.9 trillion tokens from the Dolma 3 dataset, OLMo 3 32B Base undergoes a staged training regimen that includes general pretraining, mid-training on targeted data, and a context extension phase. This methodical approach establishes a strong foundation for its capabilities in areas such as programming, reading comprehension, and mathematical problem-solving. The model maintains its performance across extended context lengths, providing a versatile base for developing specialized downstream applications. The comprehensive openness of its development artifacts allows researchers and developers to inspect, audit, and extend the model, supporting diverse applications from continued pretraining to targeted fine-tuning and reinforcement learning setups.
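
For experimentation, a checkpoint published in a Transformers-compatible format can be loaded in the usual way. The following is a minimal sketch, assuming the Hugging Face Transformers library and hardware with enough memory for the 32B weights; the repository id is a placeholder, so check Ai2's Hugging Face organization for the exact name.

```python
# Minimal load-and-generate sketch for continued pretraining or fine-tuning
# experiments. The model id below is a placeholder, not a confirmed repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-3-32B"  # placeholder id; verify before use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~64 GB of weights in bf16
    device_map="auto",           # shard across available GPUs
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```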

About OLMo 3

OLMo (Open Language Model) is a series of fully open language models designed to enable the science of language models. Released by the Allen Institute for AI (Ai2), OLMo 3 provides complete access to training data (Dolma 3), code, checkpoints, logs, and evaluation methodologies. The family includes Base models for pretraining research, Instruct variants for chat and tool use, and Think variants with chain-of-thought reasoning capabilities. All models are trained with a staged approach that includes pretraining, mid-training, and long-context phases.



Evaluation Benchmarks

No evaluation benchmarks are available for OLMo 3 32B Base.
