
OLMo 3 32B Base

Parameters: 32B

Context Length: 65,536 tokens

Modality: Text

Architecture: Dense

License: Apache 2.0

Release Date: 25 Nov 2025

Training Data Cutoff: Dec 2024

Technical Specifications

Attention Structure: Grouped-Query Attention (GQA)

Hidden Dimension Size: 5120

Number of Layers: 64

Attention Heads: 40

Key-Value Heads: 8

Activation Function: SwiGLU

Normalization: RMSNorm

Position Embedding: Rotary Position Embedding (RoPE)

System Requirements

VRAM requirements for different quantization methods and context sizes

OLMo 3 32B Base

The OLMo 3 32B Base model, developed by the Allen Institute for AI (Ai2), is a foundational large language model designed to advance transparency and reproducibility in AI research. This variant, with 32 billion parameters, serves as the base for more specialized models within the OLMo 3 family, including Instruct and Think variants. Its primary purpose is to provide a robust, openly accessible, and auditable platform for further pretraining, fine-tuning, and experimentation in language model development. The model's complete lifecycle, encompassing training data, code, checkpoints, logs, and evaluation methodologies, is made publicly available to foster a deeper understanding of model behavior and facilitate scientific inquiry.

Architecturally, OLMo 3 32B Base is a dense, decoder-only transformer. It is configured with 64 layers and a hidden dimension size of 5120. The attention mechanism incorporates grouped-query attention (GQA), featuring 40 attention heads and 8 key-value heads, which contributes to efficient KV cache management. The model also employs a hybrid attention pattern, utilizing sliding-window attention across most layers and full-sequence attention in every fourth layer to balance local and global context processing. Rotary position embeddings (RoPE) with YaRN-style scaling extend the model's effective context length to 65,536 tokens. Normalization is implemented using RMSNorm, and the activation function within the MLP blocks is of a GeGLU/SwiGLU style, which enhances parameter efficiency. The training process leverages Flash Attention for computational efficiency.
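The head arithmetic behind these figures can be made concrete. The sketch below is illustrative only, assuming a simple dataclass rather than the actual OLMo configuration API; it shows the per-head dimension implied by the published sizes and how the 8 key-value heads shrink the KV cache relative to standard multi-head attention.

```python
from dataclasses import dataclass

@dataclass
class OLMo3BaseConfig:
    # Published OLMo 3 32B Base sizes; the class itself is illustrative,
    # not the actual OLMo configuration object.
    num_layers: int = 64
    hidden_size: int = 5120
    num_attention_heads: int = 40
    num_key_value_heads: int = 8           # grouped-query attention (GQA)
    max_position_embeddings: int = 65_536  # extended via YaRN-style RoPE scaling
    full_attention_every: int = 4          # full-sequence attention in every fourth layer

    @property
    def head_dim(self) -> int:
        return self.hidden_size // self.num_attention_heads  # 5120 / 40 = 128

    def kv_cache_bytes_per_token(self, bytes_per_value: int = 2) -> int:
        # Keys and values are cached only for the 8 KV heads, so the cache is
        # 40 / 8 = 5x smaller than full multi-head attention would require.
        return 2 * self.num_layers * self.num_key_value_heads * self.head_dim * bytes_per_value


cfg = OLMo3BaseConfig()
print(cfg.head_dim)                    # 128
print(cfg.kv_cache_bytes_per_token())  # 262144 bytes, i.e. ~256 KiB per token at bf16
```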

Pretrained on approximately 5.9 trillion tokens from the Dolma 3 dataset, OLMo 3 32B Base undergoes a staged training regimen that includes general pretraining, mid-training on targeted data, and a context extension phase. This methodical approach establishes a strong foundation for its capabilities in areas such as programming, reading comprehension, and mathematical problem-solving. The model maintains its performance across extended context lengths, providing a versatile base for developing specialized downstream applications. The comprehensive openness of its development artifacts allows researchers and developers to inspect, audit, and extend the model, supporting diverse applications from continued pretraining to targeted fine-tuning and reinforcement learning setups.
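For readers who want to start from this checkpoint, a minimal loading sketch with Hugging Face transformers is shown below. The Hub repository identifier is an assumption (Ai2 publishes OLMo weights on the Hugging Face Hub, but the exact OLMo 3 repository name should be checked against the official release), and the generation call is only a smoke test before any continued pretraining or fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical Hub identifier; verify against Ai2's official OLMo 3 release.
model_id = "allenai/Olmo-3-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~64 GB of weights for 32B parameters in bf16
    device_map="auto",           # shard across available GPUs
)

# Quick smoke test before continued pretraining or fine-tuning.
prompt = "Fully open language models matter because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```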

About OLMo 3

OLMo (Open Language Model) is a series of fully open language models designed to enable the science of language models. Released by the Allen Institute for AI (Ai2), OLMo 3 provides complete access to training data (Dolma 3), code, checkpoints, logs, and evaluation methodologies. The family includes Base models for pretraining research, Instruct variants for chat and tool use, and Think variants with chain-of-thought reasoning capabilities. All models are trained with a staged approach that includes pretraining, mid-training, and long-context phases.



Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for OLMo 3 32B Base.


GPU Requirements

VRAM requirements depend on the quantization method chosen for the model weights and on the context size (from 1K up to the full 64K tokens).
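As a rough guide in lieu of an interactive calculator, the sketch below estimates VRAM as weight memory under a given quantization plus the GQA KV cache at a given context length, using the architecture figures listed above. Activation memory and framework overhead are ignored, so the numbers are a lower bound, not a hardware recommendation.

```python
# Rough lower-bound VRAM estimate for OLMo 3 32B Base:
# quantized weight memory plus the GQA KV cache at a given context length.
NUM_PARAMS = 32e9                       # nominal 32B parameters
NUM_LAYERS, KV_HEADS, HEAD_DIM = 64, 8, 128

def estimate_vram_gib(bits_per_weight: float, context_tokens: int, kv_bytes: int = 2) -> float:
    weight_bytes = NUM_PARAMS * bits_per_weight / 8
    kv_cache_bytes = 2 * NUM_LAYERS * KV_HEADS * HEAD_DIM * kv_bytes * context_tokens
    return (weight_bytes + kv_cache_bytes) / 1024**3

for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    for ctx in (1_024, 32_768, 65_536):
        print(f"{name:>4} @ {ctx:>6} tokens: ~{estimate_vram_gib(bits, ctx):.1f} GiB")
```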