
GPT-OSS 20B

Total Parameters

21B

Context Length

128K

Modality

Text

Architecture

Mixture of Experts (MoE)

License

Apache 2.0

Release Date

5 Aug 2025

Training Data Cutoff

Jun 2024

Technical Specifications

Active Parameters per Token

3.6B

Number of Experts

32

Active Experts

4

Attention Structure

Grouped-Query Attention (GQA)

Hidden Dimension

2880

Number of Layers

24

Attention Heads

64

Key-Value Heads

8

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

Rotary Position Embedding (RoPE)
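The specification sheet above can be collected into a config dict for back-of-the-envelope checks. This is a sketch for illustration; the key names are hypothetical, not the official checkpoint's config schema.

```python
# Hypothetical config mirroring the spec sheet above (illustrative key
# names, not the official checkpoint's configuration fields).
GPT_OSS_20B = {
    "total_params": 21_000_000_000,
    "active_params": 3_600_000_000,
    "num_layers": 24,
    "hidden_size": 2880,
    "num_attention_heads": 64,
    "num_key_value_heads": 8,
    "num_experts": 32,
    "experts_per_token": 4,
    "context_length": 128_000,
}

# GQA: each key/value head serves a group of query heads.
heads = GPT_OSS_20B["num_attention_heads"]
kv_heads = GPT_OSS_20B["num_key_value_heads"]
queries_per_kv_group = heads // kv_heads  # 8 query heads share each KV head

# MoE sparsity: fraction of weights touched per token.
sparsity = GPT_OSS_20B["active_params"] / GPT_OSS_20B["total_params"]  # ~0.17
```

The GQA grouping (64 query heads over 8 KV heads) is what shrinks the KV cache, and the roughly 17% activation ratio is why the 21B model runs in the memory budget of a much smaller dense model.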

GPT-OSS 20B

GPT-OSS 20B is a text-based language model developed by OpenAI, specifically engineered to deliver high-performance reasoning on consumer-grade hardware. As part of the GPT-OSS family, this model balances computational efficiency with complex task execution, utilizing a sparse architecture to maintain a low memory footprint. It is designed to function as a flexible component in local and enterprise environments, where data privacy and low-latency response times are critical requirements.

The model utilizes a Mixture-of-Experts (MoE) transformer architecture consisting of 24 layers. While the total parameter count is 21 billion, the system only activates 3.6 billion parameters per token during the forward pass. This sparsity is achieved through a routing mechanism that selects four active experts from a pool of 32 for each token. The architecture incorporates several modern optimizations, including SwiGLU activation functions, Root Mean Square (RMS) normalization, and Grouped-Query Attention (GQA) with eight key-value heads to optimize memory throughput. It also supports a native context window of 128,000 tokens using Rotary Positional Embeddings (RoPE).
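The top-4-of-32 routing described above can be sketched as follows. This is a minimal illustration of per-token top-k expert selection with renormalized gate weights, not the model's actual router implementation.

```python
import numpy as np

def route_tokens(router_logits: np.ndarray, top_k: int = 4):
    """Select top_k experts per token and renormalize their gate weights.

    router_logits: (num_tokens, num_experts) scores from the router.
    Returns (indices, weights), each of shape (num_tokens, top_k).
    """
    # Indices of the top_k highest-scoring experts for each token.
    idx = np.argsort(router_logits, axis=-1)[:, -top_k:]
    top_logits = np.take_along_axis(router_logits, idx, axis=-1)
    # Softmax over only the selected experts' logits, so the
    # top_k gate weights for each token sum to 1.
    w = np.exp(top_logits - top_logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return idx, w

rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 32))        # 3 tokens, 32 experts
indices, weights = route_tokens(logits)  # 4 active experts per token
```

Each token's output is then the weighted sum of its four selected experts' outputs; the remaining 28 experts are never evaluated for that token, which is where the 3.6B-of-21B active-parameter figure comes from.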

Functionally, GPT-OSS 20B is optimized for agentic workflows and complex reasoning tasks. It supports features such as native tool use, function calling, and a configurable reasoning effort system that allows developers to adjust the model's processing depth based on the specific latency needs of the application. The model is trained using a specialized response format to facilitate consistent structured outputs and long-form chain-of-thought reasoning, making it suitable for scientific analysis, code generation, and specialized technical assistance on local devices.
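The configurable reasoning effort might be used as in the sketch below, which builds an OpenAI-compatible chat payload for a locally served copy of the model. The model name, endpoint convention, and the system-message mechanism for setting effort are assumptions that depend on the serving stack, not a fixed API.

```python
import json

def build_chat_request(user_msg: str, reasoning_effort: str = "medium") -> dict:
    """Build an OpenAI-compatible chat payload for a local gpt-oss-20b server.

    reasoning_effort ("low" | "medium" | "high") trades latency for deeper
    chain-of-thought. Passing it via the system message is an assumption
    here; check your serving stack's documentation for the exact mechanism.
    """
    if reasoning_effort not in ("low", "medium", "high"):
        raise ValueError(f"unknown effort level: {reasoning_effort}")
    return {
        "model": "gpt-oss-20b",  # assumed local model identifier
        "messages": [
            {"role": "system", "content": f"Reasoning: {reasoning_effort}"},
            {"role": "user", "content": user_msg},
        ],
    }

payload = build_chat_request("Summarize the MoE routing step.", "high")
print(json.dumps(payload, indent=2))
```

Lower effort settings suit latency-sensitive interactive use; higher settings are for the long-form chain-of-thought tasks the model is trained for.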

About GPT-OSS

Open-weight language models from OpenAI.



Benchmarks

Rank

#70

Benchmark Scores

0.86 (rank 6)

General Knowledge, MMLU: 0.85 (rank 11)

Web Development, WebDev Arena: 1317 (rank 38)

Rankings

Overall Rank

#70

Coding Rank

#51

Model Transparency

Overall Score

B (67 / 100)
