| Specification | Value |
|---|---|
| Total Parameters | 117B |
| Context Length | 128K |
| Modality | Text |
| Architecture | Mixture of Experts (MoE) |
| License | Apache 2.0 |
| Release Date | 5 Aug 2025 |
| Knowledge Cutoff | Jun 2024 |
| Active Parameters | 5.1B |
| Number of Experts | 128 |
| Active Experts | 4 |
| Attention Structure | Multi-Head Attention |
| Hidden Dimension Size | 2880 |
| Number of Layers | 36 |
| Attention Heads | - |
| Key-Value Heads | - |
| Activation Function | SwiGLU |
| Normalization | RMS Normalization |
| Position Embedding | Rotary Position Embedding (RoPE) |
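The gap between the 117B total and 5.1B active parameters comes from the MoE design: for each token, a router selects only 4 of the 128 experts, so most expert weights sit idle on any given forward pass. The sketch below illustrates top-4 routing in isolation; the expert count and hidden size come from the table above, while the router weights and the softmax-over-selected-logits gating are illustrative assumptions, not OpenAI's actual implementation.

```python
import numpy as np

NUM_EXPERTS = 128   # experts per MoE layer (from the spec table)
TOP_K = 4           # active experts per token (from the spec table)
HIDDEN = 2880       # hidden dimension size (from the spec table)

rng = np.random.default_rng(0)
# Hypothetical router weights; the real model learns these during training.
router_w = rng.standard_normal((HIDDEN, NUM_EXPERTS)) * 0.02

def route(token_hidden: np.ndarray):
    """Pick the top-4 experts for one token and return normalized gating weights."""
    logits = token_hidden @ router_w                 # one score per expert
    top = np.argpartition(logits, -TOP_K)[-TOP_K:]   # indices of the 4 highest-scoring experts
    gates = np.exp(logits[top] - logits[top].max())  # softmax over the selected experts only
    return top, gates / gates.sum()

experts, gates = route(rng.standard_normal(HIDDEN))
print("active experts:", experts, "gating weights:", np.round(gates, 3))
```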
VRAM requirements for different quantization methods and context sizes
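The interactive calculator itself is not reproduced here. As a rough substitute, the back-of-the-envelope sketch below estimates serving memory from the weight quantization and context size; the per-token KV-cache cost and the runtime overhead factor are assumed values, not figures from the calculator.

```python
def estimate_vram_gb(total_params_b: float = 117.0,    # total parameters, in billions
                     bits_per_weight: float = 4.0,     # e.g. 4-bit quantized weights, 16 for BF16
                     context_tokens: int = 1024,       # context size being served
                     kv_mb_per_token: float = 0.1,     # assumed KV-cache cost per token
                     overhead: float = 1.1) -> float:  # assumed runtime/activation overhead
    # All 117B parameters must stay resident even though only 5.1B are active per token,
    # because any expert can be selected for the next token.
    weights_gb = total_params_b * bits_per_weight / 8
    kv_cache_gb = context_tokens * kv_mb_per_token / 1024
    return (weights_gb + kv_cache_gb) * overhead

# Example: 4-bit weights at a 1,024-token context -> roughly 64 GB
print(f"{estimate_vram_gb():.1f} GB")
```

At 16-bit precision the weight term alone grows to roughly 234 GB (117B × 2 bytes), which is why the choice of quantization dominates the estimate at short contexts.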
GPT-OSS 120B is a large open-weight model from OpenAI, designed to run in data centers as well as on high-end desktops and laptops. It targets advanced reasoning, agentic tasks, and diverse developer use cases, and is text-only for both input and output.
| Benchmark | Score | Rank |
|---|---|---|
| Summarization (ProLLM Summarization) | 0.98 | 🥉 3 |
| General Knowledge (MMLU) | 0.90 | 🥉 3 |
| StackUnseen (ProLLM Stack Unseen) | 0.93 | 4 |
| Professional Knowledge (MMLU Pro) | 0.81 | 20 |
| Coding (Aider Coding) | 0.42 | 28 |
| Mathematics (LiveBench Mathematics) | 0.69 | 30 |
| Agentic Coding (LiveBench Agentic) | 0.17 | 40 |
| Graduate-Level QA (GPQA) | 0.81 | 44 |
| Web Development (WebDev Arena) | 1092.96 | 45 |
| Coding (LiveBench Coding) | 0.60 | 49 |
| Reasoning (LiveBench Reasoning) | 0.39 | 50 |
| Data Analysis (LiveBench Data Analysis) | 0.57 | 50 |
Overall Rank: #86
Coding Rank: #79