趋近智
| Field | Value |
|---|---|
| Parameters | - |
| Context Length | 400K |
| Modality | Text |
| Architecture | Dense |
| License | Proprietary |
| Release Date | 13 Nov 2025 |
| Training Data Cutoff | Sep 2024 |
| Attention Structure | Multi-Head Attention |
| Hidden Dimension Size | - |
| Number of Layers | - |
| Attention Heads | - |
| Key-Value Heads | - |
| Activation Function | - |
| Normalization | - |
| Position Embedding | Absolute Position Embedding |
GPT-5.1 Codex Max High is a specialized variant of the GPT-5.1 family, engineered specifically for high-capacity software development and autonomous engineering workflows. This model is constructed on an advanced reasoning stack and is optimized for long-horizon, agentic tasks such as project-scale refactoring, multi-step debugging, and vulnerability detection. It features a native capacity for multi-context window processing through a mechanism termed compaction, which allows the model to maintain state and coherence over extended development sessions that can span hundreds of thousands of tokens.
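The idea behind compaction can be illustrated with a small sketch: when a session transcript nears the context limit, older turns are replaced by a compact summary so the agent can keep working. This is a conceptual illustration only, not OpenAI's actual mechanism; the `summarize` function here is a stub standing in for a model call.

```python
# Conceptual sketch of compaction: replace older turns with a summary
# once the running transcript exceeds a token budget.
# NOT OpenAI's published mechanism -- an illustration under assumptions.

def summarize(turns):
    """Stub summarizer; a real agent would call a model here."""
    return f"[summary of {len(turns)} earlier turns]"

def compact(history, max_tokens, count_tokens, keep_recent=4):
    """Compact a session history once it exceeds max_tokens,
    keeping the most recent turns verbatim."""
    total = sum(count_tokens(t) for t in history)
    if total <= max_tokens or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

# Usage: token counts approximated as whitespace-separated words.
words = lambda t: len(t.split())
history = [f"turn {i}: " + "tok " * 50 for i in range(20)]
compacted = compact(history, max_tokens=300, count_tokens=words)
```

Because only the oldest turns are collapsed, the agent retains full detail for its most recent work while staying under the window limit.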
Technically, the model utilizes a dense architecture with multi-head attention (MHA) and absolute position embeddings. Unlike general-purpose variants, this Codex iteration is specifically pre-trained and fine-tuned on diverse software engineering datasets, mathematics, and technical research papers. It is the first in its series to include native training for operating within Windows environments, facilitating more direct integration with desktop-based IDEs and command-line interfaces. The architecture supports adjustable reasoning effort levels, enabling developers to prioritize between rapid code generation and deep architectural analysis.
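Selecting a reasoning-effort level is done per request. The sketch below builds a request payload in the shape of OpenAI's Responses API (`reasoning.effort`); the model name and the set of accepted effort values are assumptions here, so check the current API reference before relying on them.

```python
# Hedged sketch: constructing a request payload with an adjustable
# reasoning-effort level. Model name and effort values are assumptions.

def build_request(prompt, effort="high", model="gpt-5.1-codex-max"):
    """Return a Responses-API-style payload dict (no network call)."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported effort: {effort}")
    return {
        "model": model,
        "reasoning": {"effort": effort},  # deeper analysis vs. faster output
        "input": prompt,
    }

req = build_request("Refactor the payment module.", effort="high")
```

A higher effort setting trades latency for deeper architectural analysis; a lower one favors rapid code generation.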
In practical application, GPT-5.1 Codex Max High serves as a primary engine for AI-integrated development environments and automated code review pipelines. It is designed to function as an autonomous agent capable of persisting through complex tasks for several hours, iteratively fixing test failures and refining implementations. Its 400,000-token context window allows entire microservices or large modules to be analyzed in a single session, reducing the need for manual context slicing and improving the accuracy of cross-file dependency resolution.
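The "manual context slicing" a large window makes less necessary can be sketched as a token-budget packer: files are greedily grouped into batches that each fit under a budget. Token counts are approximated as whitespace-separated words for this illustration; a real pipeline would use the model's tokenizer.

```python
# Illustrative helper: greedily pack (name, text) files into batches
# that fit under a token budget (e.g. a 400K-token context window).
# Token cost is approximated by word count for the sketch.

def pack_files(files, budget):
    """Group files into consecutive batches whose total cost <= budget."""
    batches, current, used = [], [], 0
    for name, text in files:
        cost = len(text.split())
        if current and used + cost > budget:
            batches.append(current)  # close the full batch
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        batches.append(current)
    return batches

files = [("a.py", "x " * 300), ("b.py", "x " * 300), ("c.py", "x " * 300)]
batches = pack_files(files, budget=500)
```

With a 400K-token window, far fewer (often no) batches are needed, which is why cross-file dependencies can be resolved in one pass.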
The GPT-5 series is OpenAI's latest generation of language models, featuring advanced reasoning capabilities, extended context windows of up to 400K tokens, and specialized variants for coding, general intelligence, and efficiency. The series introduces improved thinking modes, strong performance across benchmarks, and variants optimized for different use cases, from high-capacity Pro models to efficient Nano models. It features native multimodal understanding, enhanced mathematical reasoning, and state-of-the-art coding abilities through the Codex variants.
Ranking
#1

| Benchmark | Score | Rank |
|---|---|---|
| Coding: LiveBench Coding | 0.81 | 🥈 2 |
| Data Analysis: LiveBench Data Analysis | 0.73 | 10 |