
MiniMax M2

Total Parameters: 229B
Context Length: 128K
Modality: Text
Architecture: Mixture of Experts (MoE)
License: MIT
Release Date: 7 Nov 2025
Knowledge Cutoff: -

Technical Specifications

Active Parameters: 10.0B
Number of Experts: -
Active Experts: 2
Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding

System Requirements

VRAM requirements for MiniMax M2 depend on the chosen weight quantization and the context size; a rough estimation sketch follows below.
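
As a rough guide, serving an MoE model requires keeping all expert weights resident in memory (total parameters times bytes per weight for the chosen quantization) plus a KV cache that grows with context length, even though only about 10B parameters are active per token. The Python sketch below is a back-of-the-envelope estimator, not a measurement; the layer count, KV-head count, and head size are placeholders, since they are not listed in the spec table above.

```python
def estimate_vram_gb(
    total_params_b: float,   # total parameters in billions (all experts stay resident)
    bits_per_weight: float,  # e.g. 16 for FP16, 8 for Q8, 4 for Q4
    context_tokens: int,
    n_layers: int,           # placeholder: not published in the spec table
    n_kv_heads: int,         # placeholder
    head_dim: int,           # placeholder
    kv_bits: float = 16.0,
    overhead: float = 1.10,  # ~10% for activations, buffers, fragmentation
) -> float:
    """Back-of-the-envelope VRAM estimate in gigabytes."""
    weights_gb = total_params_b * 1e9 * (bits_per_weight / 8) / 1e9
    # KV cache: 2 tensors (K and V) * layers * KV heads * head dim * context * bytes
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context_tokens * (kv_bits / 8) / 1e9
    return (weights_gb + kv_gb) * overhead

# Example: 229B weights at 4-bit with a 128K context and assumed transformer
# dimensions (illustrative values, not MiniMax M2's real configuration).
print(round(estimate_vram_gb(229, 4, 128_000, n_layers=60, n_kv_heads=8, head_dim=128), 1), "GB")
```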

MiniMax M2

MiniMax M2 is a Mixture of Experts (MoE) model developed by MiniMax, engineered for high performance in coding and agentic tasks. The model is designed to deliver advanced capabilities while optimizing cost and inference speed, making it suitable for practical deployment. It supports end-to-end developer workflows, including multi-file edits, code-run-fix loops, and long-horizon toolchains.
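
To make the "code-run-fix loop" concrete, the sketch below shows one way such a loop can be driven from the outside: run the test suite, and if it fails, hand the failure output back to the model for a revised patch. The `ask_model_for_patch` and `apply_patch` helpers are hypothetical placeholders supplied by the caller, not part of any MiniMax API; the sketch only illustrates the control flow.

```python
import subprocess

def code_run_fix_loop(ask_model_for_patch, apply_patch, max_iters: int = 5) -> bool:
    """Minimal code-run-fix loop: run tests, feed failures back to the model, retry.

    `ask_model_for_patch` and `apply_patch` are hypothetical callables provided
    by the caller; this is a sketch of the loop, not a real integration.
    """
    for _ in range(max_iters):
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True                              # all tests pass, loop is done
        failure_log = result.stdout + result.stderr  # collect the failing output
        patch = ask_model_for_patch(failure_log)     # model proposes a fix
        apply_patch(patch)                           # apply it and run the tests again
    return False
```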

Architecturally, MiniMax M2 employs a sparse MoE transformer comprising roughly 230 billion total parameters, of which only about 10 billion are activated for each token during inference. This selective activation sharply reduces the compute required per token while preserving a large overall capacity for knowledge and reasoning. The model is also characterized as a "full attention" design, implying a standard Multi-Head Attention (MHA) mechanism applied over the entire context rather than a sparse or linear approximation.
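
The "10B of ~230B parameters active per token" behavior comes from top-k expert routing: a small gating network scores every expert for each token, and only the highest-scoring experts run (the spec table above lists 2 active experts). The PyTorch sketch below illustrates the general mechanism; the model width, expert count, and expert architecture are illustrative placeholders, not MiniMax M2's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Illustrative sparse MoE layer: route each token to its top-k experts.

    Dimensions and expert count are placeholders, not MiniMax M2's real config.
    """
    def __init__(self, d_model: int = 512, n_experts: int = 16, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)   # router scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (tokens, d_model)
        scores = self.gate(x)                                 # (tokens, n_experts)
        weights, idx = torch.topk(scores, self.k, dim=-1)     # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                            # only selected experts execute
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(8, 512)
print(TopKMoE()(x).shape)   # torch.Size([8, 512])
```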

Purpose-built for AI agent workflows and coding tasks, MiniMax M2 provides native support for integrating external tools such as shell environments, web browsers, and Python interpreters. This enables the model to facilitate complex, multi-step processes and robust tool-calling sequences. The model's efficiency allows for flexible deployment across various inference frameworks. Its design supports fast feedback loops, a critical attribute for environments like integrated development environments (IDEs) and continuous integration (CI) pipelines. An important operational aspect is the model's ability to maintain reasoning traces between turns, which is integral for consistent agent performance and improved auditability of its decision-making processes.
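
In practice, tool use is typically wired up through an OpenAI-compatible chat-completions interface exposed by the serving framework; the loop below sketches the general pattern of executing tool calls and passing the results, along with the full assistant message (including any reasoning content), back into the next turn. The endpoint URL, model name, and tool definition are assumptions for illustration, not values documented on this page.

```python
import json
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint; URL, key, and model name are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "run_python",  # hypothetical tool exposed to the model
        "description": "Execute a Python snippet and return its stdout.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

def run_python(code: str) -> str:
    # Placeholder executor; a real agent would sandbox this.
    import io, contextlib
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

messages = [{"role": "user", "content": "Compute the 10th Fibonacci number with Python."}]
for _ in range(8):  # bounded agent loop
    reply = client.chat.completions.create(
        model="MiniMax-M2", messages=messages, tools=tools
    ).choices[0].message
    # Keep the full assistant message in history, since the model expects its
    # reasoning traces to be carried between turns.
    messages.append(reply.model_dump(exclude_none=True))
    if not reply.tool_calls:
        print(reply.content)
        break
    for call in reply.tool_calls:
        args = json.loads(call.function.arguments)
        result = run_python(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```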

About MiniMax M2

MiniMax's efficient MoE models built for coding and agentic workflows.



Evaluation Benchmarks

No evaluation benchmarks are currently listed for MiniMax M2.

