Parameters: 24B
Context Length: 128K
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 10 Jun 2025
Knowledge Cutoff: Oct 2023
Attention Structure: Grouped Query Attention
Hidden Dimension Size: 5120
Number of Layers: 40
Attention Heads: 32
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMS Normalization
Position Embedding: Rotary Position Embedding (RoPE)
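As a quick sanity check, the parameter count implied by these numbers can be reproduced with back-of-envelope arithmetic. In the sketch below, the feedforward width (32,768), head dimension (128), and vocabulary size (~131k) are assumptions carried over from the base Mistral Small configuration rather than values listed on this page.

```python
# Back-of-envelope parameter count from the specification above.
d_model  = 5120       # hidden dimension
n_layers = 40
n_heads  = 32         # attention (query) heads
n_kv     = 8          # key-value heads (GQA)
head_dim = 128        # assumed, from the base Mistral Small config
ffn_dim  = 32_768     # assumed SwiGLU intermediate width
vocab    = 131_072    # assumed tokenizer vocabulary size

attn = 2 * d_model * n_heads * head_dim   # W_q and W_o projections
attn += 2 * d_model * n_kv * head_dim     # W_k and W_v projections (GQA)
ffn = 3 * d_model * ffn_dim               # gate, up, and down projections
total = n_layers * (attn + ffn) + 2 * vocab * d_model  # + embeddings, LM head

print(f"~{total / 1e9:.1f}B parameters")  # ~23.6B, consistent with "24B"
```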
Magistral Small is an open-source reasoning model from Mistral AI with 24 billion parameters. Built on Mistral Small 3.1, it is designed for transparent, multi-step reasoning: it produces a traceable thought process in the user's language, which aids interpretability and auditability on complex tasks. The model supports multilingual reasoning across more than 24 languages, including widely used global languages such as English, French, German, Japanese, Korean, Chinese, Arabic, and Farsi.
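As a minimal illustration of working with that traceable thought process, the sketch below separates the reasoning trace from the final answer. It assumes the reasoning is delimited by <think>…</think> tags, as in Mistral's recommended Magistral system prompt; the exact delimiters should be checked against the model card.

```python
import re

# Minimal sketch: split a Magistral completion into its reasoning trace and
# final answer, assuming <think>...</think> delimiters (an assumption based
# on Mistral's suggested Magistral system prompt).
def split_reasoning(completion: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        return "", completion.strip()          # no trace emitted
    trace = match.group(1).strip()
    answer = completion[match.end():].strip()  # text after the closing tag
    return trace, answer

trace, answer = split_reasoning(
    "<think>2 dozen = 24, plus 3 leftover = 27.</think>The answer is 27."
)
print(trace)   # auditable chain-of-thought in the user's language
print(answer)  # final response
```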
From a technical perspective, Magistral Small employs a decoder-only transformer architecture with a hidden dimension of 5,120 across its 40 layers. The model uses Grouped Query Attention (GQA) with 32 attention heads and 8 key-value heads, which improves inference speed and reduces memory consumption relative to full Multi-Head Attention. Positional information is encoded with Rotary Positional Embeddings (RoPE), and the feedforward blocks combine SwiGLU activation functions with RMS Normalization for stable training dynamics. The architecture also integrates FlashAttention for accelerated processing. The model supports a 128,000-token context window, though optimal performance is typically observed with contexts up to 40,000 tokens.
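To make the GQA memory claim concrete, the following sketch compares KV-cache size at the recommended 40,000-token context for 8 key-value heads versus a full 32-head cache. The head dimension of 128 is an assumption taken from the base Mistral Small configuration, not a value listed on this page.

```python
# Rough KV-cache size at a 40k-token context, illustrating why 8 key-value
# heads (GQA) cost 4x less memory than a full 32-head (MHA) cache.
def kv_cache_gib(n_kv_heads, n_layers=40, head_dim=128,
                 seq_len=40_000, bytes_per_elem=2):  # fp16/bf16
    # 2 tensors (K and V) per layer, each [seq_len, n_kv_heads, head_dim]
    total = 2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per_elem
    return total / 2**30

print(f"GQA (8 KV heads): {kv_cache_gib(8):.1f} GiB")   # ~6.1 GiB
print(f"MHA (32 heads):   {kv_cache_gib(32):.1f} GiB")  # ~24.4 GiB
```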
Magistral Small is a text-only model (see Modality above) and is particularly suited for applications requiring structured calculations, programmatic logic, decision trees, and rule-based systems. Its design lends itself to a range of scenarios, including fast-response conversational agents, long-document understanding, and specialized domain-specific fine-tuning. Its capabilities extend to agentic AI workflows through native function calling and structured output generation, sketched below.
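As an illustration of the function-calling workflow, here is a minimal sketch that assumes the model is served behind an OpenAI-compatible endpoint (for example via vLLM). The endpoint URL, model identifier, and the get_weather tool are illustrative assumptions, not details from this page.

```python
from openai import OpenAI

# Minimal function-calling sketch against an assumed OpenAI-compatible
# endpoint serving Magistral Small (e.g., vLLM); URL and model name are
# illustrative.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistralai/Magistral-Small-2506",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)  # model-emitted function call
```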
Magistral is Mistral AI's first reasoning model series, purpose-built for transparent, step-by-step reasoning with native multilingual capabilities. It features chain-of-thought reasoning in the user's language with traceable thought processes, and it excels at domain-specific problems requiring multi-step logic, from legal research and financial forecasting to software development and creative storytelling. The series supports reasoning across numerous languages, including English, French, Spanish, German, Italian, Arabic, Russian, and Chinese.
No evaluation benchmarks are available for Magistral Small.
VRAM requirements vary with the chosen weight quantization method and the context size (the page's interactive calculator defaults to a 1,024-token context).
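The arithmetic behind such a calculator can be sketched as follows: weight memory scales with the quantization bit-width, while the KV cache grows with context length. The quantization options, flat 1.2x runtime-overhead factor, and head dimension below are illustrative assumptions, not figures from this page.

```python
# Sketch of the arithmetic behind a VRAM estimate: weight memory from the
# quantization bit-width, plus KV cache for the chosen context. Activations
# and framework buffers are approximated with an assumed 1.2x overhead.
PARAMS = 24e9
BITS = {"fp16": 16, "int8": 8, "q4": 4}  # illustrative quantization options

def vram_gib(quant="q4", context=1024, n_layers=40, n_kv_heads=8,
             head_dim=128, kv_bytes=2, overhead=1.2):
    weights = PARAMS * BITS[quant] / 8                              # bytes
    kv_cache = 2 * n_layers * context * n_kv_heads * head_dim * kv_bytes
    return (weights + kv_cache) * overhead / 2**30

for quant in BITS:
    print(f"{quant:>5} @ 1,024 tokens: {vram_gib(quant):.1f} GiB")
```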