Ministral 3 14B

Parameters

14B

Context Length

256K

Modality

Multimodal

Architecture

Dense

License

Apache 2.0

Release Date

2 Dec 2025

Knowledge Cutoff

Jun 2025

Technical Specifications

Attention Structure

Grouped Query Attention (GQA)

Hidden Dimension Size

5120

Number of Layers

40

Attention Heads

32

Key-Value Heads

8

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

Rotary Position Embedding (RoPE)

Ministral 3 14B

Ministral 3 14B is a dense, multimodal transformer model engineered by Mistral AI to bridge the gap between edge-efficient computing and frontier-class intelligence. As the largest member of the Ministral 3 family, it employs a Cascade Distillation strategy, in which knowledge is progressively transferred from larger parent models, such as Mistral Small 3.1, into a more compact 14-billion-parameter footprint. The architecture pairs a 13.5-billion-parameter decoder-only language core with a frozen 410-million-parameter Vision Transformer (ViT) encoder, enabling the model to process interleaved image and text inputs with high precision.

The technical foundation of the model features 40 transformer layers and a hidden dimension of 5120, using Grouped Query Attention (GQA) with 32 query heads and 8 key-value heads to shrink the key-value cache and improve inference throughput. It incorporates modern architectural best practices, including RMSNorm for stable normalization, SwiGLU activation functions for enhanced non-linear processing, and Rotary Positional Embeddings (RoPE) extended with YaRN scaling. These components collectively support an expansive context window of 256,000 tokens, allowing the model to ingest massive document sets or sustain complex multi-turn agentic workflows without performance degradation.
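To make the GQA configuration concrete, the following is a minimal sketch of grouped-query attention using the head counts from the spec table (32 query heads sharing 8 key-value heads). The head dimension of 160 (5120 / 32) is derived from those figures rather than officially documented, and RoPE/YaRN scaling is omitted for brevity; this is an illustration of the mechanism, not the model's actual implementation.

```python
# Minimal GQA sketch with Ministral 3 14B's published head counts.
import torch
import torch.nn.functional as F

hidden_dim = 5120
n_q_heads  = 32
n_kv_heads = 8
head_dim   = hidden_dim // n_q_heads   # 160 (derived, not documented)
group_size = n_q_heads // n_kv_heads   # 4 query heads per KV head

def gqa(x, wq, wk, wv, wo):
    """One grouped-query attention pass over a (batch, seq, hidden) tensor."""
    b, s, _ = x.shape
    q = (x @ wq).view(b, s, n_q_heads, head_dim).transpose(1, 2)
    k = (x @ wk).view(b, s, n_kv_heads, head_dim).transpose(1, 2)
    v = (x @ wv).view(b, s, n_kv_heads, head_dim).transpose(1, 2)
    # Each KV head is shared by `group_size` query heads, so the KV cache
    # is 4x smaller than full multi-head attention would require.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return attn.transpose(1, 2).reshape(b, s, hidden_dim) @ wo

x  = torch.randn(1, 16, hidden_dim)
wq = torch.randn(hidden_dim, n_q_heads * head_dim) * 0.02
wk = torch.randn(hidden_dim, n_kv_heads * head_dim) * 0.02
wv = torch.randn(hidden_dim, n_kv_heads * head_dim) * 0.02
wo = torch.randn(hidden_dim, hidden_dim) * 0.02
print(gqa(x, wq, wk, wv, wo).shape)  # torch.Size([1, 16, 5120])
```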

Designed for sophisticated automation and private AI deployments, Ministral 3 14B excels in agentic tasks through native support for function calling and structured JSON outputs. Its training emphasizes efficiency and versatility, providing robust multilingual capabilities across more than 40 languages and high-tier performance in reasoning-heavy domains like mathematics and coding. By balancing a dense architectural structure with advanced quantization compatibility, the model is optimized for deployment on local workstations and enterprise edge hardware, offering a high-performance alternative to much larger cloud-based systems.
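As an illustration of the function-calling support described above, the sketch below sends a tool-enabled request to a locally served copy of the model, assuming an OpenAI-compatible chat-completions endpoint (e.g., one exposed by vLLM). The URL, model identifier, and `get_weather` tool are placeholders for illustration, not an official Mistral API.

```python
# Hypothetical function-calling request against a local OpenAI-compatible
# server. Endpoint URL, model name, and tool schema are all assumptions.
import json
import requests

payload = {
    "model": "ministral-3-14b",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "What is the weather in Paris right now?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
# A tool-capable model responds with a structured tool call rather than prose.
print(json.dumps(resp.json()["choices"][0]["message"], indent=2))
```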

About Ministral 3

Ministral 3 is a family of efficient edge models with vision capabilities, available in 3B, 8B, and 14B parameter sizes. The family is designed for edge deployment with multimodal and multilingual support, offering best-in-class performance in resource-constrained environments.


Evaluation Benchmarks

No evaluation benchmarks for Ministral 3 14B available.

GPU Requirements

VRAM requirements scale with the chosen weight quantization and the active context size (roughly 1k to 250k tokens); a rough estimate can be computed directly from the specification table, as sketched below.