
Ministral 3 3B

Parameters: 3B
Context Length: 256K
Modality: Multimodal
Architecture: Dense
License: Apache 2.0
Release Date: 2 Dec 2025
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Grouped Query Attention (GQA)
Hidden Dimension Size: 3072
Number of Layers: 26
Attention Heads: 32
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: Rotary Position Embedding (RoPE)

Ministral 3 3B

Ministral 3 3B is a compact, multimodal language model engineered by Mistral AI for efficient execution in edge computing environments and resource-constrained scenarios. The model architecture integrates a 3.4 billion parameter language decoder with a 410 million parameter Vision Transformer (ViT) encoder, yielding a combined capacity of approximately 3.8 billion parameters. This hybrid design enables the simultaneous processing of text and visual inputs, facilitating advanced tasks such as image captioning, visual question answering, and multimodal data extraction while maintaining a low computational overhead.

Technically, Ministral 3 3B follows a dense Transformer-based decoder-only architecture that leverages Grouped Query Attention (GQA) with 32 query heads and 8 key-value heads to optimize memory bandwidth and inference speed. It employs Rotary Positional Embeddings (RoPE) enhanced with YaRN (Yet another RoPE extensioN) and position-based softmax temperature scaling to support an extensive context window of up to 256,000 tokens. To further enhance efficiency at this scale, the 3B variant utilizes tied input-output embeddings, preventing vocabulary parameters from disproportionately increasing the total model size. The vision component utilizes a frozen ViT encoder derived from the Mistral Small 3.1 architecture, coupled with a newly trained multimodal projection layer.
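To make the attention layout concrete, here is a minimal PyTorch sketch of the grouped-query pattern using the head counts above (32 query heads, 8 key-value heads, hidden size 3072, head dimension 96). The weight names, toy initialization, and omission of RoPE and KV caching are simplifications for illustration, not the model's actual implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative configuration taken from the spec table above.
HIDDEN = 3072                  # hidden dimension
N_HEADS = 32                   # query heads
N_KV_HEADS = 8                 # key-value heads
HEAD_DIM = HIDDEN // N_HEADS   # 96

def grouped_query_attention(x, wq, wk, wv, wo):
    """One GQA layer over token embeddings (no RoPE or KV cache,
    to keep the sketch focused on the head grouping)."""
    b, t, _ = x.shape
    q = (x @ wq).view(b, t, N_HEADS, HEAD_DIM).transpose(1, 2)     # (b, 32, t, 96)
    k = (x @ wk).view(b, t, N_KV_HEADS, HEAD_DIM).transpose(1, 2)  # (b, 8, t, 96)
    v = (x @ wv).view(b, t, N_KV_HEADS, HEAD_DIM).transpose(1, 2)
    # Each KV head serves N_HEADS // N_KV_HEADS = 4 query heads.
    k = k.repeat_interleave(N_HEADS // N_KV_HEADS, dim=1)          # (b, 32, t, 96)
    v = v.repeat_interleave(N_HEADS // N_KV_HEADS, dim=1)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2).reshape(b, t, HIDDEN) @ wo

# Toy usage: random weights, a sequence of 16 tokens.
x = torch.randn(1, 16, HIDDEN)
wq = torch.randn(HIDDEN, N_HEADS * HEAD_DIM) * 0.02
wk = torch.randn(HIDDEN, N_KV_HEADS * HEAD_DIM) * 0.02
wv = torch.randn(HIDDEN, N_KV_HEADS * HEAD_DIM) * 0.02
wo = torch.randn(HIDDEN, HIDDEN) * 0.02
print(grouped_query_attention(x, wq, wk, wv, wo).shape)  # torch.Size([1, 16, 3072])
```

Because only 8 key-value heads need to be cached, the KV cache is a quarter of the size it would be under full multi-head attention, which is the main inference-time benefit of GQA at long context lengths.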

The model is optimized for high-performance on-device applications, offering native support for function calling and structured JSON output to enable complex agentic workflows. It incorporates architectural refinements such as SwiGLU activation and RMSNorm to ensure stability and efficiency during local inference. By supporting dozens of languages and featuring a high-context capacity, Ministral 3 3B is positioned as a versatile solution for real-time translation, local content generation, and privacy-focused intelligent assistants operating directly on user hardware.
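As a sketch of how the function-calling support is typically exercised through Hugging Face transformers: the model id below is an assumption (check the actual repository name on the Hub), and the example assumes the tokenizer's chat template accepts a `tools` argument, as the model's native function-calling support implies.

```python
from transformers import AutoTokenizer

# Model id is illustrative, not verified against the Hugging Face Hub.
MODEL_ID = "mistralai/Ministral-3-3B-Instruct"

def get_weather(city: str) -> str:
    """Return the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# apply_chat_template renders the tool schema (extracted from the function
# signature and docstring) into the model's native function-calling format;
# the model then emits a structured tool call in its response.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```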

About Ministral 3

Ministral 3 is a family of efficient edge models with vision capabilities, available in 3B, 8B, and 14B parameter sizes. The family is designed for edge deployment with multimodal and multilingual support, offering best-in-class performance in resource-constrained environments.



Evaluation Benchmarks

No evaluation benchmarks are available for Ministral 3 3B.

Rankings

Overall Rank: -
Coding Rank: -

GPU Requirements

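VRAM for local inference is dominated by two terms: the weights, which scale with parameter count times bytes per parameter (set by the chosen quantization), and the KV cache, which grows linearly with context length, layer count, and key-value head count. The back-of-envelope estimate below uses the specification values above, assumes an fp16 KV cache, and ignores activation and runtime overhead, so treat it as a floor rather than an exact requirement.

```python
# Rough VRAM estimate for Ministral 3 3B; constants come from the spec table.
PARAMS = 3.8e9          # total parameters (language decoder + vision encoder)
LAYERS = 26
KV_HEADS = 8
HEAD_DIM = 3072 // 32   # 96

def weight_gib(bits_per_param: float) -> float:
    """Memory for the model weights at a given quantization width."""
    return PARAMS * bits_per_param / 8 / 2**30

def kv_cache_gib(context_tokens: int, bytes_per_value: int = 2) -> float:
    """KV cache: K and V, per layer, per KV head, per head dim (fp16 default)."""
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * context_tokens * bytes_per_value / 2**30

for bits in (16, 8, 4):
    print(f"weights @ {bits}-bit: {weight_gib(bits):5.1f} GiB")
for ctx in (1_024, 128_000, 256_000):
    print(f"KV cache @ {ctx:>7,} tokens: {kv_cache_gib(ctx):5.1f} GiB")
```

At the full 256K context the fp16 KV cache (roughly 19 GiB) dwarfs even the unquantized weights (roughly 7 GiB), which is why long-context edge deployments typically quantize the cache or cap the context well below the maximum.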
