Total Parameters
400B
Context Length
1M
Modality
Multimodal
Architecture
Mixture of Experts (MoE)
License
Llama 4 Community License Agreement
Release Date
5 Apr 2025
Knowledge Cutoff
Aug 2024
Active Parameters
17.0B
Number of Experts
128
Active Experts
2
Attention Structure
Grouped-Query Attention
Hidden Dimension Size
12288
Number of Layers
120
Attention Heads
96
Key-Value Heads
8
Activation Function
-
Normalization
RMS Normalization
Position Embedding
iRoPE
The Llama 4 Maverick model is a natively multimodal large language model developed by Meta, released as part of the Llama 4 model family. Its primary purpose is to deliver advanced capabilities in text and image understanding, supporting a wide range of applications including assistant-like conversational AI, creative content generation, complex reasoning, and code generation. Designed for both commercial and research deployment, Llama 4 Maverick aims to provide high-quality performance with improved cost efficiency.
From an architectural perspective, Llama 4 Maverick uses a Mixture-of-Experts (MoE) design, a significant departure from Meta's previous dense transformer models. It comprises 400 billion total parameters, of which only 17 billion are active per token during inference: each token is routed to a small subset of the model's 128 experts, and the network alternates dense and MoE layers. Text and images are integrated through an early fusion mechanism, allowing multimodal inputs to be processed jointly from the earliest layers. The architecture also incorporates iRoPE position embeddings to manage and scale context length.
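To make the routing concrete, here is a minimal PyTorch sketch of a top-1-routed MoE block with a shared expert. It assumes the common pattern in which each token activates one routed expert plus one always-on shared expert, consistent with the 128-expert / 2-active-expert figures in the spec table; all layer sizes are illustrative placeholders, not Maverick's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1MoELayer(nn.Module):
    """Minimal sketch of a routed-plus-shared MoE feed-forward block.

    Layer sizes are illustrative placeholders, not Maverick's real config.
    """

    def __init__(self, d_model: int = 512, d_ff: int = 1024, n_experts: int = 128):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)

        def ffn() -> nn.Module:
            return nn.Sequential(
                nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model)
            )

        self.experts = nn.ModuleList(ffn() for _ in range(n_experts))
        self.shared_expert = ffn()  # runs on every token, regardless of routing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)  # (n_tokens, n_experts)
        weight, idx = gate.max(dim=-1)            # top-1 routed expert per token
        routed = torch.zeros_like(x)
        for e in idx.unique().tolist():           # invoke each selected expert once
            mask = idx == e
            routed[mask] = weight[mask].unsqueeze(-1) * self.experts[e](x[mask])
        # Two experts fire per token: the routed one plus the shared one.
        return routed + self.shared_expert(x)

layer = Top1MoELayer()
out = layer(torch.randn(4, 512))  # 4 tokens in -> (4, 512) out
```

Because only one routed expert runs per token, the compute per token stays close to that of a much smaller dense model even though all expert weights must remain resident in memory.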
Llama 4 Maverick performs well across diverse benchmarks covering coding, reasoning, multilingual tasks, long-context processing, and image understanding. It is engineered for high throughput and is suitable for production environments that demand both low latency and accuracy, making it a fit for deployments that combine sophisticated multimodal interaction with efficient resource use.
Meta's Llama 4 model family implements a Mixture-of-Experts (MoE) architecture for efficient scaling. It features native multimodality through early fusion of text, images, and video. This iteration also supports significantly extended context lengths, with models capable of processing up to 10 million tokens.
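To make "early fusion" concrete, here is a hypothetical sketch in the same spirit: image patches are projected into the token embedding space and concatenated with text embeddings into a single sequence before the first transformer layer. All names and sizes are illustrative assumptions, not Meta's implementation.

```python
import torch
import torch.nn as nn

class EarlyFusionEmbedder(nn.Module):
    """Sketch: image patches and text tokens become one input sequence."""

    def __init__(self, vocab_size: int = 32_000, d_model: int = 512,
                 patch_dim: int = 3 * 14 * 14):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.patch_proj = nn.Linear(patch_dim, d_model)  # patches -> token space

    def forward(self, text_ids: torch.Tensor,
                image_patches: torch.Tensor) -> torch.Tensor:
        # text_ids: (n_text,) int64; image_patches: (n_patches, patch_dim) float
        text_tokens = self.text_embed(text_ids)
        image_tokens = self.patch_proj(image_patches)
        # Early fusion: the transformer backbone sees a single mixed sequence
        # from its very first layer, instead of fusing modalities later on.
        return torch.cat([image_tokens, text_tokens], dim=0)
```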
Rankings below are relative to other local LLMs.
| Benchmark | Score | Rank |
|---|---|---|
| StackEval (ProLLM Stack Eval) | 0.92 | 4 |
| QA Assistant (ProLLM QA Assistant) | 0.95 | 4 |
| Graduate-Level QA (GPQA) | 0.70 | 4 |
| Professional Knowledge (MMLU Pro) | 0.81 | 5 |
| StackUnseen (ProLLM Stack Unseen) | 0.32 | 9 |
| General Knowledge (MMLU) | 0.70 | 9 |
| Summarization (ProLLM Summarization) | 0.72 | 10 |
| Mathematics (LiveBench Mathematics) | 0.61 | 14 |
Overall Rank
#15
Coding Rank
#25
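VRAM requirements depend on the quantization method chosen for the weights and on the context size. Note that an MoE model must hold all 400B parameters in memory even though only 17B are active per token, so the total parameter count drives the weight footprint. A back-of-the-envelope sketch, using the layer count, KV-head count, and head dimension (12288 / 96 = 128) from the spec table above, and assuming an fp16 KV cache while ignoring activations and runtime overhead:

```python
def estimate_vram_gb(
    total_params_b: float = 400,  # total parameters, billions (spec above)
    bits_per_weight: int = 4,     # quantization: 16 = fp16, 8 = int8, 4 = int4
    n_layers: int = 120,          # from the spec table
    n_kv_heads: int = 8,          # from the spec table
    head_dim: int = 128,          # hidden size / attention heads = 12288 / 96
    context_tokens: int = 1024,
    kv_bytes: int = 2,            # assumes an fp16/bf16 KV cache
) -> float:
    """Rough VRAM estimate in GB: quantized weights plus KV cache."""
    weights_gb = total_params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: two tensors (K and V) per layer, per token
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context_tokens * kv_bytes / 1e9
    return weights_gb + kv_gb

print(f"{estimate_vram_gb():.1f} GB")  # ~200 GB at 4-bit, 1,024-token context
print(f"{estimate_vram_gb(bits_per_weight=16, context_tokens=1_000_000):.1f} GB")
```

Actual requirements vary by inference runtime and quantization scheme; treat these figures as lower bounds for planning rather than exact numbers.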