Total Parameters
400B
Context Length
1,000K
Modality
Multimodal
Architecture
Mixture of Experts (MoE)
License
Llama 4 Community License Agreement
Release Date
5 Apr 2025
Knowledge Cutoff
Aug 2024
Active Parameters (per token)
17.0B
Number of Experts
128
Active Experts per Token
2
Attention Structure
Grouped-Query Attention
Hidden Dimension Size
12288
Number of Layers
120
Attention Heads
96
Key-Value Heads
8
Activation Function
-
Normalization
RMS Normalization
Position Embedding
iRoPE
VRAM requirements for different quantization methods and context sizes
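Exact figures depend on the serving stack, but weight memory scales with total parameter count times bytes per weight, and the grouped-query-attention KV cache grows linearly with context length. The sketch below is only a back-of-the-envelope estimator under those assumptions; it reuses the layer, head, and dimension figures from the spec table above, ignores activations and runtime overhead, and its quantization list and function names are illustrative rather than measured values.

```python
# Back-of-the-envelope VRAM estimate: weights at a given quantization level
# plus a grouped-query-attention KV cache for a given context length.
# Activations and framework overhead are ignored, so treat the output as a
# rough lower bound rather than a measured requirement.

GIB = 1024**3

# Figures taken from the spec table above.
TOTAL_PARAMS = 400e9        # total parameters across all experts
N_LAYERS     = 120
N_KV_HEADS   = 8
HEAD_DIM     = 12288 // 96  # hidden dimension / attention heads

# Bits per weight for common quantization formats (illustrative list).
QUANT_BITS = {"FP16/BF16": 16, "INT8": 8, "INT4": 4}


def weight_gib(bits_per_weight: int) -> float:
    """GiB needed just to hold the model weights."""
    return TOTAL_PARAMS * bits_per_weight / 8 / GIB


def kv_cache_gib(context_len: int, bytes_per_elem: int = 2) -> float:
    """GiB for the K and V caches of a single sequence at the given length."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context_len * bytes_per_elem / GIB


if __name__ == "__main__":
    for ctx in (8_000, 128_000, 1_000_000):
        for name, bits in QUANT_BITS.items():
            total = weight_gib(bits) + kv_cache_gib(ctx)
            print(f"ctx={ctx:>9,}  {name:>9}: ~{total:,.0f} GiB")
```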
The Llama 4 Maverick model is a natively multimodal large language model developed by Meta, released as part of the Llama 4 model family. Its primary purpose is to deliver advanced capabilities in text and image understanding, supporting a wide range of applications including assistant-like conversational AI, creative content generation, complex reasoning, and code generation. Designed for both commercial and research deployment, Llama 4 Maverick aims to provide high-quality performance with improved cost efficiency.
From an architectural perspective, Llama 4 Maverick uses a Mixture-of-Experts (MoE) design, a significant departure from the dense transformers of earlier Llama generations. It comprises 400 billion total parameters, of which only 17 billion are active per token during inference. This efficiency comes from routing each token to a small subset of its 128 experts, with dense and MoE layers alternating throughout the network. The model integrates text and image inputs through an early-fusion mechanism, so multimodal information is processed jointly from the very first layers. The architecture also incorporates iRoPE to manage and scale long contexts.
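The "128 experts, 2 active per token" figures in the spec correspond to a layout in which a shared expert processes every token and a router selects one additional routed expert per token. The PyTorch sketch below illustrates that routing pattern in isolation; the dimensions, the plain top-1 softmax router, and the module structure are illustrative assumptions, not Meta's implementation.

```python
import torch
import torch.nn as nn


class MoEFeedForward(nn.Module):
    """Toy MoE block: a shared expert sees every token, and a router sends
    each token to exactly one of `n_experts` routed experts, giving the
    '2 active experts per token' pattern listed in the spec above."""

    def __init__(self, d_model: int = 512, d_ff: int = 1024, n_experts: int = 128):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.shared_expert = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model)
        )
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model) -- flatten batch and sequence dims beforehand.
        logits = self.router(x)                       # (tokens, n_experts)
        weights, chosen = logits.softmax(-1).max(-1)  # top-1 routed expert per token
        routed = torch.zeros_like(x)
        for e in chosen.unique().tolist():            # dispatch tokens expert by expert
            mask = chosen == e
            routed[mask] = weights[mask].unsqueeze(-1) * self.experts[e](x[mask])
        return self.shared_expert(x) + routed         # shared + one routed expert


# Usage: push 8 random token embeddings through the block.
layer = MoEFeedForward()
print(layer(torch.randn(8, 512)).shape)  # torch.Size([8, 512])
```

Only the selected expert's weights do work for a given token, which is why the active parameter count (17B) is far smaller than the total parameter count (400B).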
Llama 4 Maverick demonstrates robust performance across diverse benchmarks spanning coding, reasoning, and multilingual tasks, as well as long-context processing and image understanding. It is engineered for high throughput and suits production environments that demand low latency and high accuracy, making it well suited to deployments that require sophisticated multimodal interaction and efficient resource use.
Meta's Llama 4 model family implements a Mixture-of-Experts (MoE) architecture for efficient scaling and achieves native multimodality through early fusion of text, image, and video inputs. The family also extends context length substantially, with Llama 4 Scout able to process up to 10 million tokens.
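Early fusion means that image (and video-frame) representations are placed in the same token sequence as text before the first transformer layer, rather than being attended to through a separate cross-attention branch. The sketch below shows only this front-end idea; the patch projection, dimensions, and concatenation order are illustrative assumptions and do not reflect Meta's actual vision encoder.

```python
import torch
import torch.nn as nn


class EarlyFusionEmbedder(nn.Module):
    """Toy early-fusion front end: text token IDs and image patches are
    embedded separately, projected to a shared width, and concatenated
    into one sequence that a standard decoder stack would then process."""

    def __init__(self, vocab_size: int = 32000, d_model: int = 512,
                 patch_dim: int = 3 * 16 * 16):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.patch_proj = nn.Linear(patch_dim, d_model)  # stand-in vision encoder

    def forward(self, text_ids: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # text_ids: (batch, text_len); patches: (batch, n_patches, patch_dim)
        text_tokens = self.text_embed(text_ids)
        image_tokens = self.patch_proj(patches)
        # One fused sequence from the very first layer -- the "early" in early fusion.
        return torch.cat([image_tokens, text_tokens], dim=1)


# Usage: fuse 64 image patches with a 10-token text prompt.
embedder = EarlyFusionEmbedder()
fused = embedder(torch.randint(0, 32000, (1, 10)), torch.randn(1, 64, 3 * 16 * 16))
print(fused.shape)  # torch.Size([1, 74, 512])
```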
Rankings apply to local LLMs.
Rank
#15
| Benchmark | Score | Rank |
|---|---|---|
| StackEval (ProLLM Stack Eval) | 0.92 | 4 |
| QA Assistant (ProLLM QA Assistant) | 0.95 | 4 |
| Graduate-Level QA (GPQA) | 0.70 | ⭐ 4 |
| Professional Knowledge (MMLU Pro) | 0.81 | 5 |
| StackUnseen (ProLLM Stack Unseen) | 0.32 | 9 |
| General Knowledge (MMLU) | 0.70 | 9 |
| Summarization (ProLLM Summarization) | 0.72 | 10 |
| Mathematics (LiveBench Mathematics) | 0.61 | 14 |