
Llama 4 Maverick

Total Parameters: 400B
Context Length: 1M tokens
Modality: Multimodal
Architecture: Mixture of Experts (MoE)
License: Llama 4 Community License Agreement
Release Date: 5 Apr 2025
Knowledge Cutoff: Aug 2024

Technical Specifications

Active Parameters: 17.0B
Number of Experts: 128
Active Experts: 2
Attention Structure: Grouped-Query Attention (see the sketch below)
Hidden Dimension Size: 12288
Number of Layers: 120
Attention Heads: 96
Key-Value Heads: 8
Activation Function: -
Normalization: RMS Normalization
Position Embedding: iRoPE
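Grouped-Query Attention shares each key/value head across a group of query heads: with 96 query heads and 8 KV heads, each KV head serves 12 query heads, shrinking the KV cache 12x relative to full multi-head attention. A minimal shape-level sketch in PyTorch (illustrative only, not Meta's implementation; the head dimension of 128 follows from 12288 / 96):

```python
import torch

n_q_heads, n_kv_heads, head_dim, seq_len = 96, 8, 128, 16
group = n_q_heads // n_kv_heads  # 12 query heads share each KV head

q = torch.randn(seq_len, n_q_heads, head_dim)
k = torch.randn(seq_len, n_kv_heads, head_dim)  # KV cache stores only 8 heads
v = torch.randn(seq_len, n_kv_heads, head_dim)

# Expand each KV head across its group of query heads before attention
k = k.repeat_interleave(group, dim=1)           # (seq_len, 96, head_dim)
v = v.repeat_interleave(group, dim=1)

scores = torch.einsum('qhd,khd->hqk', q, k) / head_dim ** 0.5
out = torch.einsum('hqk,khd->qhd', scores.softmax(dim=-1), v)
print(out.shape)  # torch.Size([16, 96, 128])
```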

System Requirements

VRAM requirements for different quantization methods and context sizes.
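As a rough guide to how those numbers combine: weight memory scales with parameter count times bits per weight, and the KV cache grows linearly with context length. The sketch below is our own simplification, not the site's calculator; it ignores activation memory and covers framework overhead with a flat multiplier:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: int, context_len: int,
                     n_layers: int = 120, n_kv_heads: int = 8,
                     head_dim: int = 128, overhead: float = 1.2) -> float:
    """Rough inference VRAM estimate: quantized weights plus an fp16 KV cache."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    # K and V, per layer, per KV head, per position, 2 bytes each (fp16)
    kv_bytes = 2 * n_layers * n_kv_heads * head_dim * context_len * 2
    return (weight_bytes + kv_bytes) * overhead / 1e9

# 400B weights at 4-bit with a 128K-token context:
print(f"{estimate_vram_gb(400, 4, 128 * 1024):.0f} GB")  # ~317 GB
```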

Llama 4 Maverick

The Llama 4 Maverick model is a natively multimodal large language model developed by Meta, released as part of the Llama 4 model family. Its primary purpose is to deliver advanced capabilities in text and image understanding, supporting a wide range of applications including assistant-like conversational AI, creative content generation, complex reasoning, and code generation. Designed for both commercial and research deployment, Llama 4 Maverick aims to provide high-quality performance with improved cost efficiency.

From an architectural perspective, Llama 4 Maverick uses a Mixture-of-Experts (MoE) design, a significant departure from Meta's previous dense transformer models. It comprises 400 billion total parameters, of which only 17 billion are active per token during inference. Routing selects among 128 experts, and dense and MoE layers alternate through the network. Text and image inputs are integrated through an early fusion mechanism, so multimodal processing begins at the earliest stages of the model. The architecture also incorporates interleaved rotary position embeddings (iRoPE) to manage and scale long contexts.
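As a concrete sketch of that routing pattern, the snippet below implements a top-1 routed MoE layer plus a shared expert, matching the two active experts per token listed in the specification table. It is a minimal illustration, not Meta's code; the SiLU activation is our assumption, since the spec table leaves the activation function unspecified:

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Sketch of a routed MoE layer: each token activates one routed expert
    (of n_experts) plus one shared expert, i.e. two active experts per token."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 128):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        make_ffn = lambda: nn.Sequential(
            nn.Linear(d_model, d_ff), nn.SiLU(),  # SiLU is an assumption here
            nn.Linear(d_ff, d_model))
        self.experts = nn.ModuleList(make_ffn() for _ in range(n_experts))
        self.shared = make_ffn()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model); route each token to its single best expert
        gate = self.router(x).softmax(dim=-1)    # (n_tokens, n_experts)
        weight, idx = gate.max(dim=-1)           # top-1 routing
        out = self.shared(x)                     # shared expert sees every token
        for e in idx.unique():                   # routed experts run sparsely
            mask = idx == e
            out[mask] = out[mask] + weight[mask, None] * self.experts[e](x[mask])
        return out
```

In the full model, dense FFN layers alternate with MoE layers like this one, so only a small fraction of the 400B parameters participates in any given forward pass.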

Llama 4 Maverick demonstrates robust performance across diverse benchmarks, including coding, reasoning, and multilingual tasks, as well as long-context processing and image understanding. It is engineered for high throughput and suits production environments that demand low latency and precise outputs. Its design supports deployment in scenarios requiring sophisticated multimodal interaction and efficient resource utilization.

About Llama 4

Meta's Llama 4 model family implements a Mixture-of-Experts (MoE) architecture for efficient scaling. It features native multimodality through early fusion of text, images, and video. This iteration also supports significantly extended context lengths, with models capable of processing up to 10 million tokens.
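Early fusion here means that image (and video-frame) patch embeddings enter the same token sequence as text embeddings before the first transformer layer, rather than being attached through a separate cross-attention stage. A hypothetical minimal front end (names and shapes are illustrative, not Meta's):

```python
import torch
import torch.nn as nn

class EarlyFusionEmbedder(nn.Module):
    """Hypothetical early-fusion front end: image patches are projected into
    the text embedding space and concatenated, so a single transformer stack
    attends over both modalities from layer one."""
    def __init__(self, vocab_size: int, d_model: int, patch_dim: int):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.patch_proj = nn.Linear(patch_dim, d_model)

    def forward(self, text_ids: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # text_ids: (n_text,), patches: (n_patches, patch_dim)
        fused = torch.cat([self.patch_proj(patches), self.tok_embed(text_ids)], dim=0)
        return fused  # (n_patches + n_text, d_model): one sequence, one model
```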



Evaluation Benchmarks

Rankings apply to local LLMs.

Overall Rank: #15
Coding Rank: #25

Benchmark Scores and Rankings

Benchmark                          Score   Rank
-                                  0.92    4
-                                  0.95    4
Graduate-Level QA (GPQA)           0.70    4
Professional Knowledge (MMLU Pro)  0.81    5
-                                  0.32    9
General Knowledge (MMLU)           0.70    9
-                                  0.72    10
-                                  0.61    14

GPU Requirements

Required VRAM depends on the quantization method chosen for the model weights and on the context size, which ranges from 1K up to 977K tokens. Use the full calculator to see the required VRAM and recommended GPUs for a given configuration.
