Llama 3.1 70B

Parameters

70B

Context Length

128K

Modality

Text

Architecture

Dense

License

Llama 3.1 Community License Agreement

Release Date

23 Jul 2024

Knowledge Cutoff

Dec 2023

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

8192

Number of Layers

80

Attention Heads

64

Key-Value Heads

8

Activation Function

-

Normalization

-

Positional Embedding

RoPE
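
The figures above also determine the size of the model's KV cache. With grouped-query attention, only the 8 key-value heads (head dimension 8192 / 64 = 128) are cached per layer, an 8x saving over caching all 64 query heads. A rough back-of-the-envelope sketch, assuming an fp16 cache:

```python
# Rough KV-cache size estimate from the specs above (fp16 cache assumed;
# actual serving stacks may differ).
n_layers = 80
hidden = 8192
n_heads = 64
n_kv_heads = 8                  # GQA: 8 KV heads shared by 64 query heads
head_dim = hidden // n_heads    # 128
bytes_fp16 = 2

# Per token: keys + values, for each KV head, in every layer
bytes_per_token = 2 * n_kv_heads * head_dim * n_layers * bytes_fp16
print(bytes_per_token)          # 327680 bytes, i.e. 320 KiB per token

# At the full 128K (131072-token) context:
cache_gib = bytes_per_token * 131072 / 2**30
print(cache_gib)                # 40.0 GiB
```

With full multi-head attention (64 KV heads) the same cache would be 8x larger, which is why GQA matters at a 128K context.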

System Requirements

VRAM requirements for different quantization methods and context sizes

Llama 3.1 70B

Llama 3.1 70B is a large language model developed by Meta, designed to address a wide array of natural language processing tasks. This model variant builds upon its predecessors by offering enhanced capabilities across various applications. Its primary purpose includes facilitating content generation, powering conversational AI systems, performing sentiment analysis, and supporting code generation. The model is structured to be suitable for deployment in both research and enterprise environments, providing a robust foundation for diverse AI-native applications.

Architecturally, Llama 3.1 70B employs an optimized dense Transformer network. A significant technical advancement in this iteration is the expansion of its context length to 128,000 tokens, representing a substantial increase over previous Llama 3 models. This enables the model to process and generate coherent responses from extensive textual inputs, supporting advanced use cases requiring long-form context understanding. Furthermore, Llama 3.1 70B incorporates enhanced multilingual capabilities, enabling it to operate effectively in several languages beyond English, including German, French, Italian, Portuguese, Hindi, Spanish, and Thai. The model's training incorporates advanced techniques such as supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF), which contribute to its capacity for instruction following and contextual relevance.
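
The grouped-query attention used by this architecture (64 query heads sharing 8 key-value heads, per the specifications above) can be sketched in a few lines. This is illustrative only, not Meta's implementation; a contiguous grouping of 8 query heads per KV head is assumed:

```python
import numpy as np

# Minimal grouped-query attention sketch (illustrative; assumes contiguous
# grouping of query heads, not necessarily Meta's exact layout).
n_q_heads, n_kv_heads, head_dim, seq_len = 64, 8, 128, 4
group = n_q_heads // n_kv_heads  # 8 query heads share each KV head

rng = np.random.default_rng(0)
q = rng.standard_normal((n_q_heads, seq_len, head_dim))
k = rng.standard_normal((n_kv_heads, seq_len, head_dim))
v = rng.standard_normal((n_kv_heads, seq_len, head_dim))

# Broadcast each KV head to its group of query heads
k_exp = np.repeat(k, group, axis=0)  # (64, seq_len, head_dim)
v_exp = np.repeat(v, group, axis=0)

# Scaled dot-product attention with softmax over the key axis
scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v_exp
print(out.shape)  # (64, 4, 128)
```

Only the 8 K/V tensors need to be stored and cached; the repeat is a cheap broadcast at compute time, which is where GQA's memory savings come from.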

In terms of performance characteristics and use cases, Llama 3.1 70B is engineered for high performance in large-scale AI applications. Its expanded context window and multilingual support make it suitable for tasks such as comprehensive text summarization, development of sophisticated multilingual conversational agents, and creation of coding assistants. The model supports a variety of common natural language generation tasks, making it a versatile tool for developers and organizations aiming to integrate cutting-edge AI technology into their workflows.

About Llama 3.1

Llama 3.1 is Meta's advanced large language model family, building upon Llama 3. It features an optimized decoder-only transformer architecture, available in 8B, 70B, and 405B parameter versions. Significant enhancements include an expanded 128K token context window and improved multilingual capabilities across eight languages, refined through data and post-training procedures.



Evaluation Benchmarks

Rankings apply to local LLMs.

Rank: #31

Category                 Benchmark   Score   Rank
-                        -           0.95    3 🥉
General Knowledge        MMLU        0.80    5
-                        -           0.59    7
-                        -           0.92    8
-                        -           0.59    11
-                        -           0.60    13
Professional Knowledge   MMLU Pro    0.66    15
-                        -           0.54    17
Graduate-Level QA        GPQA        0.42    23
-                        -           0.30    24
-                        -           0.33    27
-                        -           0.20    28

Coding Ranking

#24

GPU Requirements

VRAM requirements depend on the quantization method selected for the model weights and on the context size (from 1K up to the full 128K tokens).
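
As a rough rule of thumb, weight memory scales linearly with bits per parameter. A minimal sketch for a 70B-parameter model (weights only; the KV cache and runtime overhead add more on top):

```python
# Rough weight-memory estimate for a 70B-parameter model under common
# quantization levels. Weights only; KV cache and activations are extra.
params = 70e9
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

weight_gb = {quant: params * b / 1e9 for quant, b in bytes_per_param.items()}
for quant, gb in weight_gb.items():
    print(f"{quant}: ~{gb:.0f} GB of weights")
# fp16: ~140 GB, int8: ~70 GB, int4: ~35 GB
```

This is why the 70B model typically requires multiple GPUs at fp16 but can fit on a single high-memory GPU at 4-bit quantization, context size permitting.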