| Specification | Value |
|---|---|
| Parameters | 70B |
| Context Length | 128K |
| Modality | Text |
| Architecture | Dense |
| License | Llama 3.1 Community License Agreement |
| Release Date | 23 Jul 2024 |
| Knowledge Cutoff | Dec 2023 |
| Attention Structure | Grouped-Query Attention |
| Hidden Dimension Size | 8192 |
| Number of Layers | 80 |
| Attention Heads | 64 |
| Key-Value Heads | 8 |
| Activation Function | - |
| Normalization | - |
| Positional Embedding | RoPE |
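The spec table lists Grouped-Query Attention with 64 query heads sharing 8 key-value heads, i.e. each KV head serves a group of 8 query heads, shrinking the KV cache 8x versus full multi-head attention. The following is an illustrative NumPy sketch of that head-sharing pattern using the published shapes (it is not Meta's implementation; batch, masking, and RoPE are omitted for brevity):

```python
import numpy as np

# Published Llama 3.1 70B shapes: 64 query heads, 8 KV heads, hidden 8192.
n_q_heads, n_kv_heads = 64, 8
head_dim = 8192 // n_q_heads          # 128
group = n_q_heads // n_kv_heads       # 8 query heads share each KV head

seq = 4  # toy sequence length for illustration
rng = np.random.default_rng(0)
q = rng.standard_normal((n_q_heads, seq, head_dim))
k = rng.standard_normal((n_kv_heads, seq, head_dim))
v = rng.standard_normal((n_kv_heads, seq, head_dim))

# Broadcast each KV head to its group of query heads.
k_exp = np.repeat(k, group, axis=0)   # (64, seq, head_dim)
v_exp = np.repeat(v, group, axis=0)

# Scaled dot-product attention per query head.
scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)
out = weights @ v_exp                 # (64, seq, head_dim)
```

The memory saving shows up in the KV cache: only the 8 KV heads are stored per layer, while the 64 query heads are recomputed each step.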
VRAM Requirements by Quantization Method and Context Size
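No concrete figures are given here, but a back-of-the-envelope estimate follows from the spec table above (80 layers, 8 KV heads, head dimension 128). This is a rough sketch only: it counts weights plus KV cache and ignores activations and framework overhead, so real deployments need headroom beyond these numbers.

```python
# Rough VRAM estimate for Llama 3.1 70B: weights + KV cache only.
# Ignores activation memory and runtime overhead.
PARAMS = 70e9
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128  # from the spec table

def vram_gb(bits_per_weight, context_tokens, kv_bits=16):
    weights = PARAMS * bits_per_weight / 8
    # KV cache: 2 tensors (K and V) per layer, per KV head, per token.
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * context_tokens * kv_bits / 8
    return (weights + kv_cache) / 1024**3

print(f"FP16,  8K ctx:  {vram_gb(16, 8192):.0f} GB")
print(f"INT4,  8K ctx:  {vram_gb(4, 8192):.0f} GB")
print(f"INT4, 128K ctx: {vram_gb(4, 131072):.0f} GB")
```

Note how grouped-query attention keeps the 128K-context KV cache tractable: with 64 KV heads instead of 8, the cache term would be eight times larger.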
Llama 3.1 70B is a large language model developed by Meta, designed to address a wide array of natural language processing tasks. This model variant builds upon its predecessors by offering enhanced capabilities across various applications. Its primary purpose includes facilitating content generation, powering conversational AI systems, performing sentiment analysis, and supporting code generation. The model is structured to be suitable for deployment in both research and enterprise environments, providing a robust foundation for diverse AI-native applications.
Architecturally, Llama 3.1 70B employs an optimized dense Transformer network. A significant technical advancement in this iteration is the expansion of its context length to 128,000 tokens, a substantial increase over previous Llama 3 models. This enables the model to process and generate coherent responses from extensive textual inputs, supporting advanced use cases that require long-form context understanding. Llama 3.1 70B also incorporates enhanced multilingual capabilities, operating effectively in several languages beyond English, including German, French, Italian, Portuguese, Hindi, Spanish, and Thai. Its training incorporates supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which contribute to its instruction following and contextual relevance.
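The long-context capability rests on the rotary position embeddings (RoPE) listed in the spec table. The sketch below is an illustrative NumPy implementation of the standard RoPE rotation, using the base 500000 reported for Llama 3.1 (not Meta's code; the scaled-RoPE adjustments used for the 128K extension are omitted):

```python
import numpy as np

def rope(x, pos, base=500000.0):
    """Rotate consecutive dimension pairs of x by position-dependent angles."""
    d = x.shape[-1]
    inv_freq = base ** (-np.arange(0, d, 2) / d)  # one frequency per pair
    angles = pos * inv_freq
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

The key property is relativity: the dot product between a query rotated at position m and a key rotated at position n depends only on the offset n - m, which is what lets attention generalize across absolute positions.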
In terms of performance characteristics and use cases, Llama 3.1 70B is engineered for high performance in large-scale AI applications. Its expanded context window and multilingual support make it suitable for tasks such as comprehensive text summarization, development of sophisticated multilingual conversational agents, and creation of coding assistants. The model supports a variety of common natural language generation tasks, making it a versatile tool for developers and organizations aiming to integrate cutting-edge AI technology into their workflows.
Llama 3.1 is Meta's advanced large language model family, building upon Llama 3. It features an optimized decoder-only transformer architecture, available in 8B, 70B, and 405B parameter versions. Significant enhancements include an expanded 128K token context window and improved multilingual capabilities across eight languages, refined through data and post-training procedures.
Rankings apply to local LLMs.

Rank: #31
| Benchmark | Score | Rank |
|---|---|---|
| StackEval (ProLLM Stack Eval) | 0.95 | 🥉 3 |
| General Knowledge (MMLU) | 0.80 | ⭐ 5 |
| Refactoring (Aider Refactoring) | 0.59 | 7 |
| QA Assistant (ProLLM QA Assistant) | 0.92 | 8 |
| Coding (Aider Coding) | 0.59 | 11 |
| Summarization (ProLLM Summarization) | 0.60 | 13 |
| Professional Knowledge (MMLU Pro) | 0.66 | 15 |
| Data Analysis (LiveBench Data Analysis) | 0.54 | 17 |
| Graduate-Level QA (GPQA) | 0.42 | 23 |
| Reasoning (LiveBench Reasoning) | 0.30 | 24 |
| Mathematics (LiveBench Mathematics) | 0.33 | 27 |
| Coding (LiveBench Coding) | 0.20 | 28 |