Llama 3 8B

Parameters: 8B
Context Length: 8,192 tokens
Modality: Text
Architecture: Dense
License: Meta Llama 3 Community License Agreement
Release Date: 18 Apr 2024
Knowledge Cutoff: Mar 2023

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 4096
Layers: 32
Attention Heads: 32
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMS Normalization
Position Embedding: RoPE
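For reference, these hyperparameters can be gathered into a single configuration record. The sketch below is hypothetical: the field names are illustrative and do not reflect Meta's actual configuration schema.

```python
# Hypothetical configuration record collecting the specification values
# above; field names are illustrative, not Meta's actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Llama3_8BConfig:
    hidden_size: int = 4096
    num_layers: int = 32
    num_attention_heads: int = 32
    num_key_value_heads: int = 8      # GQA: 4 query heads share each KV head
    context_length: int = 8192
    vocab_size: int = 128_000         # tokenizer vocabulary (see overview below)
    activation: str = "SwiGLU"
    normalization: str = "RMSNorm"
    position_embedding: str = "RoPE"

cfg = Llama3_8BConfig()
head_dim = cfg.hidden_size // cfg.num_attention_heads  # 128
```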

System Requirements

VRAM requirements depend on the quantization method and the context size.
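As a back-of-the-envelope guide: weight memory is roughly parameters × bytes per parameter, and the grouped-query KV cache adds 2 × layers × KV heads × head dim × sequence length × bytes per element. The Python sketch below uses the specification values above and is an estimate only; activations and framework overhead are excluded, so treat the numbers as a lower bound.

```python
# Rough VRAM estimate for Llama 3 8B; excludes activations and runtime
# overhead, so real usage will be somewhat higher.
PARAMS = 8e9
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def kv_cache_bytes(seq_len: int, layers: int = 32, kv_heads: int = 8,
                   head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    # 2x for keys and values, cached at every layer for every position.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

for quant, b in BYTES_PER_PARAM.items():
    weights_gib = PARAMS * b / 2**30
    kv_gib = kv_cache_bytes(8192) / 2**30      # ~1 GiB at full 8K context
    print(f"{quant}: weights ~{weights_gib:.1f} GiB + KV cache ~{kv_gib:.1f} GiB")
```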

Llama 3 8B

Meta Llama 3 is a foundational large language model developed by Meta AI, designed to facilitate advanced text and code generation across a diverse range of applications. It is made available in multiple parameter scales, including an 8 billion parameter variant, and is provided in both pre-trained and instruction-tuned forms. The architecture is engineered for scalability and responsible deployment in artificial intelligence systems, supporting various use cases from assistant-style conversational agents to complex natural language processing research tasks.
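As an illustration of assistant-style use, the instruction-tuned variant can be run through the Hugging Face transformers library. This is a minimal sketch assuming access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint on the Hugging Face Hub.

```python
# Minimal inference sketch; assumes approved access to the gated
# meta-llama/Meta-Llama-3-8B-Instruct repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain grouped-query attention briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```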

The model employs a decoder-only transformer architecture with several technical enhancements over its predecessors. Key changes include an optimized tokenizer with a 128,000-token vocabulary, which encodes text more efficiently. The model also adopts Grouped-Query Attention (GQA) in both its 8 billion and 70 billion parameter versions to improve inference efficiency. For training stability, Llama 3 applies Root Mean Square Normalization (RMSNorm) as pre-normalization and uses the SwiGLU activation function. Positional information is injected through Rotary Positional Embeddings (RoPE).
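To make the GQA modification concrete, the toy PyTorch sketch below shares 8 key/value heads across 32 query heads, matching the dimensions listed in the specifications above. It is illustrative only, not Meta's implementation.

```python
# Toy grouped-query attention: 32 query heads share 8 KV heads, so the
# KV projections (and KV cache) are 4x smaller than full multi-head.
import torch
import torch.nn.functional as F

hidden, n_heads, n_kv_heads, seq = 4096, 32, 8, 16
head_dim = hidden // n_heads            # 128
groups = n_heads // n_kv_heads          # 4 query heads per KV head

x = torch.randn(1, seq, hidden)
wq = torch.randn(hidden, n_heads * head_dim)     # stand-in projection weights
wk = torch.randn(hidden, n_kv_heads * head_dim)
wv = torch.randn(hidden, n_kv_heads * head_dim)

Q = (x @ wq).view(1, seq, n_heads, head_dim).transpose(1, 2)
K = (x @ wk).view(1, seq, n_kv_heads, head_dim).transpose(1, 2)
V = (x @ wv).view(1, seq, n_kv_heads, head_dim).transpose(1, 2)

# Each KV head serves `groups` query heads: repeat along the head axis.
K = K.repeat_interleave(groups, dim=1)
V = V.repeat_interleave(groups, dim=1)

out = F.scaled_dot_product_attention(Q, K, V, is_causal=True)
print(out.shape)  # torch.Size([1, 32, 16, 128])
```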

Llama 3 8B was pre-trained on a corpus exceeding 15 trillion tokens sourced from publicly available datasets, a substantial increase in training data volume over prior Llama iterations, and supports a context length of 8,192 tokens. It generates coherent text, assists with code completion, and handles conversational tasks; multilingual support and tool use arrive in later iterations (Llama 3.1).
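The pre-normalization and SwiGLU design mentioned above can also be sketched compactly. The minimal PyTorch module below is illustrative; the 14,336 feed-forward width is the commonly reported value for Llama 3 8B and should be treated as an assumption here.

```python
# Sketch of a pre-normalized SwiGLU feed-forward block: RMSNorm is applied
# before the FFN, and a SiLU-gated branch multiplies a linear "up" branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # Scale by the root-mean-square of the features; no mean centering.
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class SwiGLUFFN(nn.Module):
    def __init__(self, dim: int = 4096, hidden: int = 14336):  # assumed FFN width
        super().__init__()
        self.norm = RMSNorm(dim)  # pre-normalization, as in Llama 3
        self.w_gate = nn.Linear(dim, hidden, bias=False)
        self.w_up = nn.Linear(dim, hidden, bias=False)
        self.w_down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        h = self.norm(x)
        return x + self.w_down(F.silu(self.w_gate(h)) * self.w_up(h))  # residual

print(SwiGLUFFN()(torch.randn(1, 4, 4096)).shape)  # torch.Size([1, 4, 4096])
```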

About Llama 3

Meta's Llama 3 is a series of large language models utilizing a decoder-only transformer architecture. It incorporates a 128K token vocabulary and Grouped Query Attention for efficient processing. Models are trained on substantial public datasets, supporting various parameter scales and extended context lengths.
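The 128K-token vocabulary can be inspected directly through the tokenizer. A quick check, again assuming approved access to the gated meta-llama repository on the Hugging Face Hub:

```python
# Quick vocabulary check via the tokenizer (gated repo; access required).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
print(tok.vocab_size)                    # on the order of 128k
print(tok("Hello, Llama 3!").input_ids)  # token ids for a short string
```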



Evaluation Benchmarks

No evaluation benchmarks are available for Llama 3 8B.

