
Llama 3 8B

Parameters: 8B

Context Length: 8,192 tokens

Modality: Text

Architecture: Dense

License: Meta Llama 3 Community License Agreement

Release Date: 18 Apr 2024

Knowledge Cutoff: Mar 2023

Technical Specifications

Attention Structure: Grouped-Query Attention

Hidden Dimension Size: 4096

Number of Layers: 32

Attention Heads: 32

Key-Value Heads: 8

Activation Function: SwiGLU

Normalization: RMSNorm

Position Embedding: RoPE
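For reference, these hyperparameters can be gathered into a single configuration object. The sketch below is illustrative only; the class and field names are not Meta's, and the vocabulary size is the commonly reported exact figure (~128K) rather than the rounded 128,000 mentioned in the description below.

```python
from dataclasses import dataclass

@dataclass
class Llama3_8BConfig:
    # Illustrative summary of the specification table above; names are not Meta's.
    vocab_size: int = 128_256            # ~128K-token tokenizer vocabulary
    hidden_size: int = 4096              # hidden dimension size
    num_layers: int = 32                 # transformer decoder blocks
    num_attention_heads: int = 32        # query heads
    num_key_value_heads: int = 8         # shared KV heads (grouped-query attention)
    head_dim: int = 4096 // 32           # 128 dimensions per attention head
    max_position_embeddings: int = 8192  # context length
```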

System Requirements

VRAM requirements for different quantization methods and context sizes
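As a rough rule of thumb, weight memory is the parameter count times the bytes per parameter for the chosen quantization, and the key-value (KV) cache adds 2 × layers × KV heads × head dimension × context length × bytes per element on top of that. The function below is a back-of-envelope sketch with illustrative defaults, not a precise measurement for any particular runtime.

```python
def estimate_vram_gib(
    n_params: float = 8e9,          # Llama 3 8B parameter count
    bytes_per_weight: float = 2.0,  # fp16/bf16 = 2, int8 ~ 1, 4-bit ~ 0.5
    n_layers: int = 32,
    n_kv_heads: int = 8,
    head_dim: int = 128,            # hidden size 4096 / 32 attention heads
    context_len: int = 8192,
    kv_bytes: float = 2.0,          # KV cache kept in fp16/bf16
    overhead_gib: float = 1.0,      # rough allowance for activations and runtime buffers
) -> float:
    """Back-of-envelope VRAM estimate in GiB: weights + KV cache + overhead."""
    weight_bytes = n_params * bytes_per_weight
    # Keys and values (factor of 2) for every layer, KV head, and position.
    kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes
    return (weight_bytes + kv_cache_bytes) / 2**30 + overhead_gib

# Example: 4-bit quantized weights with the full 8,192-token context (~5-6 GiB).
print(f"{estimate_vram_gib(bytes_per_weight=0.5):.1f} GiB")
```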

Llama 3 8B

Meta Llama 3 is a foundational large language model developed by Meta AI, designed to facilitate advanced text and code generation across a diverse range of applications. It is made available in multiple parameter scales, including an 8 billion parameter variant, and is provided in both pre-trained and instruction-tuned forms. The architecture is engineered for scalability and responsible deployment in artificial intelligence systems, supporting various use cases from assistant-style conversational agents to complex natural language processing research tasks.

The model employs a decoder-only transformer architecture with several technical enhancements over its predecessors. Key changes include an optimized tokenizer with a 128,000-token vocabulary, which encodes text more efficiently than earlier Llama tokenizers. The model also adopts Grouped-Query Attention (GQA) in both its 8 billion and 70 billion parameter versions to improve inference efficiency. For training stability, Llama 3 applies Root Mean Square Normalization (RMSNorm) as pre-normalization and uses the SwiGLU activation function, while positional information is encoded with Rotary Positional Embeddings (RoPE).
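The following is a minimal PyTorch sketch of these components (RMSNorm, a SwiGLU feed-forward block, and the KV-head expansion used by grouped-query attention). It is illustrative only and not Meta's reference implementation; the dimensions referenced in the comments follow the specification table above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root Mean Square normalization, applied as pre-normalization in each block."""
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale activations by the reciprocal of their root-mean-square, then apply a learned gain.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight

class SwiGLU(nn.Module):
    """Feed-forward block with the SwiGLU activation: w2(silu(w1(x)) * w3(x))."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)  # gate projection
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)  # up projection
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)  # down projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

def repeat_kv(kv: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Grouped-query attention: each of the 8 KV heads is shared by 32 / 8 = 4 query heads.

    Expands (batch, kv_heads, seq, head_dim) to (batch, kv_heads * n_rep, seq, head_dim)
    so that standard multi-head attention can be applied afterwards.
    """
    b, h_kv, s, d = kv.shape
    return kv[:, :, None, :, :].expand(b, h_kv, n_rep, s, d).reshape(b, h_kv * n_rep, s, d)
```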

Llama 3 8B was pre-trained on a corpus exceeding 15 trillion tokens sourced from publicly available data, a substantial increase in training data volume over prior Llama iterations, and supports a context length of 8,192 tokens. It generates coherent text, assists with code completion, and handles conversational tasks; multilingual support and tool use were extended in later iterations (Llama 3.1).
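A minimal usage sketch with the Hugging Face transformers library is shown below. It assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint has been granted and that a GPU with sufficient VRAM is available.

```python
# Minimal usage sketch with Hugging Face transformers; assumes access to the gated
# meta-llama/Meta-Llama-3-8B-Instruct checkpoint and a GPU with enough VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain grouped-query attention in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading the weights in bfloat16 needs roughly 16 GB of VRAM (8B parameters × 2 bytes per weight); quantized builds reduce this considerably, as estimated in the System Requirements section above.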

About Llama 3

Meta's Llama 3 is a series of large language models utilizing a decoder-only transformer architecture. It incorporates a 128K token vocabulary and Grouped Query Attention for efficient processing. Models are trained on substantial public datasets, supporting various parameter scales and extended context lengths.



Evaluation Benchmarks


No evaluation benchmarks are available for Llama 3 8B.

