
Llama 3.2 3B

Parameters: 3B
Context Length: 128K
Modality: Text
Architecture: Dense
License: Llama 3.2 Community License
Release Date: 25 Sept 2024
Knowledge Cutoff: Dec 2023

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 3072
Number of Layers: 28
Attention Heads: 24
Key-Value Heads: 8
Activation Function: SiLU (SwiGLU)
Normalization: RMSNorm
Position Embedding: RoPE


Llama 3.2 3B

Llama 3.2 3B is a compact, instruction-tuned, text-only generative language model developed by Meta. It is part of the Llama 3.2 model family, which also includes a 1-billion-parameter text model and larger multimodal variants. The model is designed for efficient deployment in resource-constrained environments such as edge and mobile devices. Its primary purpose is to enable scalable assistant and agentic language applications, with capabilities for tasks such as summarization, instruction following, rewriting, and knowledge retrieval. The model supports multilingual interaction, with official support for eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

The architectural foundation of Llama 3.2 3B is an auto-regressive transformer. A key design choice is Grouped-Query Attention (GQA), which shares each key/value head across several query heads to improve inference throughput and scalability without a proportional increase in memory demands. Training involved knowledge distillation from the larger Llama 3.1 8B and 70B models, whose output logits served as token-level targets during pre-training to recover performance after pruning. Post-training alignment of the instruction-tuned versions uses supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). In addition, the officially released quantized variants employ 4-bit groupwise quantization for transformer block weights and 8-bit per-token dynamic quantization for activations, targeting on-device runtimes such as PyTorch's ExecuTorch framework.
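A minimal sketch of grouped-query attention follows, with shapes matching the specification above (24 query heads, 8 key/value heads, head dimension 128). It is illustrative only, not Meta's implementation; the causal mask and rotary embeddings are omitted for brevity.

```python
import torch

def grouped_query_attention(q, k, v):
    # q: (batch, n_q_heads, seq, head_dim)
    # k, v: (batch, n_kv_heads, seq, head_dim), with n_kv_heads < n_q_heads
    n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
    group = n_q_heads // n_kv_heads  # 24 // 8 = 3 query heads per KV head
    # Broadcast each KV head to all query heads in its group.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

q = torch.randn(1, 24, 16, 128)  # 24 query heads over a 16-token sequence
k = torch.randn(1, 8, 16, 128)   # only the 8 KV heads are ever cached
v = torch.randn(1, 8, 16, 128)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 24, 16, 128])
```

Because only 8 of the 24 heads need cached keys and values during decoding, the KV cache is a third the size of full multi-head attention, which is where the throughput gain comes from.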

Llama 3.2 3B is engineered for robust on-device performance, balancing computational efficiency with output quality. It features an extended context window of 128,000 tokens, enabling longer inputs for tasks such as document summarization and extended conversations. While the full-precision models support this context length, the quantized versions are typically configured for an 8,000-token context. The design prioritizes low-latency inference, making the model suitable for applications that require rapid responses on limited hardware, such as mobile AI-powered writing assistants and customer-service tools. The pre-trained variants also provide a foundation for further fine-tuning on a range of natural language generation tasks.
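For reference, a minimal way to run the instruction-tuned checkpoint is through the Hugging Face transformers pipeline API. This sketch assumes transformers and accelerate are installed and that access to the gated meta-llama repository has been granted; the prompt and generation settings are illustrative.

```python
from transformers import pipeline

# Chat-style generation with the instruction-tuned 3B checkpoint.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype="auto",   # bf16/fp16 where the hardware supports it
    device_map="auto",    # requires the accelerate package
)
messages = [
    {"role": "user", "content": "Rewrite more concisely: the meeting has been moved to a later time today."},
]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```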

About Llama 3.2

Meta's Llama 3.2 family introduces vision models, integrating image encoders with language models for multimodal text and image processing. It also includes lightweight variants optimized for efficient on-device deployment, supporting an extended 128K token context length.
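To make the quantization scheme described earlier concrete, the toy sketch below applies 4-bit groupwise quantization to a weight tensor. The group size of 32 and symmetric rounding are assumptions chosen for illustration, not Meta's exact ExecuTorch recipe.

```python
import torch

def quantize_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    # One floating-point scale per group of `group_size` weights;
    # codes are signed 4-bit integers stored in an int8 tensor.
    groups = w.reshape(-1, group_size)
    scale = groups.abs().amax(dim=1, keepdim=True) / 7.0
    codes = torch.clamp(torch.round(groups / scale), -8, 7).to(torch.int8)
    return codes, scale

def dequantize(codes: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (codes.float() * scale).flatten()

w = torch.randn(3072 * 3072)  # e.g. one attention projection matrix, flattened
codes, scale = quantize_4bit_groupwise(w)
err = (w - dequantize(codes, scale)).abs().max()
print(f"max abs quantization error: {err:.4f}")
```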



Evaluation Benchmarks

Rankings are relative to other local LLMs.

Benchmark | Score | Rank
- | 0.26 | 18
- | 0.26 | 21
GPQA (Graduate-Level QA) | 0.33 | 26
MMLU (General Knowledge) | 0.33 | 32

Rankings

Overall Rank: #48
Coding Rank: #44

GPU Requirements

VRAM requirements depend on the chosen weight quantization method and the configured context size, which can range from 1K up to the full 128K tokens.
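As a rough guide, VRAM use is dominated by the model weights plus the KV cache. The sketch below estimates both, assuming the geometry listed above (28 layers, 8 KV heads, head dimension 128), fp16 KV-cache entries, and a flat 10% allowance for activations and buffers; actual usage varies by runtime and kernel implementation.

```python
# Back-of-the-envelope VRAM estimate for Llama 3.2 3B (illustrative only).
N_PARAMS = 3.21e9   # total weight parameters
N_LAYERS = 28
N_KV_HEADS = 8
HEAD_DIM = 128      # hidden size 3072 / 24 attention heads

def vram_gb(bits_per_weight: float, context_tokens: int,
            kv_bytes: int = 2, overhead: float = 1.10) -> float:
    """Weights + KV cache, with a ~10% allowance for activations/buffers."""
    weight_bytes = N_PARAMS * bits_per_weight / 8
    # K and V caches: 2 tensors per layer, one vector per KV head per token.
    kv_cache_bytes = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context_tokens * kv_bytes
    return (weight_bytes + kv_cache_bytes) * overhead / 1024**3

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {vram_gb(bits, 8_192):5.1f} GB at 8K context, "
          f"{vram_gb(bits, 131_072):5.1f} GB at 128K context")
```

By this estimate the int4 weights alone fit in under 2 GB, while a full 128K-token KV cache adds roughly 14 GB at fp16: at this model size, long contexts rather than weights dominate memory.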
