
Llama 3.3 70B

Parameters: 70B
Context Length: 128K
Modality: Text
Architecture: Dense
License: Llama 3.3 Community License
Release Date: 7 Dec 2024
Knowledge Cutoff: Dec 2023

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 8192
Number of Layers: 80
Attention Heads: 64
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE (Rotary Position Embedding)
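
Taken together, these specifications fix the shape of the network. The dataclass below collects them in one place as a minimal sketch; the field names, the vocabulary size, and the exact maximum position count are assumptions based on the public Llama 3 family specs, not an official Meta config schema.

```python
from dataclasses import dataclass

@dataclass
class Llama33_70BConfig:
    """Illustrative hyperparameters from the spec table above
    (field names are assumptions, not Meta's config schema)."""
    hidden_size: int = 8192
    num_layers: int = 80
    num_attention_heads: int = 64
    num_key_value_heads: int = 8       # GQA: 8 query heads share each KV head
    vocab_size: int = 128_256          # Llama 3 tokenizer vocabulary (assumed)
    max_position_embeddings: int = 131_072  # ~128K-token context (assumed)
    activation: str = "SwiGLU"
    norm: str = "RMSNorm"
    position_embedding: str = "RoPE"

    @property
    def head_dim(self) -> int:
        # 8192 / 64 = 128 dimensions per attention head
        return self.hidden_size // self.num_attention_heads
```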

System Requirements

VRAM requirements for different quantization methods and context sizes
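
As a rough sanity check on those numbers: weight memory is parameter count times bytes per parameter, and the KV cache grows linearly with context length. The sketch below is back-of-the-envelope arithmetic derived from the spec table above, not the page's calculator; real runtimes add overhead for activations, framework buffers, and fragmentation.

```python
# Back-of-the-envelope VRAM estimate for Llama 3.3 70B (illustrative
# arithmetic, not the page's calculator).

PARAMS = 70e9
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128  # head_dim = 8192 / 64 attention heads

def weight_gib(bytes_per_param: float) -> float:
    return PARAMS * bytes_per_param / 2**30

def kv_cache_gib(context_tokens: int, bytes_per_value: int = 2) -> float:
    # Factor of 2 = one K and one V vector per layer per token (FP16 cache)
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_value
    return context_tokens * per_token / 2**30

for label, bpp in [("FP16", 2.0), ("INT8", 1.0), ("Q4", 0.5)]:
    print(f"{label}: ~{weight_gib(bpp):.0f} GiB weights "
          f"+ ~{kv_cache_gib(128_000):.0f} GiB KV cache at 128K context")
```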

Llama 3.3 70B

Meta Llama 3.3 70B is a large language model built for text-based generative applications. It is a dense Transformer with an optimized architecture, instruction-tuned for dialogue, and performs well in multilingual chat, code assistance, and synthetic data generation. It was pretrained on approximately 15 trillion tokens of publicly available online data.

Architecturally, Llama 3.3 70B uses Grouped-Query Attention (GQA) to improve inference scalability and efficiency. Its training regimen includes supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), applied to align outputs with human preferences for helpfulness and safety. A notable feature is the extended context window of up to 128,000 tokens, which enables processing and generating long text sequences for advanced use cases such as long-form summarization and complex multi-turn conversations.
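
To make the GQA arrangement concrete: with 64 query heads and 8 key-value heads, each KV head is shared by a group of 8 query heads, shrinking the KV cache eightfold. The function below is a minimal single-pass sketch (no causal mask, no RoPE), my own simplification rather than Meta's implementation:

```python
import torch

def grouped_query_attention(q, k, v):
    """Minimal GQA sketch (illustrative; omits causal masking and RoPE).
    q: (batch, num_q_heads, seq, head_dim), e.g. 64 query heads
    k, v: (batch, num_kv_heads, seq, head_dim), e.g. 8 KV heads
    """
    group = q.shape[1] // k.shape[1]       # 64 // 8 = 8 query heads per KV head
    k = k.repeat_interleave(group, dim=1)  # broadcast each KV head to its group
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

# Head counts and head_dim (8192 / 64 = 128) taken from the spec table above
q = torch.randn(1, 64, 16, 128)
k = torch.randn(1, 8, 16, 128)
v = torch.randn(1, 8, 16, 128)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 64, 16, 128])
```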

The model accepts and produces multilingual text in English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. It also supports tool use, allowing developers to extend its functionality with custom function definitions and integrations with third-party services. The design emphasizes efficiency and reduced hardware requirements, making high-quality AI accessible to a wider range of applications.
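
As an illustration of that tool-use flow, here is a sketch assuming the Hugging Face transformers chat-template API and its tool-calling support; `get_weather` is a hypothetical function for demonstration, not something the model ships with. The tool's signature is rendered into the prompt so the model can respond with a structured call instead of free-form text.

```python
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...  # hypothetical tool; the model only ever sees its schema

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]

# The chat template serializes the function signature into the prompt; the
# model can then emit a structured call such as
# {"name": "get_weather", "parameters": {"city": "Lisbon"}}.
prompt = tokenizer.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, tokenize=False
)
print(prompt)
```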

About Llama 3.3

Meta's Llama 3.3 is a 70 billion parameter, multilingual large language model. It utilizes an optimized transformer architecture, incorporating Grouped-Query Attention for enhanced inference efficiency. The model features an extended 128k token context window and is designed to support quantization, facilitating deployment on varied hardware configurations.
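
For example, one common way to exploit that quantization support (a sketch assuming the bitsandbytes integration in transformers; other runtimes such as llama.cpp offer GGUF quantizations instead) is to load the weights in 4-bit NF4, cutting weight memory from roughly 140 GB at FP16 to around 40 GB:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization of the weights; compute still runs in bfloat16.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    quantization_config=bnb,
    device_map="auto",  # shard layers across available GPUs
)
```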



Evaluation Benchmarks

Rankings are relative to other local LLMs.

Rank: #27

| Category | Benchmark | Score | Rank |
| --- | --- | --- | --- |
| | | 0.59 | 6 |
| | | 0.85 | 9 |
| | | 0.59 | 10 |
| | | 0.90 | 11 |
| | | 0.68 | 11 |
| Professional Knowledge | MMLU Pro | 0.69 | 12 |
| Graduate-Level QA | GPQA | 0.51 | 14 |
| | | 0.52 | 16 |
| | | 0.49 | 22 |
| General Knowledge | MMLU | 0.51 | 22 |
| | | 0.33 | 23 |
| | | 0.41 | 23 |

Rankings

Overall Rank: #27
Coding Rank: #18

GPU Requirements

[Interactive VRAM calculator: select a weight quantization method and a context size (1K to 127K tokens, default 1,024) to see the estimated VRAM required and recommended GPUs. The full calculator is available on the site.]