| Specification | Value |
|---|---|
| Parameters | 70B |
| Context Length | 128K |
| Modality | Text |
| Architecture | Dense |
| License | Llama 3.3 Community License |
| Release Date | 7 Dec 2024 |
| Knowledge Cutoff | Dec 2023 |
| Attention Structure | Grouped-Query Attention |
| Hidden Dimension Size | 8192 |
| Number of Layers | 80 |
| Attention Heads | 64 |
| Key-Value Heads | 8 |
| Activation Function | SwiGLU |
| Normalization | RMS Normalization |
| Position Embedding | RoPE |
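As a sanity check, the dimensions in the table roughly reproduce the 70B parameter count. A minimal sketch, assuming a Llama-style SwiGLU feed-forward block with an intermediate size of 28,672 and a vocabulary of 128,256 tokens (both standard for Llama 3 70B checkpoints but not listed in the table above):

```python
# Rough parameter-count estimate for Llama 3.3 70B from the spec table.
# Assumed values (not in the table): vocab size 128,256 and SwiGLU
# intermediate size 28,672, standard for Llama 3 70B checkpoints.
hidden = 8192
layers = 80
heads = 64
kv_heads = 8
head_dim = hidden // heads          # 128
vocab = 128_256                     # assumption
ffn = 28_672                        # assumption

# Attention: Q and O projections are hidden x hidden; with GQA the
# K and V projections shrink to hidden x (kv_heads * head_dim).
attn = 2 * hidden * hidden + 2 * hidden * (kv_heads * head_dim)
# SwiGLU uses three projections: gate, up, and down.
mlp = 3 * hidden * ffn
per_layer = attn + mlp

total = layers * per_layer + 2 * vocab * hidden  # + embeddings and LM head
print(f"{total / 1e9:.1f}B parameters")          # ~70.6B
```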
Meta Llama 3.3 70B is a large language model engineered for text-based generative applications. It is a dense, decoder-only Transformer with an optimized architectural design. This variant is instruction-tuned for dialogue and performs well in multilingual chat, code assistance, and synthetic data generation. It was pretrained on approximately 15 trillion tokens drawn from publicly available online data.
Architecturally, Llama 3.3 70B integrates Grouped-Query Attention (GQA) to improve inference scalability and efficiency. Its training regimen includes supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), applied to align outputs with human preferences for helpfulness and safety. A notable feature is the extended context window of up to 128K tokens, enabling longer text sequences for advanced use cases such as long-form summarization and complex multi-turn conversations.
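The efficiency gain from GQA is easy to quantify: the KV cache stores keys and values for only 8 heads instead of 64. A minimal sketch of the cache footprint at full context, assuming fp16 cache entries and the head dimension of 128 implied by the table (8192 / 64):

```python
# KV-cache size for Llama 3.3 70B, with and without GQA.
layers, heads, kv_heads, head_dim = 80, 64, 8, 128
bytes_per_val = 2  # fp16 (assumption; the cache dtype depends on the runtime)

def kv_cache_bytes(n_kv_heads: int, context: int) -> int:
    # 2x for keys and values, per layer, per cached token.
    return 2 * layers * n_kv_heads * head_dim * bytes_per_val * context

ctx = 128_000
print(f"GQA (8 KV heads):  {kv_cache_bytes(kv_heads, ctx) / 2**30:.1f} GiB")  # ~39 GiB
print(f"MHA (64 KV heads): {kv_cache_bytes(heads, ctx) / 2**30:.1f} GiB")     # ~313 GiB
```

The 8x reduction is what makes long-context inference on this model tractable on a single multi-GPU node.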
The model accepts and produces multilingual text, covering English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. It also supports tool use, letting developers extend its functionality through custom function definitions and integrations with third-party services. The design emphasizes efficiency and reduced hardware requirements, broadening access to high-quality AI for a range of applications.
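As an illustration of the tool-use interface, the sketch below passes a JSON-schema tool definition through a chat template using Hugging Face transformers. The `get_weather` function and the transformers workflow are assumptions for the example, not part of the model card:

```python
# Hedged sketch: formatting a tool-use prompt for Llama 3.3 70B with
# Hugging Face transformers. The get_weather tool is a made-up example.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]

# The chat template injects the tool schemas into the prompt; the model
# is expected to reply with a JSON tool call for the caller to parse.
prompt = tok.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
print(prompt)
```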
In short, Meta's Llama 3.3 is a 70-billion-parameter multilingual large language model. It uses an optimized transformer architecture with Grouped-Query Attention for efficient inference, provides a 128K-token context window, and supports quantization for deployment across varied hardware configurations.
Rankings below are relative to other local LLMs.
| Category | Benchmark | Score | Rank |
|---|---|---|---|
| Refactoring | Aider Refactoring | 0.59 | 6 |
| StackEval | ProLLM Stack Eval | 0.85 | 9 |
| Coding | Aider Coding | 0.59 | 10 |
| QA Assistant | ProLLM QA Assistant | 0.9 | 11 |
| Summarization | ProLLM Summarization | 0.68 | 11 |
| Professional Knowledge | MMLU Pro | 0.69 | 12 |
| Graduate-Level QA | GPQA | 0.51 | 14 |
| Coding | LiveBench Coding | 0.52 | 16 |
| Data Analysis | LiveBench Data Analysis | 0.49 | 22 |
| General Knowledge | MMLU | 0.51 | 22 |
| Reasoning | LiveBench Reasoning | 0.33 | 23 |
| Mathematics | LiveBench Mathematics | 0.41 | 23 |
Overall Rank: #27
Coding Rank: #18
VRAM requirements for different quantization methods and context sizes
Total memory is approximately the quantized weight footprint plus the KV cache for the chosen context size.
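A minimal estimator sketch, assuming common per-parameter weight widths (fp16 = 2 bytes, Q8 = 1 byte, Q4 ≈ 0.55 bytes including quantization scales) and the fp16 KV-cache figures derived earlier; real runtimes add overhead for activations and buffers, so treat these as lower bounds:

```python
# Rough VRAM estimate: quantized weights + fp16 KV cache.
# Bytes-per-parameter values are approximations; actual quantized
# file sizes vary by format (GGUF, AWQ, GPTQ, ...).
PARAMS = 70e9
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.55}

def kv_cache_gib(context: int) -> float:
    # 80 layers, 8 KV heads, head_dim 128, keys + values, fp16.
    return 2 * 80 * 8 * 128 * 2 * context / 2**30

for quant, bpp in BYTES_PER_PARAM.items():
    weights_gib = PARAMS * bpp / 2**30
    for ctx in (1_024, 32_768, 128_000):
        total = weights_gib + kv_cache_gib(ctx)
        print(f"{quant:>4} @ {ctx:>7} tokens: ~{total:.0f} GiB")
```

At the default 1,024-token context the KV cache is negligible (~0.3 GiB), so the quantization method dominates: roughly 130 GiB at fp16 versus ~36 GiB at Q4.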