Gemma 2 9B

Parameters

9B

Context Length

8,192 tokens

Modality

Text

Architecture

Dense

License

Gemma License

Release Date

27 Jun 2024

Knowledge Cutoff

-

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

3584

Number of Layers

42

Attention Heads

16

Key-Value Heads

8

Activation Function

GeGLU

Normalization

RMS Normalization

Position Embedding

RoPE
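
These hyperparameters can also be read straight from the model configuration. The sketch below assumes the Hugging Face checkpoint ID google/gemma-2-9b (a gated repository, so an accepted license and access token are required) and a transformers release with Gemma 2 support; the expected values in the comments follow the table above.

```python
# Sketch: reading Gemma 2 9B architecture hyperparameters from the Hugging Face
# config. Assumes the "google/gemma-2-9b" checkpoint ID and transformers >= 4.42.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/gemma-2-9b")

print("hidden size:        ", config.hidden_size)              # expected 3584 per the table above
print("layers:             ", config.num_hidden_layers)        # expected 42
print("attention heads:    ", config.num_attention_heads)      # expected 16
print("key-value heads:    ", config.num_key_value_heads)      # expected 8 (grouped-query attention)
print("head dim:           ", config.head_dim)                 # expected 256
print("sliding window:     ", config.sliding_window)           # local-attention window, expected 4096
print("max position embeds:", config.max_position_embeddings)  # expected 8192
```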

System Requirements

VRAM requirements for different quantization methods and context sizes
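
As a rough guide, inference VRAM is dominated by the weights (parameter count times bytes per parameter for the chosen quantization) plus the key-value cache, which grows linearly with context length. The sketch below is a back-of-the-envelope estimator under assumed sizes (about 9.2B parameters, idealized bytes-per-parameter figures, an fp16 KV cache) and ignores activation buffers and runtime overhead, so treat the numbers as approximations rather than exact requirements.

```python
# Rough, illustrative VRAM estimate for Gemma 2 9B inference; real usage varies
# with the runtime, activation buffers, and quantization format overhead.
PARAMS = 9.24e9                      # approximate parameter count (assumption)
LAYERS, KV_HEADS, HEAD_DIM = 42, 8, 256

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "q4": 0.5}   # idealized sizes

def estimate_vram_gb(quant: str, context_tokens: int, kv_bytes: float = 2.0) -> float:
    """Weights + KV cache, ignoring activations and framework overhead."""
    weights = PARAMS * BYTES_PER_PARAM[quant]
    # Key and value caches: 2 tensors per layer, one per key-value head.
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * context_tokens * kv_bytes
    return (weights + kv_cache) / 1024**3

for quant in ("fp16", "int8", "q4"):
    for ctx in (1024, 4096, 8192):
        print(f"{quant:>5} @ {ctx:>5} tokens: ~{estimate_vram_gb(quant, ctx):.1f} GiB")
```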

Gemma 2 9B

Gemma 2 9B is a decoder-only, text-to-text large language model developed by Google, part of the Gemma family of models. It is engineered to deliver efficient, high-performance language generation, primarily for English-language applications. This variant is available in both base (pre-trained) and instruction-tuned versions, making it adaptable to a range of natural language processing tasks. The model is designed to be accessible, enabling deployment in environments with limited computational resources, such as personal computers and self-hosted cloud infrastructure.

The architecture of Gemma 2 9B incorporates several refinements aimed at performance and inference efficiency. It uses Rotary Position Embedding (RoPE) for positional encoding and Grouped-Query Attention (GQA), which shrinks the key-value cache and reduces memory bandwidth during decoding. The model also interleaves its attention layers, alternating between sliding-window attention over a 4,096-token window and full global attention over the 8,192-token context, balancing long-range context against computational cost. For training stability, it applies RMSNorm both before and after each sub-layer (pre- and post-normalization) and soft-caps attention and final logits. The 9B model is trained with knowledge distillation, learning from the output distribution of a larger teacher model rather than from next-token labels alone. Its training corpus comprised 8 trillion tokens, primarily web documents, code, and mathematical content.
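
The sketch below illustrates two of these mechanisms in isolation: logit soft-capping and the alternating local/global attention pattern. The cap values (50.0 for attention scores, 30.0 for final logits) and the even/odd layer alternation reflect common descriptions of the architecture and are assumptions here, not a reference implementation.

```python
# Sketch of two Gemma 2 mechanisms described above; cap values and the layer
# alternation are assumptions for illustration, not extracted model code.
import torch

def soft_cap(logits: torch.Tensor, cap: float) -> torch.Tensor:
    # Soft-capping squashes logits smoothly into (-cap, cap), preventing extreme
    # values from destabilizing training. Attention scores are typically capped
    # at 50.0 and final output logits at 30.0 (assumed values).
    return cap * torch.tanh(logits / cap)

def attention_window(layer_idx: int, context_len: int = 8192, sliding_window: int = 4096) -> int:
    # Layers alternate between local sliding-window attention and full global
    # attention over the whole context.
    return sliding_window if layer_idx % 2 == 0 else context_len

scores = torch.randn(4, 4) * 100                 # exaggerated attention scores
print(soft_cap(scores, cap=50.0))                # all values now lie strictly within (-50, 50)
print([attention_window(i) for i in range(6)])   # [4096, 8192, 4096, 8192, 4096, 8192]
```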

Gemma 2 9B is suitable for a diverse set of applications, including content creation such as poetry, copywriting, and code generation. Its instruction-tuned variant is particularly effective for conversational agents and chatbots, supporting tasks like question answering and summarization. The model is designed for efficient inference, allowing use on a range of hardware from consumer-grade GPUs to optimized cloud setups. Its openly available weights, distributed under the Gemma license, aim to foster broad adoption and innovation within the research and developer communities.
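
A minimal chat example with the instruction-tuned checkpoint is sketched below; it assumes the google/gemma-2-9b-it repository ID, an accepted license on Hugging Face, a recent transformers release, and a GPU with roughly 18 GB of free memory for bfloat16 weights.

```python
# Minimal chat example with the instruction-tuned variant, assuming the
# "google/gemma-2-9b-it" checkpoint and enough GPU memory for bfloat16 weights.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="google/gemma-2-9b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what grouped-query attention does in two sentences."}]
output = chat(messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])  # last message is the model's reply
```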

About Gemma 2

Gemma 2 is Google's family of open-weight large language models, offered in 2B, 9B, and 27B parameter sizes. Building on the original Gemma architecture, it adds interleaved local and global attention, logit soft-capping for training stability, and Grouped-Query Attention for inference efficiency. The two smaller models (2B and 9B) are trained with knowledge distillation from a larger teacher.
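
For illustration, token-level knowledge distillation typically trains the student to match the teacher's next-token distribution instead of, or in addition to, one-hot labels. The sketch below shows that idea with a generic KL-divergence loss; the shapes, temperature, and helper name are illustrative and this is not Google's training code.

```python
# Generic sketch of token-level knowledge distillation: the student is trained
# to match the teacher's next-token distribution via a KL-divergence loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    # Shapes: (batch, sequence, vocab). Teacher probabilities are fixed targets;
    # only the student receives gradients.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

# Toy example with a tiny vocabulary.
student = torch.randn(2, 5, 10, requires_grad=True)
teacher = torch.randn(2, 5, 10)
loss = distillation_loss(student, teacher.detach())
loss.backward()
print(float(loss))
```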


Evaluation Benchmarks

Rankings are relative to other local LLMs.

Rank

#42

Benchmark | Score | Rank

General Knowledge

MMLU | 0.71 | 8
- | 0.72 | 13
- | 0.82 | 14
- | 0.58 | 16

Rankings

Overall Rank

#42

Coding Rank

#35

GPU Requirements
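
Where the full 16-bit weights do not fit, 4-bit weight quantization is a common way to bring the model under a smaller VRAM budget. The sketch below assumes a CUDA GPU, the bitsandbytes and accelerate packages, and the google/gemma-2-9b-it checkpoint; the resulting footprint still depends on context length and runtime overhead.

```python
# Sketch: loading Gemma 2 9B with 4-bit weight quantization to fit smaller GPUs.
# Assumes a CUDA GPU plus the bitsandbytes and accelerate packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=quant_config,
    device_map="auto",
)

inputs = tokenizer("Write a haiku about efficient inference.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```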

