Gemma 1 7B

Parameters: 7B
Context Length: 8,192 tokens
Modality: Text
Architecture: Dense
License: Gemma Terms of Use
Release Date: 21 Feb 2024
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Multi-Head Attention
Hidden Dimension Size: 3072
Number of Layers: 28
Attention Heads: 16
Key-Value Heads: 16
Activation Function: GeGLU
Normalization: RMS Normalization (RMSNorm)
Position Embedding: RoPE (Rotary Position Embedding)
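
As a quick sanity check on the figures above, the hyperparameters can be combined into a back-of-the-envelope parameter count. The Python sketch below assumes a few values that do not appear on this page, namely the vocabulary size (256,000), the GeGLU intermediate width (24,576), and the per-head dimension (256), taken from the publicly released Gemma 7B configuration; treat the result as an estimate, not an official figure.

```python
from dataclasses import dataclass

@dataclass
class GemmaConfig:
    """Architecture hyperparameters from the spec sheet above.

    vocab_size, intermediate_size, and head_dim are not listed on this page;
    the values here are assumptions taken from the public Gemma 7B config.
    """
    hidden_size: int = 3072         # Hidden Dimension Size
    num_layers: int = 28            # Number of Layers
    num_heads: int = 16             # Attention Heads (multi-head attention)
    num_kv_heads: int = 16          # Key-Value Heads
    head_dim: int = 256             # assumed
    intermediate_size: int = 24576  # assumed (GeGLU MLP width)
    vocab_size: int = 256_000       # assumed

def approx_param_count(cfg: GemmaConfig) -> int:
    """Rough parameter count; ignores RMSNorm scales (a negligible fraction)."""
    # Token embedding, shared with the output projection ("shared input/output embeddings").
    embed = cfg.vocab_size * cfg.hidden_size
    # Attention: Q and output projections use num_heads; K and V use num_kv_heads.
    attn = (cfg.hidden_size * cfg.num_heads * cfg.head_dim           # Q
            + 2 * cfg.hidden_size * cfg.num_kv_heads * cfg.head_dim  # K, V
            + cfg.num_heads * cfg.head_dim * cfg.hidden_size)        # output
    # GeGLU feed-forward: gate and up projections plus a down projection.
    mlp = 3 * cfg.hidden_size * cfg.intermediate_size
    return embed + cfg.num_layers * (attn + mlp)

print(f"~{approx_param_count(GemmaConfig()) / 1e9:.2f}B parameters")  # ~8.54B
```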


Gemma 1 7B

Gemma is a family of lightweight, decoder-only transformer language models developed by Google, built on the same research and technology used to create the Gemini models. The 7-billion-parameter variant, Gemma 1 7B, is designed for text-to-text generation tasks such as question answering, summarization, and reasoning.

Key architectural components include multi-head attention (MHA) and rotary positional embeddings (RoPE) for encoding positional information. The feed-forward blocks use the GeGLU activation, and normalization is performed with RMSNorm. Training was carried out on Google's fifth-generation Tensor Processing Units (TPUv5e), using JAX and ML Pathways for efficient large-scale training.
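
For readers who prefer code to prose, the building blocks named above (RMSNorm, GeGLU, and RoPE) can be sketched in a few lines. The NumPy sketch below is illustrative only and is not Gemma's production implementation (which runs in JAX on TPUs); in particular, the dimension-pairing convention used in `rope` is one of several in common use.

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: scale by the reciprocal root-mean-square of the features (no mean subtraction).
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def geglu(x, w_gate, w_up, w_down):
    # GeGLU feed-forward block: GELU-gated linear unit followed by a down projection.
    return (gelu(x @ w_gate) * (x @ w_up)) @ w_down

def rope(x, base=10000.0):
    # Rotary positional embedding applied to a (seq_len, head_dim) slice of queries or keys.
    seq_len, head_dim = x.shape
    half = head_dim // 2
    freqs = base ** (-np.arange(half) / half)      # per-pair rotation frequencies
    angles = np.outer(np.arange(seq_len), freqs)   # (seq_len, half)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate(
        [x1 * np.cos(angles) - x2 * np.sin(angles),
         x1 * np.sin(angles) + x2 * np.cos(angles)], axis=-1)
```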

Gemma 1 7B was trained on approximately 6 trillion tokens of primarily English-language data, encompassing diverse web documents, mathematical texts, and code. Data preprocessing involved stringent filtering to remove harmful or sensitive content, aligning with responsible AI development practices. The model's relatively compact size allows for deployment across various environments, from personal laptops and workstations to cloud infrastructure.
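
Because this page does not prescribe a serving stack, the snippet below shows one common way to run the model on a local workstation or cloud GPU, via the Hugging Face Transformers library; the `google/gemma-7b` checkpoint is gated behind acceptance of the Gemma Terms of Use. Treat it as a minimal sketch rather than a recommended deployment.

```python
# Minimal local-inference sketch using Hugging Face Transformers (one option
# among many; not prescribed by this page). Requires accepting the Gemma
# Terms of Use for the "google/gemma-7b" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~17 GB of weights at 16-bit precision (≈8.5B params × 2 bytes)
    device_map="auto",           # spread layers across available GPU(s)/CPU
)

prompt = "Summarize in one sentence: Gemma is a family of lightweight open models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```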

About Gemma 1

Gemma 1 is a family of lightweight, decoder-only transformer models from Google, available in 2B and 7B parameter sizes. Designed for a range of text generation tasks, they incorporate rotary positional embeddings, shared input/output embeddings, GeGLU activation, and RMSNorm. The 2B model uses multi-query attention, while the 7B model uses multi-head attention.
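
The attention-structure difference is mostly a memory trade-off: multi-query attention caches keys and values for a single head, so the 2B model's KV cache grows far more slowly with context length than the 7B model's. The sketch below assumes the published configurations (18 layers and 1 KV head for 2B, 28 layers and 16 KV heads for 7B, 256-dim heads, fp16 cache), none of which are stated in this section.

```python
# Illustration of why multi-query attention (2B) needs a much smaller KV cache
# than multi-head attention (7B). The 2B figures (18 layers, 1 KV head) and the
# shared head size of 256 are assumptions from the public configs, not from this page.
def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int = 256, dtype_bytes: int = 2) -> int:
    # Keys and values are cached for every layer and every KV head.
    return 2 * layers * kv_heads * head_dim * dtype_bytes

print("Gemma 2B (MQA):", kv_bytes_per_token(layers=18, kv_heads=1) / 1024, "KiB per cached token")
print("Gemma 7B (MHA):", kv_bytes_per_token(layers=28, kv_heads=16) / 1024, "KiB per cached token")
```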



Evaluation Benchmarks

Rankings are relative to other local LLMs.

No evaluation benchmarks are available for Gemma 1 7B.

Rankings

Overall Rank: -
Coding Rank: -

GPU Requirements

VRAM requirements depend on the chosen weight quantization method and the context size (1k, 4k, or 8k tokens); the site's full calculator provides exact figures and recommended GPUs for each combination.
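
The interactive calculator itself does not survive in this text version, but its estimate can be approximated in a few lines. The sketch below assumes an ~8.5B total parameter count, 256-dim attention heads, an fp16 KV cache, and a flat 20% runtime overhead; none of these assumptions come from this page, so treat the output as rough guidance rather than the calculator's exact figures.

```python
# Rough VRAM estimate for Gemma 1 7B across quantization methods and the
# calculator's context sizes. Parameter count, head_dim, and the 20% overhead
# factor are assumptions, not values taken from this page.
PARAMS = 8.5e9                      # approximate total parameters
LAYERS, KV_HEADS, HEAD_DIM = 28, 16, 256
BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def vram_gib(quant: str, context_tokens: int, kv_bytes: int = 2, overhead: float = 1.2) -> float:
    weights = PARAMS * BYTES_PER_WEIGHT[quant]
    # KV cache: keys and values for every layer, KV head, and cached token.
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * kv_bytes * context_tokens
    return (weights + kv_cache) * overhead / 2**30

for quant in BYTES_PER_WEIGHT:
    for ctx in (1024, 4096, 8192):
        print(f"{quant:>5} @ {ctx:>5} tokens: ~{vram_gib(quant, ctx):.1f} GiB")
```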
