Parameters: 7B
Context Length: 8,192 tokens
Modality: Text
Architecture: Dense
License: Gemma Terms of Use
Release Date: 21 Feb 2024
Knowledge Cutoff: -
Attention Structure: Multi-Head Attention
Hidden Dimension Size: 3072
Number of Layers: 28
Attention Heads: 16
Key-Value Heads: 16
Activation Function: GeGLU
Normalization: RMSNorm
Position Embedding: RoPE
Gemma is a family of lightweight, decoder-only language models developed by Google, drawing on the same research and technology used to create the Gemini models. The 7-billion-parameter variant, Gemma 1 7B, is a transformer decoder designed for text-to-text generation tasks, including question answering, summarization, and reasoning.
Key architectural components include Multi-Head Attention (MHA) and Rotary Positional Embeddings (RoPE) for encoding positional information. The activation function is GeGLU, and normalization is performed with RMSNorm. Training ran on Google's fifth-generation Tensor Processing Units (TPUv5e), using JAX and ML Pathways for efficient large-scale training.
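As a rough illustration (not the reference implementation), the following JAX sketch shows how an RMSNorm and a GeGLU feed-forward block of this kind can be expressed, using the 3072 hidden dimension from the specification above. The intermediate width of 24576 is an assumption taken from the published model configuration, not from this page.

    import jax.numpy as jnp
    from jax import nn

    HIDDEN = 3072    # hidden dimension from the specification above
    FFN = 24576      # assumed intermediate width; not listed on this page

    def rms_norm(x, weight, eps=1e-6):
        # RMSNorm: rescale by the reciprocal root-mean-square (no mean subtraction)
        rms = jnp.sqrt(jnp.mean(x * x, axis=-1, keepdims=True) + eps)
        return (x / rms) * weight

    def geglu_ffn(x, w_gate, w_up, w_down):
        # GeGLU: a GELU-gated linear unit feeding the down projection
        return (nn.gelu(x @ w_gate) * (x @ w_up)) @ w_down

    # Toy usage with placeholder parameters
    x = jnp.ones((1, HIDDEN))
    w_gate = jnp.ones((HIDDEN, FFN)) * 0.01
    w_up = jnp.ones((HIDDEN, FFN)) * 0.01
    w_down = jnp.ones((FFN, HIDDEN)) * 0.01
    y = geglu_ffn(rms_norm(x, jnp.ones(HIDDEN)), w_gate, w_up, w_down)
    print(y.shape)  # (1, 3072)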
Gemma 1 7B was trained on approximately 6 trillion tokens of primarily English-language data, encompassing diverse web documents, mathematical texts, and code. Data preprocessing involved stringent filtering to remove harmful or sensitive content, aligning with responsible AI development practices. The model's relatively compact size allows for deployment across various environments, from personal laptops and workstations to cloud infrastructure.
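For local or cloud deployment, a minimal usage sketch with the Hugging Face Transformers library is shown below; the checkpoint ID google/gemma-7b is gated and requires accepting the Gemma Terms of Use before download.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
    model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype="auto")

    prompt = "Explain rotary positional embeddings in one paragraph."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))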
Across the Gemma 1 family, the 2B and 7B models share rotary positional embeddings, shared input/output embeddings, GeGLU activation, and RMSNorm. They differ in attention structure: the 2B model uses multi-query attention, while the 7B uses multi-head attention, as the toy sketch below illustrates.
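The difference is only in how many key/value heads are kept. A toy shape demonstration (assumed sizes, not Gemma's code): with multi-head attention each query head has its own K/V head, while multi-query attention broadcasts a single shared K/V head across all query heads.

    import jax.numpy as jnp

    SEQ, HEAD_DIM, Q_HEADS = 8, 256, 16   # toy sizes; head_dim of 256 is assumed

    q = jnp.ones((Q_HEADS, SEQ, HEAD_DIM))

    def scores(q, k):
        # attention logits per head; k broadcasts over the head axis for MQA
        return (q @ k.transpose(0, 2, 1)) / jnp.sqrt(HEAD_DIM)

    k_mha = jnp.ones((Q_HEADS, SEQ, HEAD_DIM))  # MHA (7B): one K/V head per query head
    k_mqa = jnp.ones((1, SEQ, HEAD_DIM))        # MQA (2B): a single shared K/V head

    print(scores(q, k_mha).shape)  # (16, 8, 8)
    print(scores(q, k_mqa).shape)  # (16, 8, 8), via broadcasting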
Rankings on this page cover local LLMs. No evaluation benchmarks are available for Gemma 1 7B, so no overall or coding rank is listed.
VRAM requirements vary with the quantization method chosen for the model weights and with the context size.
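In place of the interactive calculator, a rough back-of-the-envelope estimator is sketched below. It counts only quantized weight bytes plus an fp16 KV cache, using the layer and head figures listed above; the head dimension of 256 is an assumption not shown on this page, and real deployments add runtime overhead and activation memory on top.

    # Rough VRAM estimate: quantized weights plus fp16 KV cache.
    PARAMS = 7e9                              # ~7B weights
    LAYERS, KV_HEADS, HEAD_DIM = 28, 16, 256  # head_dim of 256 is an assumption

    def vram_gib(bits_per_weight: float, context: int) -> float:
        weights = PARAMS * bits_per_weight / 8               # quantized weight bytes
        kv = 2 * LAYERS * KV_HEADS * HEAD_DIM * context * 2  # K and V, 2 bytes each
        return (weights + kv) / 2**30

    for bits in (16, 8, 4):
        print(f"{bits}-bit weights, 1,024-token context: "
              f"{vram_gib(bits, 1024):.1f} GiB")

At a 1,024-token context the KV cache adds well under 0.5 GiB under these assumptions, so the choice of weight quantization dominates the total footprint.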