Parameters: 0.27B
Context Length: 32K
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 14 Aug 2025
Knowledge Cutoff: -
Attention Structure: Grouped-Query Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Rotary Position Embedding (RoPE)
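For orientation, here is a minimal sketch of loading the model with Hugging Face transformers. The repo id google/gemma-3-270m is an assumption based on Google's usual Hub naming and is not stated on this page; verify it before use.

```python
# Minimal usage sketch, assuming the model is published on the
# Hugging Face Hub under "google/gemma-3-270m" (unverified here).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The spec table above lists a 32K context window; keep prompts within it.
inputs = tokenizer("Gemma 3 270M is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```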
VRAM requirements depend on the weight quantization method and the context size.
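As a rough guide, the sketch below estimates VRAM from the 0.27B parameter count in the table above plus a KV-cache term. The bytes-per-weight ratios are approximations, and the layer, head-dimension, and KV-head values are placeholders (the page lists them as "-"), so treat the output as an order-of-magnitude estimate, not a measurement.

```python
# Rough VRAM estimator for a dense decoder-only model, using the
# published Gemma 3 270M figure (0.27B parameters). Layer count,
# head dim, and KV-head count are placeholders: assumptions, not specs.

BYTES_PER_PARAM = {
    "fp16": 2.0,    # 16-bit weights
    "q8_0": 1.0,    # ~8 bits per weight
    "q4_k_m": 0.5,  # ~4 bits per weight (approximate)
}

def estimate_vram_gib(
    n_params: float = 0.27e9,  # parameter count from the spec table
    quant: str = "q4_k_m",     # chosen weight quantization
    context: int = 1024,       # context size in tokens
    n_layers: int = 20,        # placeholder: not listed on this page
    n_kv_heads: int = 1,       # placeholder: not listed on this page
    head_dim: int = 256,       # placeholder: not listed on this page
    kv_bytes: float = 2.0,     # fp16 KV cache
    overhead: float = 1.2,     # ~20% runtime overhead, a rule of thumb
) -> float:
    weights = n_params * BYTES_PER_PARAM[quant]
    # KV cache: 2 tensors (K and V) per layer, each
    # n_kv_heads * head_dim wide, stored for every token in context.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context * kv_bytes
    return (weights + kv_cache) * overhead / 1024**3

if __name__ == "__main__":
    for q in BYTES_PER_PARAM:
        print(f"{q}: ~{estimate_vram_gib(quant=q):.2f} GiB at 1,024 tokens")
```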
A compact open-source model optimized for hyper-efficient on-device and edge applications.
Gemma 3 is a family of open, lightweight models from Google. It introduces multimodal image and text processing, supports over 140 languages, and features extended context windows up to 128K tokens. Models are available in multiple parameter sizes for diverse applications.
Rankings are relative to local LLMs only. No evaluation benchmarks are available for Gemma 3 270M.
Overall Rank: -
Coding Rank: -