
Gemma 3 270M

Parameters: 270M
Context Length: 32K
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 14 Aug 2025
Knowledge Cutoff: Aug 2024

Technical Specifications

Attention Structure: Multi-Head Attention
Hidden Dimension Size: 1024
Number of Layers: 12
Attention Heads: 16
Key-Value Heads: 16
Activation Function: GELU
Normalization: RMS Normalization
Position Embedding: Rotary Position Embedding (RoPE)

Gemma 3 270M

Gemma 3 270M is a compact, open-weights language model from Google, engineered for highly efficient deployment on edge devices and in resource-constrained environments. As the smallest member of the Gemma 3 family, it prioritizes task-specific specialization over general-purpose breadth. An unusually large share of its parameters sits in the embedding layer, which supports a 256k-token vocabulary; this enables precise handling of rare tokens, multilingual text, and domain-specific terminology across 140+ languages.

Technically, the model utilizes a dense transformer-based architecture with 12 transformer layers and a hidden dimension size of 1024. It incorporates modern architectural improvements such as Rotary Positional Embeddings (RoPE) and RMSNorm to stabilize training and inference at scale. Unlike its larger multimodal siblings in the Gemma 3 series, the 270M variant is a text-only model optimized for low-latency execution. It features an interleaved attention structure that combines local sliding window attention with global self-attention to manage memory overhead effectively while supporting a context window of 32,768 tokens.
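The interleaved attention pattern described above can be illustrated with a small mask-building sketch. The window size and alternating layer pattern below are illustrative assumptions, not the model's actual configuration:

```python
def causal_mask(seq_len, window=None):
    """Build a boolean attention mask: True where query i may attend to key j.

    window=None gives full (global) causal attention; an integer gives
    local sliding-window causal attention. Interleaving the two, as Gemma 3
    does, bounds the KV cache that most layers must keep around.
    """
    mask = []
    for i in range(seq_len):
        row = []
        for j in range(seq_len):
            visible = j <= i  # causal: never attend to future tokens
            if window is not None:
                visible = visible and (i - j < window)  # restrict to local window
            row.append(visible)
        mask.append(row)
    return mask

# Hypothetical interleaving: alternate local and global layers across 12 layers.
layer_masks = [causal_mask(8, window=4 if layer % 2 == 0 else None)
               for layer in range(12)]
```

Because a sliding-window layer only ever looks back a fixed number of positions, its key-value cache stays constant-size regardless of how long the context grows; only the global layers pay the full 32,768-token cost.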

Designed primarily for fine-tuning, Gemma 3 270M serves as a foundation for specialized applications such as text classification, entity extraction, and intent routing. Its small memory footprint allows it to run entirely on-device, including mobile phones and IoT hardware, with minimal energy consumption. By training on a massive 6-trillion-token corpus, the model achieves high knowledge density and strong instruction-following capabilities for its size, making it a professional-grade choice for developers seeking to deploy private, local AI solutions without relying on cloud infrastructure.
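As a rough back-of-envelope check of that footprint, the weight memory for 270M parameters under common quantizations can be estimated as follows (this ignores activations and the KV cache, which grow with context length):

```python
# Approximate weight-only memory for a 270M-parameter model at different
# quantization levels. Real loaders add overhead, so treat these as floors.
PARAMS = 270e6

BYTES_PER_PARAM = {
    "fp32": 4.0,
    "bf16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

def weight_memory_mb(quant, params=PARAMS):
    """Weight memory in MiB for the given quantization."""
    return params * BYTES_PER_PARAM[quant] / (1024 ** 2)

for q in BYTES_PER_PARAM:
    print(f"{q}: ~{weight_memory_mb(q):.0f} MB")
```

Even at bf16 precision the weights fit in roughly half a gigabyte, which is why the model is practical on phones and single-board computers.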

About Gemma 3

Gemma 3 is a family of open, lightweight models from Google. It introduces multimodal image and text processing, supports over 140 languages, and features extended context windows up to 128K tokens. Models are available in multiple parameter sizes for diverse applications.



Evaluation Benchmarks

No evaluation benchmarks for Gemma 3 270M available.

