
Gemma 3 12B

Parameters: 12B
Context Length: 128K
Modality: Multimodal
Architecture: Dense
License: Gemma Terms of Use
Release Date: 12 Mar 2025
Knowledge Cutoff: Aug 2024

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 3072
Number of Layers: 42
Attention Heads: 48
Key-Value Heads: 12
Activation Function: -
Normalization: RMS Normalization
Position Embedding: RoPE


Gemma 3 12B

Gemma 3 12B is a 12-billion-parameter multimodal model developed by Google that processes both text and image inputs and generates text outputs. The model is part of the Gemma family, which builds on the research and technology behind the Gemini series of models. Its architecture is a decoder-only transformer with Grouped-Query Attention (GQA), using a repeating pattern of five local sliding-window self-attention layers for every one global self-attention layer. This layout bounds KV-cache memory growth, improving efficiency on long sequences. Positions are encoded with Rotary Position Embeddings (RoPE), adapted with an increased base frequency to support the extended context window.
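The 5:1 local-to-global interleaving described above can be sketched as follows. The ratio and the 42-layer depth come from this card; the exact placement of the global layer (every sixth position) is an assumption for illustration.

```python
# Sketch of Gemma 3's attention-layer interleaving: five local
# sliding-window layers for every one global layer. The 5:1 ratio and
# 42-layer depth are taken from this card; placing the global layer at
# every sixth position is an assumption, not the confirmed ordering.
def attention_pattern(num_layers: int, local_per_global: int = 5) -> list[str]:
    pattern = []
    for i in range(num_layers):
        # Every (local_per_global + 1)-th layer is global; the rest are
        # local sliding-window layers with a bounded KV cache.
        if (i + 1) % (local_per_global + 1) == 0:
            pattern.append("global")
        else:
            pattern.append("local")
    return pattern

layers = attention_pattern(42)
print(layers.count("local"), layers.count("global"))  # 35 local, 7 global
```

Only the 7 global layers must cache keys and values for the full context; the 35 local layers cap their cache at the sliding-window size, which is where the KV-memory savings come from.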

Optimized for deployment across a range of hardware configurations, Gemma 3 12B can operate efficiently on single-GPU systems, workstations, laptops, and even mobile devices. Its multimodal capability is achieved through the integration of a tailored SigLIP vision encoder, which converts images into a sequence of soft tokens for processing. The model supports an expansive context length of 128,000 tokens, enabling it to process substantial amounts of information, including extensive documents and multiple images, within a single prompt. Furthermore, it offers broad multilingual support, encompassing over 140 languages.
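A multimodal prompt combining an image and a question might be structured like this. The message layout follows the chat format commonly used by recent Hugging Face processors; the exact template for Gemma 3, and the URL shown, are illustrative assumptions.

```python
# Hedged sketch of a multimodal prompt for Gemma 3 12B, in the
# chat-message shape used by recent Hugging Face processors. The exact
# template is an assumption; consult the official model card. The image
# is converted into soft tokens by the SigLIP encoder before it reaches
# the language model.
message = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/invoice.png"},  # hypothetical URL
            {"type": "text", "text": "Extract the total amount from this invoice."},
        ],
    }
]

# In practice, a list like this would be handed to the model's processor
# (e.g. via a chat-template method) to produce input tensors.
print(message[0]["role"])  # user
```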

Typical use cases for Gemma 3 12B include advanced natural language understanding and generation tasks such as question answering, comprehensive summarization, and intricate reasoning. Its multimodal capabilities extend to image interpretation, object identification within visual data, and the extraction of textual information from images, making it suitable for a diverse set of vision-language applications. The model also supports function calling, facilitating the development of natural language interfaces for programmatic interactions.
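The function-calling flow mentioned above can be sketched as a round trip: the application advertises a tool, the model replies with a structured call, and the application dispatches it. The tool schema, the JSON reply shape, and the `get_weather` tool here are all illustrative assumptions; Gemma 3's actual format is defined by its prompt template.

```python
import json

# Hedged sketch of a function-calling round trip. The tool schema and
# the JSON shape of the model's reply are illustrative assumptions, not
# Gemma 3's confirmed output format.
tools = [{
    "name": "get_weather",               # hypothetical tool
    "description": "Look up current weather for a city.",
    "parameters": {"city": {"type": "string"}},
}]

# Pretend the model produced this text in response to
# "What's the weather in Zurich?" plus the tool list above.
model_output = '{"name": "get_weather", "arguments": {"city": "Zurich"}}'

call = json.loads(model_output)
result = None
if call["name"] == "get_weather":
    # Dispatch to the real implementation here; stubbed for the sketch.
    result = {"city": call["arguments"]["city"], "temp_c": 18}

print(result["city"])  # Zurich
```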

About Gemma 3

Gemma 3 is a family of open, lightweight models from Google. It introduces multimodal image and text processing, supports over 140 languages, and features extended context windows up to 128K tokens. Models are available in multiple parameter sizes for diverse applications.



Evaluation Benchmarks

Ranking is for local LLMs. Overall rank: #43

Category                 Benchmark           Score   Rank
Agentic Coding           LiveBench Agentic   0.02    #19
Agentic Coding           -                   0.48    #19
Professional Knowledge   MMLU Pro            0.61    #19
Professional Knowledge   -                   0.42    #22
Graduate-Level QA        GPQA                0.41    #24
Graduate-Level QA        -                   0.29    #25
Graduate-Level QA        -                   0.47    #25
General Knowledge        MMLU                0.41    #30

Rankings

Overall Rank: #43
Coding Rank: #31

GPU Requirements

VRAM requirements depend on the chosen weight quantization method and the context size, which is configurable from 1K up to 128K tokens.
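As a rough cross-check, the usual back-of-the-envelope estimate is weights plus KV cache, ignoring activations and framework overhead. The layer and head counts below come from this card; the head dimension (hidden size / attention heads = 3072 / 48 = 64) is an assumption, and because Gemma 3's local layers cap their KV cache at the sliding window, this formula overestimates the cache term.

```python
# Back-of-the-envelope VRAM estimate for Gemma 3 12B: weights + KV
# cache, ignoring activations and runtime overhead. This is a common
# rule of thumb, not this site's calculator. head_dim is assumed to be
# hidden_size / attention_heads (3072 / 48 = 64), which may differ from
# the real config, and the cache term treats every layer as global,
# so it overestimates for Gemma 3's mostly-local attention.
def estimate_vram_gb(
    params_b: float = 12.0,         # parameter count in billions (from this card)
    bytes_per_weight: float = 0.5,  # 0.5 for 4-bit, 1 for int8, 2 for fp16
    num_layers: int = 42,
    kv_heads: int = 12,
    head_dim: int = 64,             # assumed: 3072 hidden / 48 heads
    context: int = 128_000,
    kv_bytes: int = 2,              # fp16 KV cache
) -> float:
    weights = params_b * 1e9 * bytes_per_weight
    # KV cache: keys + values for every layer and KV head over the context.
    kv_cache = 2 * num_layers * kv_heads * head_dim * context * kv_bytes
    return (weights + kv_cache) / 1e9

print(round(estimate_vram_gb(), 1))  # ~22.5 GB at 4-bit with a full 128K context
```

Shrinking the context or quantizing the KV cache lowers the second term; switching to fp16 weights roughly quadruples the first.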