Parameters
-
Context Length
1,048,576 tokens
Modality
Multimodal
Architecture
Dense
License
Proprietary
Release Date
25 Sept 2025
Knowledge Cutoff
-
Attention Structure
Multi-Head Attention
Hidden Dimension Size
-
Number of Layers
-
Attention Heads
-
Key-Value Heads
-
Activation Function
-
Normalization
-
Position Embedding
Absolute Position Embedding
Gemini 2.5 Flash with max thinking mode, balancing performance and efficiency. Strong coding (67.50 LiveBench Coding) and mathematics (75.35 LiveBench Mathematics) scores. Offers thinking transparency at Flash-level latency. The September 2025 version improves on earlier releases. Suited to applications that need explainable reasoning without Pro-level computational cost.
Google's advanced multimodal models with native understanding of text, images, audio, and video. Features massive context windows up to 2.1M tokens, max thinking modes for complex reasoning, and optimized variants for different performance/cost tradeoffs. Includes Pro, Flash, and Flash Lite variants with configurable thinking capabilities for transparent reasoning.
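The configurable thinking mentioned above is exposed through the Gemini API's `generateContent` request. The sketch below is a minimal, hedged illustration of that payload shape: the `thinkingConfig` field names follow the publicly documented schema, but the budget value and prompt are illustrative, not a recommendation.

```python
import json

def build_request(prompt: str, thinking_budget: int = 1024) -> str:
    """Build an illustrative generateContent request body for
    gemini-2.5-flash with thinking enabled (field names per the
    public Gemini API schema; values here are examples only)."""
    payload = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # thinkingBudget caps tokens spent on internal reasoning;
            # includeThoughts returns thought summaries for transparency.
            "thinkingConfig": {
                "thinkingBudget": thinking_budget,
                "includeThoughts": True,
            },
        },
    }
    return json.dumps(payload)

if __name__ == "__main__":
    print(build_request("Summarize the attention mechanism."))
```

Setting a larger budget trades latency for deeper reasoning; a budget of 0 disables thinking entirely, which is how the Flash-level latency mode described above is obtained.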
| Category | Benchmark | Score | Rank |
|---|---|---|---|
| Data Analysis | LiveBench Data Analysis | 0.73 | ⭐ 7 |
| StackUnseen | ProLLM Stack Unseen | 0.74 | 8 |
| Coding | Aider Coding | 0.55 | 21 |
| Mathematics | LiveBench Mathematics | 0.75 | 23 |
| Reasoning | LiveBench Reasoning | 0.51 | 34 |
| Agentic Coding | LiveBench Agentic | 0.23 | 34 |
| Coding | LiveBench Coding | 0.68 | 35 |
Overall Rank
#22
Coding Rank
#11