
Gemma 4 (E4B)

Dense decoder architecture with grouped-query attention (GQA), QK normalization (QK-Norm), and sliding-window attention (SWA).

Gemma 4 (E4B) decoder block architecture:
Attention: GQA with QK-Norm and sliding-window attention (SWA)
Normalization: RMSNorm
FFN: SwiGLU
Position encoding: RoPE
Scale: 8B parameters, 128K context, 32 layers
Decoder type: Dense
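The attention stack above can be sketched in a few lines. This is an illustrative toy, not Gemma 4's actual implementation: head counts, head dimension, and window size below are made-up values, and the learned gains of QK-Norm are omitted.

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # RMS normalization (learned gain omitted for brevity)
    return x / np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)

def gqa_swa_attention(q, k, v, window):
    # q: (n_q_heads, T, d); k, v: (n_kv_heads, T, d) with n_kv_heads < n_q_heads
    n_q_heads, T, d = q.shape
    rep = n_q_heads // k.shape[0]          # GQA: query heads per shared KV head
    k = np.repeat(k, rep, axis=0)
    v = np.repeat(v, rep, axis=0)
    # QK-Norm: normalize queries and keys before the dot product
    q, k = rms_norm(q), rms_norm(k)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    # Causal sliding-window mask: token i attends only to [i-window+1, i]
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    mask = (j <= i) & (j > i - window)
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# Toy usage: 4 query heads sharing 2 KV heads, window of 3 tokens
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8, 16))
k = rng.normal(size=(2, 8, 16))
v = rng.normal(size=(2, 8, 16))
out = gqa_swa_attention(q, k, v, window=3)
```

GQA shrinks the KV cache (here 2 KV heads serve 4 query heads), while the sliding window bounds per-token attention cost regardless of sequence length.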

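The RMSNorm and SwiGLU components named above combine into the block's feed-forward sublayer. A minimal sketch, assuming the common pre-norm residual arrangement (weight shapes here are toy values, not Gemma 4's):

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # RMSNorm: scale by root-mean-square (learned gain omitted)
    return x / np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)

def swiglu_ffn(x, w_gate, w_up, w_down):
    # SwiGLU: silu(x @ w_gate) gates (x @ w_up), then project back down
    gate = x @ w_gate
    silu = gate / (1.0 + np.exp(-gate))    # SiLU (swish) activation
    return (silu * (x @ w_up)) @ w_down

def ffn_sublayer(x, w_gate, w_up, w_down):
    # Pre-norm residual sublayer (assumption; standard in modern decoders)
    return x + swiglu_ffn(rms_norm(x), w_gate, w_up, w_down)

# Toy usage: model dim 8, hidden dim 16
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
w_gate = rng.normal(size=(8, 16))
w_up = rng.normal(size=(8, 16))
w_down = rng.normal(size=(16, 8))
y = ffn_sublayer(x, w_gate, w_up, w_down)
```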

Architecture Specifications

Parameters: 8B
Context Window: 128K
Decoder Type: Dense
Attention: GQA + QK-Norm + SWA
Vocabulary Size: 262K
Release Date: 2026-04
Category: Efficient & Small
Organization: Google
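The RoPE position encoding behind the 128K context window can be sketched as follows; dimensions and the base frequency of 10000 are assumptions (the common default), not confirmed Gemma 4 values.

```python
import numpy as np

def rope(x, base=10000.0):
    # Rotary position embedding: rotate each pair (x[2i], x[2i+1])
    # by a position-dependent angle; toy sketch, not Gemma 4's exact config.
    T, d = x.shape
    inv_freq = base ** (-np.arange(0, d, 2) / d)         # (d/2,)
    angles = np.arange(T)[:, None] * inv_freq[None, :]   # (T, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Toy usage: 6 positions, head dimension 8
rng = np.random.default_rng(2)
x = rng.normal(size=(6, 8))
y = rope(x)
```

Because each pair is rotated (not scaled), vector norms are preserved and query-key dot products depend only on relative position.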

Key Features

Effective 4.5B parameters
Distilled
Efficient