AiPhreaks

Gemma 4: Byte for byte, the most capable open models

By Jakub Antkiewicz

April 3, 2026

Google has released Gemma 4, its latest family of open models, designed to deliver high-end reasoning and agentic capabilities on hardware ranging from mobile devices to developer workstations. The release is notable for its focus on intelligence-per-parameter, aiming to provide strong performance without requiring massive computational resources. A key aspect of this launch is the adoption of a fully permissive Apache 2.0 license, directly addressing community feedback and signaling a more competitive posture in the open model ecosystem.

The Gemma 4 family includes four sizes: two 'Effective' models for edge devices (E2B, E4B) and two larger models for workstations (26B MoE, 31B Dense). According to the company, its 31B model currently ranks as the #3 open model on the Arena AI text leaderboard. The models support multimodal inputs (vision, plus audio for the edge variants), function-calling, code generation, and context windows up to 256K tokens. The larger unquantized models are designed to fit on a single 80GB NVIDIA H100 GPU, with quantized versions available for consumer-grade hardware.
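The claim that the unquantized 31B model fits on a single 80GB H100 can be sanity-checked with a weight-only memory estimate. The sketch below assumes 16-bit (bf16) weights for the unquantized case and 4-bit weights for the quantized case; it deliberately ignores KV cache, activations, and framework overhead, so it is a lower bound rather than a deployment figure.

```python
def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight-only memory footprint in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

# 31B dense model at bf16 (16 bits/param): ~62 GB, under the H100's 80 GB.
print(weight_memory_gb(31e9, 16))  # 62.0

# The same model quantized to 4 bits/param: ~15.5 GB, in range for
# consumer-grade GPUs (exact size varies with the quantization scheme).
print(weight_memory_gb(31e9, 4))   # 15.5
```

Real-world usage adds the KV cache, which grows with context length, so long 256K-token contexts would meaningfully raise these numbers.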

By moving to the Apache 2.0 license, Google removes a significant barrier to commercial adoption that existed with previous Gemma releases, placing it in more direct competition with other permissively licensed models from companies like Meta and Mistral. This decision, combined with broad day-one support from platforms like Hugging Face, NVIDIA, and Ollama, is intended to accelerate developer adoption. The strategy appears focused on capturing mindshare by offering a powerful, accessible, and now commercially flexible foundation for building AI applications.

Google's shift to an Apache 2.0 license for Gemma 4 is a calculated strategic move. It is less about the technical specifications than about removing friction: competing directly for the developer loyalty and commercial deployments that have largely coalesced around other permissively licensed model families. It signals that Google now sees the open ecosystem not merely as a research outlet but as a critical front in the broader AI platform war.