Google released Gemma 4 on April 2, 2026 under Apache 2.0, its first fully permissive open model. The 31B dense model scores 89.2% on AIME and 80.0% on LiveCodeBench. The 26B MoE variant delivers 97% of that performance with only ~4B active parameters. Here's the full breakdown.
A CMS misconfiguration exposed ~3,000 unpublished Anthropic files, revealing Claude Mythos: a new model tier above Opus, codenamed "Capybara," with "dramatically higher scores" on coding and cybersecurity. It scores 77.8% on SWE-Bench Pro, 20 points above Claude Opus 4.6. Here's what the leak revealed.
Xiaomi's MiMo-V2-Pro launched anonymously as "Hunter Alpha" on OpenRouter, and the AI community assumed it was DeepSeek V4. It wasn't. It's a 1T-parameter MoE model with 42B active parameters that approaches Claude Opus 4.6 on coding at one-fifth the cost. Full review of both Pro and Flash variants.