Google Cloud's flagship annual conference hit full stride on Day 2 (April 23, 2026) in Las Vegas, delivering a wave of infrastructure and AI platform announcements that signal Google's deepest push yet into enterprise agentic AI. From next-gen TPU silicon to a brand-new data architecture, here's everything that dropped.

## The Big Picture: Google Goes Full-Stack on Agentic AI
If Day 1 set the stage with Gemini 3.1 and broad AI agent previews, Day 2 filled in the infrastructure backbone that makes those agents actually work at scale. Google's message is clear: it wants to own the full stack — chips, cloud, data, and agent orchestration — so enterprises don't have to stitch together multiple vendors.
Sundar Pichai appeared at the event to reinforce the company's commitment: Google Cloud is betting its enterprise future on agentic AI, and today's announcements are the technical proof.
::alert info
Google Cloud Next 2026 runs April 22–24 in Las Vegas. Day 3 announcements expected to focus on security and developer tools.
::end
## New Silicon: TPU 8t and TPU 8i
The headline hardware news: Google unveiled its **eighth-generation TPUs** — two purpose-built chips designed for the distinct demands of training and inference in agentic workloads.
The TPU 8t is Google's most powerful training chip ever. Built around breakthrough Inter-Chip Interconnect (ICI) technology, it can scale to **9,600 TPUs** in a single superpod — all sharing **2 petabytes of high-bandwidth memory**. That's an unprecedented pool of shared memory that lets the largest foundation models train without constant data shuffling.
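Taking the announced figures at face value, a quick back-of-the-envelope calculation shows what that shared pool works out to per chip. The totals come from the announcement; the per-chip split and the use of decimal (SI) units are our own assumptions:

```python
# Per-TPU share of the superpod's pooled HBM.
# Totals are from the announcement; decimal (SI) units assumed.
POOLED_HBM_BYTES = 2 * 10**15   # 2 PB of shared high-bandwidth memory
TPUS_PER_SUPERPOD = 9_600       # TPUs in a single superpod

per_chip_gb = POOLED_HBM_BYTES / TPUS_PER_SUPERPOD / 10**9
print(f"~{per_chip_gb:.0f} GB of HBM per TPU")  # ~208 GB per chip
```

That roughly 208 GB-per-chip share is what the ICI fabric lets any TPU in the pod address without shuffling data through host memory.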
Performance numbers:
- **3× the processing power** of the previous Ironwood generation
- **2× better performance per watt** — critical for cost efficiency at scale
The TPU 8i takes a different approach, optimized for the latency-sensitive demands of serving millions of concurrent AI agents. It uses a new **Boardfly topology** to directly interconnect 1,152 TPUs in a single pod, and packs **3× more on-chip SRAM** than its predecessor — enough to host large KV caches entirely on-silicon, cutting memory latency.
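To see why on-chip SRAM capacity matters for agent serving, consider a rough KV-cache size estimate. Google has not published the 8i's SRAM capacity or any model dimensions; every parameter below is a hypothetical illustration value, chosen only to show the shape of the formula:

```python
# Rough KV-cache footprint for one request: 2 tensors (keys and values)
# per transformer layer, each of shape (kv_heads, seq_len, head_dim).
# All model dimensions here are hypothetical illustration values.
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

size = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                      seq_len=8192, bytes_per_elem=2)  # bf16 elements
print(f"{size / 2**30:.1f} GiB per 8K-token request")  # 2.5 GiB
```

Even a single long-context request can run to gigabytes of KV cache, so keeping more of it in SRAM rather than HBM is exactly the kind of latency win the 8i is pitched at.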
For full coverage, visit https://www.linos.ai/technology/google-cloud-next-2026-day-2-announcements/