AWS What's Next 2026: Amazon Quick and OpenAI Models on Bedrock
Amazon's What's Next with AWS event on April 28, 2026, delivered two announcements that significantly reposition AWS in AI-driven enterprise tooling. The first is Amazon Quick, an AI assistant that connects across an organisation's applications, learns individual users' preferences over time, and takes autonomous action on their behalf. Unlike point-in-time AI query tools, Quick maintains state and context across application boundaries, making it closer to an AI agent than to a chatbot. A desktop app and Free/Plus pricing tiers extend access to individual users who don't operate within an AWS account.
The second major announcement is the addition of OpenAI frontier models to Amazon Bedrock. GPT-5.5 and GPT-5.4 are now available in limited preview, with general availability expected within weeks. This partnership, underpinned by a reported $50 billion Amazon investment in OpenAI, makes AWS the exclusive third-party cloud distributor for OpenAI frontier models. For development teams, the practical implication is that you can now access OpenAI's latest models through the same Bedrock APIs, IAM roles, and VPC configurations you already use for Anthropic Claude and other Bedrock models, rather than managing a separate OpenAI API key and billing relationship.
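In practice, this means the OpenAI models are reachable through Bedrock's existing Converse API with only a model-ID change. A minimal sketch with boto3, assuming a hypothetical model identifier (the real one will be listed in the Bedrock console at general availability):

```python
def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Kwargs for Bedrock's Converse API; the request shape is identical
    across Bedrock-hosted models, which is what makes the swap trivial."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def invoke(model_id: str, prompt: str) -> str:
    # Authenticates via the standard IAM credential chain -- no separate
    # OpenAI API key or billing relationship involved.
    import boto3  # deferred so the request builder stays dependency-free
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]

# Hypothetical model ID for illustration only:
# invoke("openai.gpt-5.5-v1:0", "Summarise our Q3 incident reports.")
```

Because the request shape is model-agnostic, switching between Claude and the OpenAI models is a one-line configuration change rather than an SDK migration.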
OpenAI's Codex coding agent is also available on Bedrock, which means you can authenticate with AWS credentials, route Codex inference through Bedrock's infrastructure, and apply Codex usage toward AWS cloud spend commitments. Bedrock Managed Agents—AWS's layer for building stateful, multi-step agentic workflows—is now GA alongside these model additions, giving teams a single AWS-native surface for orchestrating agents across any combination of supported models.
Read more — AWS Blog
Google Cloud Next 2026: Agent Development Kit, Agent Studio, and 8th-Gen TPUs
Google Cloud Next 2026, held April 22–24, centred almost entirely on the agentic AI developer experience. The Agent Development Kit (ADK) is the flagship announcement: a graph-based framework that lets developers compose networks of sub-agents with explicitly defined logic for how they collaborate, hand off tasks, and handle failures. The graph model is a deliberate departure from free-form orchestration patterns, where agent coordination logic tends to become implicit and hard to debug. ADK also integrates with Secret Manager (now GA) to prevent credentials from leaking into agent contexts and to mitigate prompt injection risks at the infrastructure layer.
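The graph model described above can be sketched in plain Python. This is an illustrative toy, not ADK's actual API: the class names, `run`/`route` callables, and hop budget are all invented for the example, but they show why explicit hand-off logic is easier to debug than free-form orchestration.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class AgentNode:
    name: str
    run: Callable[[str], str]                  # the sub-agent's work on its input
    route: Callable[[str], Optional[str]]      # explicit hand-off: next node or None

@dataclass
class AgentGraph:
    nodes: dict = field(default_factory=dict)

    def add(self, node: AgentNode) -> None:
        self.nodes[node.name] = node

    def execute(self, start: str, task: str, max_hops: int = 10) -> str:
        """Walk the graph from `start`, handing the result between sub-agents
        until a node routes to None; a hop budget makes loops fail loudly."""
        current, result = start, task
        for _ in range(max_hops):
            node = self.nodes[current]
            result = node.run(result)
            nxt = node.route(result)
            if nxt is None:
                return result
            current = nxt
        raise RuntimeError("hand-off loop exceeded max_hops")

# Two sub-agents: a researcher that gathers context, then a writer.
graph = AgentGraph()
graph.add(AgentNode("research", lambda t: f"notes({t})", lambda _: "write"))
graph.add(AgentNode("write", lambda t: f"draft from {t}", lambda _: None))
print(graph.execute("research", "Q3 report"))  # draft from notes(Q3 report)
```

Because every edge and failure path is declared in code, the coordination logic stays a reviewable artefact rather than emergent behaviour buried in prompts.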
Agent Studio provides a no-code interface for building and testing agents using natural language, then exporting the resulting agent definition to ADK for full code customisation. Agent Runtime, the hosting layer, delivers sub-second cold starts for deployed agents, addressing one of the most common complaints about serverless agent hosting. Together these three tools form a development-to-deployment pipeline that keeps agents in code-defined, reviewable artefacts rather than in visual-only builder tools.
The infrastructure announcement of note is the 8th-generation TPU family: TPU 8t is optimised for training workloads and TPU 8i for near-zero latency inference, together offering 3x higher compute performance and 80% better performance-per-dollar for agentic and reasoning workloads compared to the previous generation. Native PyTorch support via TorchTPU is now in preview, which is significant for teams that have avoided TPUs due to the historical requirement to use JAX or TensorFlow.
For teams integrating with existing enterprise data, Google announced Bring Your Own MCP (BYO-MCP) support in the Gemini Enterprise Agent Platform, a Cloud Storage MCP server, and a Workspace MCP Server in preview. These additions let agents consume Google Workspace data—Docs, Sheets, Drive—through the Model Context Protocol without writing custom connector code.
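Under the hood, MCP is JSON-RPC 2.0, so "no custom connector code" means agents exchange standardised messages with the server. A minimal sketch of a `tools/call` request, assuming a hypothetical `sheets_read_range` tool name (the Workspace MCP Server's actual tool catalogue is discovered at runtime via `tools/list`):

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC request IDs must be unique per connection

def mcp_tool_call(tool: str, arguments: dict) -> str:
    """Serialise an MCP tools/call request per the JSON-RPC 2.0 framing
    the Model Context Protocol uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only.
msg = mcp_tool_call("sheets_read_range",
                    {"spreadsheetId": "abc123", "range": "A1:C10"})
```

The same framing works against the Cloud Storage MCP server or any BYO-MCP endpoint, which is the point: one protocol, many data sources.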
Read more — Google Cloud Blog