Claude Code 2.1.139: Agent View, /goal Command, and Session Management
Claude Code version 2.1.139, released on May 11, 2026, is one of the more substantial feature releases in recent weeks, landing two capabilities that directly address persistent feedback about managing long-running agentic sessions.
Agent View is a new panel that surfaces every Claude Code session — running, blocked waiting for input, or completed — in a single scrollable list. Before this feature, developers juggling multiple parallel tasks had to rely on terminal windows or process management to track what Claude was doing and where input was needed. Agent View makes the full session inventory visible at a glance, with status indicators showing which sessions require attention and which are running autonomously.
The /goal command lets developers set a completion condition in plain language rather than repeatedly checking back or issuing follow-up prompts. Once a goal is set, Claude keeps working across turns — spawning tools, writing files, running commands — until the stated condition is met or it determines the condition is unachievable. This is meaningfully different from simply leaving a long prompt: the goal persists across the conversation lifecycle, allowing Claude to respond to intermediate results and adjust its approach while remaining focused on the end state.
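The pattern is easier to see as code than as prose. The sketch below is purely illustrative: Claude Code's internals are not published, and every name in it (pursueGoal, runTurn, checkGoal, Turn) is hypothetical. It only shows the general shape of a standing goal that each turn is evaluated against, rather than a single long prompt that ends after one pass.

```typescript
// Illustrative sketch of a goal-persistence loop; not Claude Code's actual
// implementation. All names here are hypothetical.
type GoalStatus = "met" | "unmet" | "unachievable";

interface Turn {
  summary: string; // what the agent did this turn: tool calls, file edits, command output
}

// Hypothetical: run one agent turn against the standing goal.
async function runTurn(goal: string, history: Turn[]): Promise<Turn> {
  // ...call the model, execute any requested tools, collect results...
  return { summary: "ran tests, 2 failures remaining" };
}

// Hypothetical: ask the model whether the stated condition is now satisfied,
// still open, or impossible to reach.
async function checkGoal(goal: string, history: Turn[]): Promise<GoalStatus> {
  return "unmet";
}

// The condition set by something like /goal stays in scope for every turn,
// so the agent can react to intermediate results while steering toward the
// same end state.
async function pursueGoal(goal: string, maxTurns = 50): Promise<GoalStatus> {
  const history: Turn[] = [];
  for (let i = 0; i < maxTurns; i++) {
    history.push(await runTurn(goal, history));
    const status = await checkGoal(goal, history);
    if (status !== "unmet") return status; // goal met, or judged unachievable
  }
  return "unmet"; // turn budget exhausted before the condition was met
}
```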
Additional improvements in 2.1.139 include a /scroll-speed command for tuning mouse-wheel scroll speed in terminal environments, enhanced plugin details showing component inventory and per-component token costs, and transcript navigation shortcuts for quickly jumping between key moments in a session. The follow-up 2.1.140 release on May 13 tightened reliability, fixing a silent hang in /goal when specific hook configurations were active and resolving MCP settings persistence issues.
Read more — Releasebot
OpenAI Launches $4B Deployment Company for Enterprise AI Adoption
On May 11, 2026, OpenAI launched the OpenAI Deployment Company (internally called "DeployCo"), a standalone business unit backed by more than $4 billion in initial investment from 19 global investment firms, consultancies, and system integrators.
The company's stated purpose is to help enterprises move from AI experiments to production systems by embedding specialist engineers, called Forward Deployed Engineers (FDEs), directly inside client organizations. FDEs function as in-house AI engineering teams, responsible for integrating, customizing, and operationalizing OpenAI models within existing enterprise workflows. This model is drawn from Palantir's playbook and represents a significant departure from the self-serve API model that has been OpenAI's primary go-to-market motion.
The venture acquired AI consulting firm Tomoro at launch, immediately bringing approximately 150 engineers and AI deployment specialists onto the team. TPG leads the investor group, with Advent, Bain Capital, Brookfield, Goldman Sachs, SoftBank, and Warburg Pincus as co-founding partners. The timing is notable: Anthropic announced a $1.5 billion enterprise joint venture just days before, signalling that frontier AI labs are competing intensely for enterprise implementation services alongside model capability.
For developers, the practical implication is that large enterprise AI projects are increasingly being scoped and delivered by professional services arms closely aligned with model providers — which will likely shape API pricing, feature prioritization, and support tier availability for enterprise customers.
Read more — HPCwire / AIwire
Google DeepMind Reimagines the Mouse Pointer for the AI Era
On May 12, 2026, Google DeepMind published research on an AI-powered evolution of the computer mouse pointer — a system designed to understand what a user is pointing at and why, enabling natural language commands without switching between applications or writing detailed prompts.
The core premise is that pointing is already a form of grounding: when a user hovers over a paragraph or image, they are implicitly signalling interest in that content. DeepMind's system captures visual and semantic context around the pointer location and combines it with brief voice or text commands to interpret user intent. The result is an interaction model where a user can point to a chart and say "make this bigger" or point to a photo and ask "book this location" without opening a separate interface.
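In a browser, the grounding step maps onto APIs that already exist. The sketch below is an assumption-laden illustration, not DeepMind's implementation: document.elementFromPoint is a standard DOM API, while the PointerContext shape and interpretIntent are hypothetical stand-ins for the model call.

```typescript
// Sketch of grounding a command like "make this bigger" in whatever the
// pointer is over. document.elementFromPoint is a standard DOM API; the
// PointerContext shape and interpretIntent are hypothetical.
interface PointerContext {
  tagName: string;
  text: string;        // nearby text the model can use to resolve "this"
  imageUrl?: string;   // set when the user is pointing at an image
}

function captureContext(x: number, y: number): PointerContext | null {
  const el = document.elementFromPoint(x, y);
  if (!el) return null;
  return {
    tagName: el.tagName.toLowerCase(),
    text: (el.textContent ?? "").trim().slice(0, 500),
    imageUrl: el instanceof HTMLImageElement ? el.src : undefined,
  };
}

// Hypothetical stand-in for the model call that turns (command, context)
// into an action the host application can execute.
async function interpretIntent(
  command: string,
  context: PointerContext
): Promise<{ action: string; target: string }> {
  return { action: "resize", target: context.tagName };
}

// Whatever the pointer was last over becomes the grounding for the next
// short voice or text command.
async function onCommand(command: string, x: number, y: number): Promise<void> {
  const context = captureContext(x, y);
  if (!context) return;
  const { action, target } = await interpretIntent(command, context);
  console.log(`intent: ${action} on <${target}>`);
}
```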
The research identifies four interaction principles guiding the design: maintaining workflow continuity so users never leave their current application, capturing visual context automatically, leveraging natural language shortcuts rather than formal commands, and converting pixels into interactive entities with semantic meaning. Practical capabilities demonstrated include converting photos into interactive to-do lists, turning video frames into booking links, and comparing webpage sections on request.
DeepMind is integrating the technology into Chrome and a new Googlebook laptop experience. For developers building web applications and tools, the implications are significant: if the pointer becomes a semantic interaction surface, applications may need to expose structural metadata more explicitly to support the AI layer's interpretation — similar to how accessibility markup enabled screen readers.
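What exposing structural metadata could look like in practice is already well-trodden territory on the web. A minimal sketch, assuming a pointer-level AI reads standard ARIA and schema.org annotations; DeepMind has not published the metadata contract its system consumes, and the data-entity-* attribute names here are invented purely for illustration.

```typescript
// Sketch of exposing semantic structure a pointer-level AI could ground
// against. ARIA roles and schema.org microdata are real web standards; the
// data-entity-* attributes are hypothetical names used only for illustration.
function annotateChart(el: HTMLElement): void {
  el.setAttribute("role", "img");                                  // ARIA: this element is a graphic
  el.setAttribute("aria-label", "Quarterly revenue, 2024-2026");   // description readable by assistive tech or an AI layer
  el.setAttribute("data-entity-type", "chart");                    // hypothetical semantic hint
  el.setAttribute("data-entity-actions", "resize,export");         // hypothetical affordances
}

function annotatePlace(el: HTMLElement, name: string, mapsUrl: string): void {
  // schema.org microdata marks the element as a Place an agent could act on
  // (e.g. "book this location").
  el.setAttribute("itemscope", "");
  el.setAttribute("itemtype", "https://schema.org/Place");
  el.setAttribute("data-entity-name", name);      // hypothetical
  el.setAttribute("data-entity-link", mapsUrl);   // hypothetical
}
```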
Read more — Google DeepMind