Claude Code 2.1.141–2.1.143: Plugin Enforcement, Opus 4.7 Fast Mode, and Session Rewind
Three Claude Code releases landed between May 14 and 16, collectively tightening plugin management, expanding agent configuration depth, and introducing a new session control for developers who want to roll back context mid-session.
The flagship change in 2.1.142 is the promotion of Opus 4.7 to the default model for Fast mode. Previously Fast mode used Opus 4.6; the switch gives developers the latest Opus generation's improved reasoning at the same "faster output" pricing tier. The release also adds an extensive set of configuration flags for the claude agents command, including directory scoping (--cwd), MCP server configuration (--mcp-config), and output format controls, giving teams that deploy Claude in automated pipelines precise control over agent behaviour without touching environment variables.
Version 2.1.143 introduces plugin dependency enforcement: when a plugin declares dependencies on other plugins or tools, Claude Code now verifies those dependencies are present and active before loading. This prevents silent failures in complex plugin chains where a downstream capability was assumed but not installed. The Plugin Marketplace gains a projected context cost display so developers can evaluate the token footprint of a plugin before enabling it. SKILL.md support at the root level of plugins allows plugin authors to embed their own skill instructions that Claude loads alongside the plugin, enabling richer plugin-native behaviours.
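Dependency declarations of this sort live in the plugin's manifest. A hypothetical sketch of what such a manifest might look like (the field names and schema here are illustrative guesses, not the documented format; the SKILL.md file itself sits separately at the plugin root):

```json
{
  "name": "db-migrate-helper",
  "version": "1.2.0",
  "dependencies": {
    "plugins": ["sql-formatter"],
    "tools": ["psql"]
  }
}
```

With enforcement active, a manifest like this would cause the plugin to fail fast at load time if sql-formatter were missing, rather than erroring midway through a task.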
The 2.1.141 update rounds out the set with a "Summarize up to here" rewind option in the session context menu. Rather than clearing the entire context, this lets developers collapse older conversation history into a summary at any point — a workflow improvement for long sessions where early context is eating into the available window. Background shell stability and LSP server discovery fixes also ship in this batch, addressing intermittent connection drops on macOS.
Read more — Releasebot / Anthropic
Amazon Q Developer Is Retiring: AWS Introduces Kiro as Its Replacement
AWS has announced the end-of-support timeline for the Amazon Q Developer IDE plugins, marking the retirement of its current AI coding assistant in favour of a new agentic development environment called Kiro. The transition is underway: new signups to Q Developer IDE plugins were blocked on May 15, with full end of support for existing users scheduled for April 30, 2027.
Kiro is positioned as a significant architectural rethink rather than an incremental upgrade. Where Amazon Q Developer operated primarily as a chat-based coding companion, Kiro introduces spec-driven development — a workflow where developers define feature specifications in structured documents, and Kiro uses those specs to plan, implement, and validate changes across the codebase autonomously. Project-level steering files let teams encode conventions and constraints that Kiro respects across all agent actions, addressing a common friction point with AI coding tools that ignore project-specific rules.
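Steering files are plain documents that travel with the repository. A hypothetical example (the path and conventions shown are illustrative, not confirmed Kiro syntax):

```markdown
# .kiro/steering/conventions.md (hypothetical path)

- All new endpoints go through the service layer; no direct database access in handlers.
- Every behavioural change ships with tests under tests/.
- Never modify files under migrations/ unless the spec explicitly calls for it.
```

The idea is that every agent action, whether planning or implementing, is checked against these constraints rather than relying on per-prompt reminders.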
The replacement also introduces composable workflow extensions, allowing teams to build custom agent pipelines that integrate with internal tooling and CI/CD systems. During the transition window through April 2027, Amazon Q Developer will continue to function within first-party AWS experiences — the Management Console, the mobile app, and the official documentation assistant — but IDE plugin users should begin evaluating Kiro for their primary coding workflows. AWS has committed to critical bug fixes during the transition period but will not ship new features to Q Developer IDE plugins.
Read more — AWS
DeepMind AlphaEvolve: Gemini-Powered Algorithm Discovery at Production Scale
Google DeepMind has published a detailed impact report for AlphaEvolve, its Gemini-powered coding agent specialised in discovering novel algorithms through evolutionary search. The results cover deployments across Google's own infrastructure and partner organisations, and they represent one of the most concrete demonstrations to date of AI agents delivering measurable engineering value in production systems.
On infrastructure, AlphaEvolve reduced write amplification in Google Spanner by 20% and cut software storage footprints by 9%, through algorithm changes that would have taken human engineers significantly longer to discover. In genomics, a deployment targeting variant detection reduced errors by 30%. Perhaps most striking is the grid optimisation result, where success rates improved from 14% to 88% and AlphaEvolve found solutions that had eluded conventional optimisation approaches. In cloud partnerships, Klarna reported a doubling of transformer training speeds, attributed to AlphaEvolve-suggested changes to its training pipeline.
The underlying model is a specialised Gemini agent that generates candidate algorithms, evaluates them against automatically derived fitness functions, and iterates through an evolutionary loop — thousands of iterations per run. Unlike general-purpose coding agents that assist with individual files or functions, AlphaEvolve operates at the algorithmic level, proposing structural changes to core routines and evaluating them empirically rather than through code review. The practical implication for developers and platform engineers is that this class of agent is most valuable in performance-critical hot paths where conventional optimisation has plateaued.
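Stripped of the model, the loop AlphaEvolve runs is classic evolutionary search. A minimal sketch in Python (a toy stand-in, not AlphaEvolve's implementation: the "candidate" here is a single number and the fitness function a simple parabola, where AlphaEvolve would use generated code and automatically derived evaluators):

```python
import random

def evolve(seed, mutate, fitness, population_size=20, generations=200):
    """Generic evolutionary search: score candidates, keep the fittest,
    and mutate survivors to produce the next generation."""
    population = [seed]
    for _ in range(generations):
        # Propose variants of randomly chosen current candidates.
        offspring = [mutate(random.choice(population))
                     for _ in range(population_size)]
        # Score parents and offspring together; keep only the best performers.
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:population_size]
    return max(population, key=fitness)

# Toy problem: find x maximising -(x - 3)^2, starting from x = 0.
best = evolve(seed=0.0,
              mutate=lambda x: x + random.uniform(-0.5, 0.5),
              fitness=lambda x: -(x - 3.0) ** 2)
```

In AlphaEvolve the mutation step is a Gemini agent rewriting code and the fitness function is an empirical benchmark, but the selection loop has this same shape.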
Read more — Google DeepMind
Anthropic Launches Claude for Small Business with 15 Pre-Built Agentic Workflows
Anthropic has launched Claude for Small Business, a suite of 15 ready-to-run agentic workflows targeting the core operational needs of small and medium enterprises. The launch is aimed at organisations that want AI automation but lack the engineering resources to build custom agent integrations from scratch.
The 15 workflows cover financial forecasting, marketing campaign management, contract review, and customer communication — integrating directly with QuickBooks, HubSpot, and Google Workspace through pre-configured connectors. Each workflow runs within Claude Cowork, Anthropic's workspace environment for business users, which enforces a strict data isolation policy: enterprise data is not used for model training, and each organisation's data remains in a private silo. A human-in-the-loop approval mechanism is enabled by default for all workflows, requiring a human to confirm AI-generated actions before they execute.
For developers, the launch is notable for two reasons. First, the pre-built workflow library effectively documents Anthropic's recommended patterns for tool use, multi-step agent execution, and human approval gates — patterns that translate directly to custom Claude API development. Second, the workflows are built on Claude's standard tool-use API, meaning organisations with developer resources can extend or customise them without leaving the Claude ecosystem. The launch signals Anthropic's intent to compete directly with Microsoft 365 Copilot and Google Workspace AI in the SMB segment.
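The human-approval gate in particular is a pattern worth lifting for custom builds. A minimal sketch with plain Python standing in for the model client (the names and shapes below are illustrative, not Anthropic's SDK):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    """A tool invocation proposed by the model."""
    name: str
    arguments: dict

def run_with_approval(tool_call: ToolCall,
                      tools: dict[str, Callable[..., str]],
                      approve: Callable[[ToolCall], bool]) -> str:
    """Execute a model-proposed tool call only after a human
    (or policy function) explicitly approves it."""
    if not approve(tool_call):
        return f"Tool call '{tool_call.name}' rejected by reviewer."
    return tools[tool_call.name](**tool_call.arguments)

# Example: the model proposes sending an invoice; a reviewer must confirm.
tools = {"send_invoice": lambda customer, amount: f"Sent {amount} to {customer}"}
proposed = ToolCall("send_invoice", {"customer": "Acme", "amount": "$120"})

result = run_with_approval(proposed, tools, approve=lambda tc: True)    # approved
blocked = run_with_approval(proposed, tools, approve=lambda tc: False)  # rejected
```

In a real integration the approve callback would surface the pending call in a UI, which is effectively what the default-on approval mechanism in these workflows does.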
Read more — Anthropic
Microsoft Agent Framework for .NET 1.0: Building Stateful Agents in the .NET Ecosystem
Microsoft has released version 1.0 of the Agent Framework for .NET, a developer SDK that provides structured primitives for building autonomous agents that reason, call tools, and maintain long-term memory — all within the standard .NET dependency injection and hosting model.
The framework's core abstraction is the AsAIAgent() extension method on IChatClient, which wraps any compatible LLM client — Azure OpenAI, GitHub Models, or a local Ollama instance — into an agent-capable interface. This means developers can switch between model providers without changing agent logic. AgentSession provides serialisable conversation state, enabling agents to pause and resume across process restarts or hand off to a different agent instance — a critical capability for long-running background workflows. AIContextProvider handles pre- and post-interaction context injection, allowing teams to inject system-level information (user roles, permissions, live data) at the framework level rather than in each prompt.
The graph-based workflow support is the most powerful feature for multi-agent architectures: teams can compose agents into directed graphs with feedback loops, conditional branches, and human-in-the-loop approval nodes for sensitive operations. These graph nodes are ordinary .NET classes, making them testable with standard unit testing infrastructure. The v1.0 release achieves API stability after several preview cycles, giving .NET teams a supported foundation to build production agent services against.
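The graph pattern itself is framework-agnostic and easy to prototype. A condensed sketch, written in Python for brevity (the framework expresses the same shape with ordinary .NET classes; all names below are illustrative):

```python
class Node:
    def run(self, state: dict) -> str:
        """Process state and return the name of the next node ('' to stop)."""
        raise NotImplementedError

class Draft(Node):
    def run(self, state):
        state["draft"] = f"reply to: {state['request']}"
        return "review"

class Review(Node):
    """Human-in-the-loop approval node: route on the reviewer's decision.
    A rejection loops back to drafting (a feedback edge in the graph)."""
    def run(self, state):
        return "send" if state["approved"] else "draft"

class Send(Node):
    def run(self, state):
        state["sent"] = True
        return ""

def run_graph(nodes: dict, start: str, state: dict) -> dict:
    """Walk the directed graph, letting each node choose its successor."""
    current = start
    while current:
        current = nodes[current].run(state)
    return state

state = run_graph({"draft": Draft(), "review": Review(), "send": Send()},
                  "draft", {"request": "refund", "approved": True})
```

Because each node is an ordinary class with a single method, every branch and approval path can be unit-tested in isolation, which is the testability property the framework claims for its .NET graph nodes.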
Read more — Microsoft .NET Blog
IntelliJ IDEA 2026.1.2: Data-Loss Fix and MCP Server Path Compatibility
JetBrains has released IntelliJ IDEA 2026.1.2, a patch release targeting critical stability regressions introduced in the 2026.1 line, with particular urgency around a data-loss bug in the drag-and-drop code editor interaction.
The headline fix addresses a regression where dragging code blocks within the editor could cause the dropped content to disappear entirely — the source code would be cut but the paste would silently fail, leaving developers with lost changes that were difficult to recover without git. This fix is the primary motivation for the expedited patch release. A second significant correction restores the external diff viewer in the Commit tool window, which had broken in 2026.1.1 for projects using certain VCS configurations.
On the AI tooling side, the MCP Server now correctly handles project paths that contain spaces, a seemingly minor fix that had been blocking developers on macOS and Windows from using MCP-integrated features in projects whose directory names include whitespace, as is common in default macOS home-directory layouts. Additional fixes address IDE freezes during heavy indexing, Groovy-based live template failures, and telemetry synchronisation errors that were producing spurious error dialogs on startup.
Read more — JetBrains