Developer Tools Digest: Claude Code xhigh Effort, Anthropic Mythos, OpenAI Codex Computer Use, 2026-04-19

Claude Code 2.1.111: Opus 4.7 xhigh Effort and Auto Mode

Claude Code version 2.1.111 (April 17, 2026) ships one of the most consequential updates of the tool's rapid release cadence: a new xhigh effort level for Claude Opus 4.7, positioned between the existing high and max settings. The level is available via the /effort command, the --effort CLI flag, and the model picker, letting developers dial in performance for tasks that benefit from deeper reasoning without paying the full cost of max mode on every interaction.

Auto mode is now available for Max subscribers using Opus 4.7, allowing the model to dynamically select its reasoning depth based on the detected complexity of each task. Two new skills shipped with this version: /ultrareview triggers a comprehensive multi-agent cloud code review, and /less-permission-prompts automatically configures commonly-needed allowlists to reduce friction during agentic sessions. Theme options were expanded with an "Auto (match terminal)" option that synchronizes Claude Code's color scheme with the host terminal, and Windows PowerShell tool support received a round of stability improvements. Version 2.1.114 followed on April 19 with a crash fix in the permission dialog when an agent teammate requested a tool permission.

These effort controls matter because the cost and latency tradeoff of different reasoning depths is now explicit and session-configurable — developers running exploratory tasks can stay on high, then elevate to xhigh for final review passes without changing their model selection.
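The workflow described above can be sketched as a hypothetical session; the flag and command names come from the release notes, but the exact argument spellings and session flow are assumptions:

```shell
# Start a session at the new effort level via the CLI flag:
claude --effort xhigh

# Or switch mid-session with the slash command, e.g. stay on high
# during exploration, then elevate for a final review pass:
#   /effort high
#   ... exploratory work ...
#   /effort xhigh
#   ... final review ...
```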

Read more — Releasebot


Anthropic Mythos: Restricted Frontier Model and Project Glasswing

Anthropic announced the Mythos model on April 7, describing it as "strikingly capable" in cybersecurity, able to surpass human experts at identifying and exploiting computer vulnerabilities. The UK's AI Security Institute reviewed the model and characterized it as "a step up" over previous frontier versions, leading Anthropic to limit access to a small set of select customers rather than releasing it broadly, a significant departure from its standard API rollout approach.

The capability concerns prompted Anthropic to form Project Glasswing, an initiative bringing together Amazon, Apple, Google, Microsoft, and JPMorgan Chase to help critical infrastructure defend against threats that advanced models like Mythos could enable. Co-founder Jack Clark noted that comparable models from other frontier labs are expected to appear within months, with open-weight Chinese alternatives potentially following within 18 months — framing Glasswing as an urgent industry-wide response window rather than a long-term moat.

The White House became involved directly: Anthropic CEO Dario Amodei met with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent on April 17 to discuss controlled federal agency access, with the Office of Management and Budget working on protective controls that would allow agencies at Defense, Treasury, Commerce, Homeland Security, Justice, and State to begin using Mythos in coming weeks.

For developers, the Mythos announcement signals a meaningful inflection in what frontier AI can do in the security domain. The restricted-access model introduces a new category of enterprise AI procurement — governed deployments with contractual use constraints — that may become the norm for the most capable models as the industry grapples with dual-use capabilities at the frontier.

Read more — Fortune


OpenAI Codex Becomes a Computer-Use Agent

OpenAI shipped a major update to Codex on April 16, fundamentally repositioning the tool from a coding assistant into a full computer-use agent for developers. The centerpiece is background computer use: Codex can now launch and operate native desktop apps on your Mac — seeing, clicking, and typing with its own cursor — while multiple parallel agent instances work simultaneously without interfering with the developer's own active session.

The developer-specific additions are substantial. Codex now integrates with GitHub for PR review comment posting, supports multiple terminal tabs, and includes SSH access to remote devboxes (currently in alpha), enabling end-to-end workflows that span local development and cloud environments from a single conversational interface. An in-app browser with direct page annotation capability strengthens the frontend iteration loop by letting Codex see rendered web output and suggest precise code changes against visual targets.

Memory capabilities allow Codex to retain preferences and context from previous sessions, and more than 90 new plugins — packaged as reusable team workflows combining skills, app integrations, and MCP servers — extend Codex's reach into services like Atlassian Rovo, CircleCI, and GitLab. Future work scheduling and thread reuse allow developers to queue long-running tasks that Codex completes autonomously.

The update represents a meaningful shift in the AI coding tool space: where most tools assist within the editor, Codex is now claiming the full developer desktop as its operating surface. This positions it directly against Claude Code's Computer Use integration (in research preview) and Cursor's background automation features, with each tool racing toward the "agentic developer environment" paradigm.

Read more — SmartScope


GitHub Copilot CLI: MCP Server Registry Install

GitHub Copilot CLI versions 1.0.25 (April 13) and 1.0.28 (April 16) deliver two useful updates for developers using AI-assisted workflows from the terminal. Version 1.0.25 introduces guided MCP server installation from the GitHub MCP Registry — developers can now add MCP servers to their Copilot CLI setup using the interactive /mcp add command with guided configuration, replacing manual JSON editing of the CLI config file. Remote session control via --remote or /remote is also new, allowing developers to drive their CLI sessions from other devices or automation contexts.
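A hypothetical sketch of the two 1.0.25 features; the /mcp add and --remote names are from the release notes, while the binary name and prompt flow are assumptions:

```shell
# Launch the interactive CLI, then install an MCP server with the
# guided flow (replaces hand-editing the CLI config JSON):
copilot
#   /mcp add    <- browse the GitHub MCP Registry and configure a
#                  server through guided prompts

# Drive a session from another device or automation context:
copilot --remote
```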

Version 1.0.28 fixes a permissions issue: prompts now correctly display repository paths in git submodule setups, which had been a notable pain point for monorepo and embedded-submodule workflows. Rewind picker navigation was simplified to arrow keys and Enter, and a COPILOT_DISABLE_TERMINAL_TITLE environment variable was added for developers who need precise control over the terminal title in multiplexed terminal sessions.
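For the terminal-title variable, a minimal sketch; the release notes name only the variable, so the enabling value "1" is an assumption:

```shell
# Opt out of terminal title updates, e.g. inside tmux or screen,
# so the CLI does not overwrite a manually set pane title:
export COPILOT_DISABLE_TERMINAL_TITLE=1
copilot
```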

The MCP registry installation feature aligns Copilot CLI with the broader push toward standardized tool discovery — rather than finding and configuring MCP servers from documentation or GitHub repositories manually, developers can now browse a curated registry and install servers with a guided flow. This mirrors similar MCP management UX improvements seen in Claude Code and other agentic tools in recent months.

Read more — Releasebot


Gemini Code Assist: Finish Changes and Outlines

Google introduced two new editing features to Gemini Code Assist for IntelliJ and VS Code on March 10, 2026: Finish Changes and Outlines. Finish Changes acts as a context-aware completion engine that observes in-progress code modifications and completes the task — whether the starting point is pseudocode, inline #TODO comments, or a half-written function — without requiring the developer to explicitly prompt the AI. The feature analyzes surrounding code patterns and applies them consistently across files, making it particularly effective for repetitive structural changes like adding error handling or migrating API call patterns.
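To make the interaction concrete, here is a hypothetical before/after of the kind of half-written input Finish Changes targets; the function, its name, and the completion are illustrative, not taken from Google's announcement:

```python
# Before: the developer sketches intent with a TODO and stops typing.
#
#   def parse_port(value, default=8080):
#       # TODO: return int port, default if empty, ValueError if out of range
#
# After: a completion consistent with the stated intent and the
# surrounding code's conventions.

def parse_port(value, default=8080):
    """Parse a TCP port from a string, falling back to a default."""
    if value is None or value == "":
        return default
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```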

Outlines generates concise, high-level English summaries interleaved directly with source code in the editor, creating an always-visible navigational layer for unfamiliar codebases. Rather than switching to documentation or relying on a chat window, developers can skim the interleaved outlines to understand the intent of complex blocks before diving in. The feature is powered by Gemini 3.0 and runs locally in the IDE extension without requiring an explicit prompt.

Both features are available in the Gemini Code Assist extensions for IntelliJ and VS Code with a Gemini Code Assist subscription. Together they shift the tool's interaction model from prompt-driven to observation-driven, reducing the overhead of describing context to the AI and letting developers stay focused on the code rather than the conversation.

Read more — Google Developers Blog


Written by

Stanislav Lentsov

Software Architect