Claude Code 2.1.128: Non-Interactive claude ultrareview for CI Pipelines
Claude Code version 2.1.128 (released May 5, 2026) introduces the claude ultrareview [target] subcommand, which makes the multi-agent code review feature fully usable from CI pipelines and scripts without an interactive terminal session. The subcommand launches the same parallel cloud-hosted review as /ultrareview, blocks until the remote session completes, prints findings to stdout (with --json for machine-parseable output), and exits with code 0 on success or 1 on failure. That exit-code contract makes it straightforward to integrate into GitHub Actions, GitLab CI, or shell scripts as a quality gate before merge.
Without arguments, claude ultrareview reviews the diff between the current branch and the default branch. Developers can also pass a PR number to review a specific pull request, or a base branch to review against a custom ref. Progress messages and the live session URL go to stderr to keep stdout parseable. Pro and Max subscribers received three free trial runs expiring May 5; subsequent runs are billed per use, typically $5–$20 depending on change size.
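As a sketch of the quality-gate pattern described above, a GitHub Actions job might run the subcommand and let its exit code pass or fail the build. Only the subcommand, the --json flag, the stdout/stderr split, and the 0/1 exit codes come from the release notes; the job layout, secret name, and file names below are illustrative assumptions.

```yaml
# Hypothetical quality-gate job (assumed names throughout).
review:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0   # full history, so the diff against the default branch resolves
    - name: Run non-interactive review
      env:
        ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}  # assumed auth mechanism
      # Findings go to stdout; progress and the live session URL go to stderr,
      # so redirecting stdout alone keeps the JSON clean. A non-zero exit fails the job.
      run: claude ultrareview --json > ultrareview-findings.json
    - name: Upload findings
      if: always()
      uses: actions/upload-artifact@v4
      with:
        name: ultrareview-findings
        path: ultrareview-findings.json
```

Because the command exits 1 on failure, the job fails and blocks the merge without any extra parsing, while the JSON artifact is preserved for human review either way.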
Beyond ultrareview, 2.1.128 ships notable stability improvements: drag-and-drop image uploads no longer hang on "Pasting text…" when images fail to load, parallel read-only shell commands no longer cancel sibling operations when one fails, and MCP server reconnections now summarize re-announced tools instead of flooding the conversation context. Plugin loading gained support for .zip archive installations via --plugin-dir, expanding how teams package and distribute shared skill sets.
Version 2.1.126 (May 1) also added claude project purge for deleting local Claude Code project state and improved OAuth login handling for WSL and SSH environments — useful for teams running Claude Code on remote development machines.
Read more — Releasebot / Claude Code Changelog
IntelliJ IDEA 2026.1: ACP Registry, Java 26 Day-One Support, and Next-Edit Predictions
IntelliJ IDEA 2026.1, released in March 2026, is a major feature release that centers on deep AI agent integration. The new ACP Registry allows developers to browse and install AI agents — including Codex, Cursor, and any ACP-compatible agent — directly inside the IDE with a single click, without manually configuring endpoints or credentials. Installed agents can query data sources, commit to Git worktrees, and operate in parallel on background tasks, with the IDE serving as the orchestration surface.
Java 26 day-one support ships in 2026.1, providing full syntax highlighting, code inspections, and refactoring support for all ten JEPs in the Java 26 feature set from the day of its GA release. Spring developers benefit from new runtime bean inspection: the IDE can now display the actual beans injected into a running application and expose active security configurations without requiring a debugger breakpoint, making it easier to verify that Spring context wiring matches expectations.
The quota-free next-edit suggestions are the other headline feature: as you edit one file, IDEA proactively suggests corresponding changes across other files in the codebase based on the edit's logical impact. This is different from autocomplete — it's cross-file propagation — and is available without burning AI usage credits on each suggestion. Kotlin 2.3.20 support and first-class C/C++ assistance round out the language improvements.
The patch release 2026.1.1 followed in April 2026, resolving over 1,000 bugs including a WSL Python SDK regression, Gradle class-casting sync failures in large Spring projects, and WildFly server connectivity issues.
Read more — The IntelliJ IDEA Blog
Karpathy's Slopacolypse Warning: AI-Generated Content Flooding Digital Media
Andrej Karpathy, former Tesla AI director and OpenAI co-founder, issued a pointed warning in early 2026 that he labeled the "slopacolypse": a prediction that 2026 would see AI-generated slop (content that is almost right, but not quite) flooding GitHub, Substack, arXiv, X/Instagram, and digital media broadly. The prediction came alongside his own admission that his coding workflow had inverted: in November 2025 he was writing 80% of his code manually; by December, AI agents were producing 80% of it. He identified December 2025 as the moment LLM agents "crossed some kind of threshold of coherence."
The concern is not that AI code is wrong, but that it is wrong in subtle ways that pass surface inspection. Karpathy described current AI models as behaving like "sloppy junior developers" — they operate on incorrect assumptions, fail to ask clarifying questions, and tend to overcomplicate solutions. The consequence for shared codebases and public repositories is an increasing volume of technically plausible but subtly defective contributions that require expert review to catch.
For developer teams, Karpathy's frame raises a practical question: as AI-generated PRs, documentation, and research multiply, what review practices actually catch the subtle errors that automated checks miss? His prescription is not to slow AI adoption, but to invest in verification infrastructure — tests, type systems, and structured review — as the indispensable counterweight to AI throughput. He also flagged "AI hype productivity theater" as a parallel risk: teams reporting AI-driven productivity gains without the engineering rigor to back them up.
Read more — Cybernews