LocalStack 2026.03.0: Unified Docker Image and New lstk CLI
LocalStack for AWS 2026.03.0 is a significant release that simplifies how developers run and authenticate with the local AWS development environment. The most impactful change is the consolidation of localstack/localstack (Community) and localstack/localstack-pro into a single Docker image. Previously, teams had to maintain separate Docker Compose configurations depending on whether they were running the free or Pro tier. Now a single image handles both, with feature access determined by the auth token provided at startup — a much cleaner model for teams where some developers have Pro access and others do not.
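In practice, the token-at-startup model looks something like the following. This is a sketch, not an official quickstart: `LOCALSTACK_AUTH_TOKEN` is the environment variable LocalStack already uses for auth tokens, while the image tag and the Docker socket mount (commonly needed for features that spawn containers) are illustrative assumptions.

```shell
# Start the unified image; Pro features unlock only if a token is present.
# Leave LOCALSTACK_AUTH_TOKEN unset to run with Community features only.
docker run -d --name localstack \
  -p 4566:4566 \
  -e LOCALSTACK_AUTH_TOKEN="${LOCALSTACK_AUTH_TOKEN:-}" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  localstack/localstack:latest
```

The same compose file can now be shared across a team: developers with a Pro token export it in their shell, everyone else runs the identical configuration against the Community feature set.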
The release also ships a completely rebuilt CLI called lstk, replacing the Python-based localstack CLI. The new CLI is distributed as a single binary installable via npm or Homebrew — no Python or pip required — which eliminates a common friction point in CI environments and on developer machines that do not have Python on the PATH. The lstk CLI adds an interactive Terminal UI for setup workflows, browser-based authentication with automatic token storage, and enhanced log viewing. The existing localstack CLI remains functional, but lstk is the recommended path going forward.
On the infrastructure simulation side, 2026.03.0 delivers real EC2 Fleet provisioning: CreateFleet and DeleteFleets now spawn actual Docker containers or Kubernetes pods rather than returning mock responses. Both On-Demand and Spot purchase options are supported, and fleet teardown properly cleans up the underlying containers. For teams developing applications that use EC2 Fleet for batch workloads or spot-tolerant architectures, this brings local testing significantly closer to production behaviour. The release also adds EKS and CloudWatch resources to the Resource Groups Tagging API, enabling unified tag-based queries, and supports Kafka versions 4.0.x and 4.1.x.
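A minimal way to exercise the new fleet path is an ordinary boto3 CreateFleet call pointed at the LocalStack endpoint. The request shape below follows the real EC2 CreateFleet API; the launch template name, capacities, and endpoint URL are assumptions for illustration.

```python
# Sketch: a mixed On-Demand/Spot CreateFleet request for LocalStack.
# Field names follow the EC2 CreateFleet API; the launch template
# "batch-workers" is a hypothetical, pre-created template.

def build_fleet_request(template_name: str, target: int) -> dict:
    """Build CreateFleet parameters for a mostly-Spot fleet."""
    return {
        "LaunchTemplateConfigs": [{
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": template_name,
                "Version": "$Latest",
            },
        }],
        "TargetCapacitySpecification": {
            "TotalTargetCapacity": target,
            "OnDemandTargetCapacity": 1,
            "SpotTargetCapacity": target - 1,
            "DefaultTargetCapacityType": "spot",
        },
        "Type": "maintain",
    }

params = build_fleet_request("batch-workers", 3)

# Against a running LocalStack container (requires boto3):
#   import boto3
#   ec2 = boto3.client("ec2", endpoint_url="http://localhost:4566",
#                      region_name="us-east-1",
#                      aws_access_key_id="test", aws_secret_access_key="test")
#   fleet = ec2.create_fleet(**params)   # spawns real containers/pods
#   ec2.delete_fleets(FleetIds=[fleet["FleetId"]], TerminateInstances=True)
```

Since teardown now removes the backing containers, pairing every create_fleet with a delete_fleets in test fixtures keeps the local Docker daemon clean between runs.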
Read more — LocalStack Blog
Google Cloud Error Reporting API MCP Server (Preview)
Google Cloud has released an Error Reporting API MCP server, now available in preview. The server exposes error reporting data — error groups, individual error events, and service-level statistics — as MCP resources and tools, allowing AI agents and assistants to query production error data as part of their context. A developer can ask an AI coding assistant to explain an error that has been appearing in production, and the assistant can retrieve the actual stack trace, affected versions, and frequency data directly from Error Reporting rather than requiring the developer to navigate the Cloud Console.
The practical application is in debugging workflows: an agent performing a code change can use the Error Reporting MCP server to verify that the errors it is trying to fix actually occur in production and, after deployment, to confirm that the error rate dropped. This complements other Google Cloud MCP servers (such as the MCP Toolbox for Databases, released earlier this month) and reflects Google Cloud's strategy of exposing its managed services as MCP-accessible tools for AI-assisted development.
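At the protocol level, such a query is an ordinary MCP tools/call request. The sketch below shows only the standard JSON-RPC 2.0 envelope MCP uses; the tool name and argument names are hypothetical (loosely modelled on the Error Reporting REST API's groupStats.list method), not confirmed names from the preview server.

```python
import json

# Sketch of the JSON-RPC 2.0 envelope for an MCP "tools/call" request.
# Tool and argument names below are hypothetical placeholders.

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

request = mcp_tool_call(1, "list_group_stats", {
    "project_id": "my-project",          # hypothetical argument names
    "time_range": "PERIOD_1_DAY",
    "service_filter": {"service": "checkout-api"},
})
```

The point is that the agent never touches the Cloud Console: the server translates a call like this into the underlying Error Reporting API request and returns stack traces, affected versions, and frequency data as structured tool output.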
Read more — Google Cloud Documentation
Google Cloud BigQuery Reverse ETL to Spanner Reaches GA
BigQuery's EXPORT DATA statement now supports Spanner as a destination through Cloud resource connections, and the capability has reached general availability, enabling reverse ETL directly from BigQuery to Spanner without custom data pipelines. Developers and data engineers can write a BigQuery SQL query and export its results to a Spanner table using EXPORT DATA with a Cloud resource connection, treating Spanner as an output destination the same way BigQuery already treats Cloud Storage.
The feature is useful for architectures where BigQuery serves as the analytical layer and Spanner as the transactional layer. Common patterns include materialising aggregated reporting tables back into Spanner for low-latency API access, synchronising dimension tables from a data warehouse into a serving database, and refreshing Spanner tables with pre-computed results from BigQuery ML models. Previously these workflows required Cloud Dataflow, custom Cloud Run jobs, or third-party ETL tools; the native EXPORT DATA path eliminates that infrastructure overhead.
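The materialised-reporting-table pattern can be sketched as a single statement. The helper below composes the SQL; the overall shape follows BigQuery's EXPORT DATA syntax for Spanner destinations, but every project, connection, instance, and table name is a placeholder, and the exact OPTIONS accepted should be checked against the documentation.

```python
# Sketch: composing an EXPORT DATA statement that materialises a
# BigQuery aggregate into a Spanner table. All identifiers are
# illustrative placeholders.

def spanner_export_sql(connection: str, spanner_db_uri: str,
                       table: str, query: str) -> str:
    """Build an EXPORT DATA statement targeting a Spanner table."""
    return f"""EXPORT DATA WITH CONNECTION `{connection}`
OPTIONS (
  uri = '{spanner_db_uri}',
  format = 'CLOUD_SPANNER',
  spanner_options = '{{"table": "{table}"}}'
)
AS {query}"""

sql = spanner_export_sql(
    connection="my-project.us.spanner-conn",
    spanner_db_uri=("https://spanner.googleapis.com/projects/my-project/"
                    "instances/serving/databases/app"),
    table="daily_revenue",
    query=("SELECT region, SUM(amount) AS revenue "
           "FROM `my-project.analytics.orders` GROUP BY region"),
)
# Submit via the BigQuery client, e.g. bigquery.Client().query(sql)
```

Scheduling a statement like this (for example as a BigQuery scheduled query) replaces the Dataflow or Cloud Run glue code these refresh jobs previously required.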
This is part of a broader Google Cloud effort to tighten integration between its managed databases and its analytics platform, reducing the operational burden of keeping analytical and transactional stores in sync.
Read more — Google Cloud Documentation