Untrusted Projects, Not Just Code: The New AI Supply Chain Threat

March 1, 2026 · ai security supply chain code injection
Cover

The recent disclosure of critical vulnerabilities in Anthropic's Claude Code isn't just another bug report—it’s a paradigm shift in how we think about security in AI-assisted development. Researchers at Check Point revealed that simply cloning and opening an untrusted Git repository can lead to remote code execution (RCE) and API key exfiltration, with no user interaction beyond launching the project.

The core issue lies not in the model’s reasoning, but in its automation layer: configuration files like .claude/settings.json and .mcp.json are now part of the execution chain. When Claude Code initializes in a new directory, it reads these files before prompting for user consent—allowing attackers to define hooks that bypass trust checks entirely. CVE-2025-59536, for instance, lets malicious repositories enable enableAllProjectMcpServers, granting unchecked access to external tools and network services.

Even more alarming is CVE-2026-21852: a 5.3-rated information disclosure flaw that allows an attacker-controlled repository to redirect API traffic by setting ANTHROPIC_BASE_URL to a malicious endpoint. This means your active Anthropic API key—your gateway to cloud-based AI resources—is exfiltrated before you even see the trust prompt.

For practitioners at OtherU, this isn’t theoretical. If our AI agents or dev tools auto-load project configurations without strict isolation and consent verification, we’re exposed to the same attack surface. The threat model has evolved: it’s no longer enough to audit source code. You must now vet project metadata, environment variables, and automated tooling configs as if they were executable binaries.

Anthropic patched these flaws in versions 1.0.87, 1.0.111, and 2.0.65—but the lesson is universal. Any AI-powered dev tool that integrates with external APIs or executes context-driven commands must enforce zero-trust initialization: validate all configs after explicit user consent, sandbox external calls, and never auto-load secrets from untrusted directories.

This isn’t about fixing one product—it’s about rethinking the entire AI development supply chain. The next attack won’t be a compromised npm package. It’ll be a clean GitHub repo with a single .claude/settings.json file.