Home / Technology / AI Tool Flaws Expose Devs to Code Execution, Key Theft
AI Tool Flaws Expose Devs to Code Execution, Key Theft
26 Feb
Summary
- Three vulnerabilities found in Anthropic's Claude Code tool.
- Attackers could execute remote code or steal sensitive API keys.
- Flaws highlight supply chain risks as AI tools integrate into development.

Security researchers recently identified three significant vulnerabilities within Anthropic's Claude Code, an AI-powered command-line tool for developers. These flaws posed risks of remote code execution and the theft of sensitive API keys. The vulnerabilities were reported to Anthropic, which has since issued fixes for all issues and CVEs for two of them.
The identified flaws underscore a growing supply chain threat as businesses adopt AI coding tools. Attackers could potentially inject malicious configurations into public repositories. Developers cloning these compromised projects would then inadvertently expose their systems.
One vulnerability involved abusing Claude's 'Hooks' feature, allowing malicious shell commands to run automatically upon project opening. Another bypassed security prompts designed to prevent the execution of untrusted Model Context Protocol (MCP) servers, enabling immediate code execution.
The third flaw exploited the ANTHROPIC_BASE_URL variable, redirecting API traffic to an attacker-controlled server. This allowed for the theft of plaintext API keys, a critical risk given the tool's 'Workspaces' feature which manages multiple keys for shared projects.




