Home / Technology / Moltbot AI: Convenience vs. Critical Security Risks
Moltbot AI: Convenience vs. Critical Security Risks
29 Jan
Summary
- Moltbot grants extensive system and account access.
- Prompt injection attacks remain an unresolved threat.
- Malicious Moltbot skills and fake repositories are emerging.

Moltbot, formerly Clawdbot, is an open-source AI assistant designed to manage digital tasks, such as email and flight check-ins. It operates by communicating via messaging apps and can access over 50 integrations, features persistent memory, and possesses browser and system control capabilities. The tool leverages AI models from Anthropic and OpenAI. Its rapid growth on platforms like GitHub, with hundreds of contributors and around 100,000 stars, highlights its viral popularity.
However, Moltbot's autonomy comes with substantial security concerns, leading Cisco to label it an "absolute nightmare." The AI requires extensive permissions to perform actions, potentially exposing user data if misconfigured or if the system is infected with malware. Researchers have identified instances where Moltbot leaked plaintext API keys and credentials through prompt injection or unsecured endpoints. Its integration with messaging apps further expands the attack surface.
Security experts warn that prompt injection attacks, where malicious instructions are hidden in content the AI processes, are a critical vulnerability. These attacks could lead to sensitive data leaks or unauthorized task execution on user machines. The rapid development has also seen the emergence of malicious Moltbot skills and fake repositories, including a Trojanized VS Code extension, demonstrating the potential for significant data theft and surveillance.
While developers are implementing new security measures, users must be confident in their Moltbot configuration. The core issue of prompt injection remains unresolved, and combining broad system access with potential malicious prompts creates a significant risk. Although Moltbot represents innovative AI advancements, users are urged to prioritize personal security over convenience, as the risk of exploitation is substantial.




