Home / Technology / Moltbot AI: Convenience vs. Critical Security Risks
Moltbot AI: Convenience vs. Critical Security Risks
29 Jan
Summary
- Moltbot grants extensive system and account access.
- Prompt injection attacks remain an unresolved threat.
- Malicious Moltbot skills and fake repositories are emerging.

Moltbot, formerly Clawdbot, is an open-source AI assistant designed to manage digital tasks, such as email and flight check-ins. It operates by communicating via messaging apps and can access over 50 integrations, features persistent memory, and possesses browser and system control capabilities. The tool leverages AI models from Anthropic and OpenAI. Its rapid growth on platforms like GitHub, with hundreds of contributors and around 100,000 stars, highlights its viral popularity.
However, Moltbot's autonomy comes with substantial security concerns, leading Cisco to label it an "absolute nightmare." The AI requires extensive permissions to perform actions, potentially exposing user data if misconfigured or if the system is infected with malware. Researchers have identified instances where Moltbot leaked plaintext API keys and credentials through prompt injection or unsecured endpoints. Its integration with messaging apps further expands the attack surface.




