OpenClaw: The Risky AI Tool Sparking Company Bans
18 Feb
Summary
- Tech executives warn staff about experimental AI tool OpenClaw.
- Companies ban OpenClaw due to security and privacy breach risks.
- OpenAI has acquired OpenClaw's developer; the tool remains open source.

Tech executives are increasingly cautioning employees about the risks of the agentic AI tool OpenClaw. Jason Grad of Massive issued a late-night warning to his staff about the tool's unvetted nature, calling it potentially high risk. Similarly, a Meta executive has instructed his team to keep OpenClaw off work laptops, citing its unpredictability and the potential for privacy breaches.
OpenClaw, launched as a free, open-source tool by Peter Steinberger last November, gained significant traction as more developers contributed. Its popularity surged recently, prompting swift action from companies concerned about security. The tool requires basic engineering knowledge to set up, after which it can control a user's computer to perform a wide range of tasks.
Valere CEO Guy Pistone expressed serious concerns, noting that OpenClaw could gain access to cloud services and sensitive client information. Despite an initial ban, Valere allowed its research team to test the software on an old computer to identify vulnerabilities. The researchers advised limiting the tool's control and securing its access panel with a password.
Even with these safeguards, the Valere research team warned that OpenClaw can be tricked into exposing data. Pistone remains confident that security enhancements are possible and has given a team 60 days to investigate making the tool secure enough for business use.