AI Agents Unleashed: Guardrails Go Up
18 Mar
Summary
- Major tech companies are implementing controls on AI agents.
- Meta now holds users responsible for their AI agents' actions.
- New tools verify real humans behind AI agents making purchases.

The increasing presence of AI agents on the open web is prompting major technology companies to put up guardrails. While the foundational OpenClaw project remains widely praised and is likely to persist, concern about controlling autonomous bots is growing. Meta, after acquiring a platform for AI agent communication, has updated its terms of service: users are now explicitly told that they are personally responsible for anything their AI agents do or fail to do, since AI agents themselves are not legally eligible to use the service.
Further measures include verification tools designed to confirm human oversight of AI agents that engage in transactions. One such tool, AgentKit, from Sam Altman's company, aims to verify that a real human stands behind an AI agent making purchases, addressing fears that rogue agents could compromise bank accounts or place fraudulent orders. Reports indicate that while many AI agent tasks are shopping-related, few directly involve checkout and payment, and most agents are not authorized to finalize purchases without human approval.
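The human-approval pattern described above, where agents may browse and assemble orders but a person must sign off before checkout, can be sketched roughly as follows. This is a minimal illustration only; the class, function names, and spending limit are assumptions for the example and are not part of AgentKit or any real API.

```python
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    agent_id: str
    item: str
    amount_usd: float

def finalize_purchase(request: PurchaseRequest,
                      human_approved: bool,
                      limit_usd: float = 50.0) -> str:
    """Gate checkout on human approval.

    Small purchases under a pre-authorized limit go through automatically;
    anything larger is blocked until a verified human signs off.
    The limit value is an illustrative assumption, not a real default.
    """
    if request.amount_usd <= limit_usd:
        return "approved: within pre-authorized limit"
    if human_approved:
        return "approved: human confirmed"
    return "blocked: human approval required"
```

In this sketch the agent can complete routine low-value tasks on its own, while the expensive or unusual ones surface to the account owner, which matches the reported behavior of most agents not being authorized to finalize purchases unilaterally.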
Globally, regulatory bodies are also scrutinizing AI agent activity. In China, concerns over the security risks posed by unfettered AI agents are leading regulators to explore protective measures. Concurrently, security firms are identifying numerous misconfigured OpenClaw instances that expose sensitive user data and financial information, underscoring the urgent need for better security practices before a widespread cybersecurity incident occurs.
