AI Agents Lack Transparency, Pose Major Risks
20 Feb
Summary
- Agentic AI systems exhibit significant security flaws and lack safety disclosures.
- Most agents fail to disclose their AI nature to users or third parties.
- Many agentic systems lack documented stop options, posing control issues.

Agentic AI technology is rapidly entering the mainstream, a shift marked by OpenAI's hiring of Peter Steinberger, creator of the open-source framework OpenClaw. That advance, however, is overshadowed by significant security concerns.
A comprehensive survey of 30 agentic AI systems revealed a critical lack of disclosure about safety features and risks. Researchers noted persistent reporting gaps, with most systems offering no information on crucial aspects such as third-party testing or the dangers the agents might pose.
Furthermore, many of these AI agents do not clearly identify themselves as artificial intelligence to end-users or other systems. This ambiguity, coupled with a lack of usage monitoring and documented stop options for several enterprise platforms, contributes to a "security nightmare" scenario.
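To make that gap concrete, here is a minimal sketch of what disclosure and a documented stop option could look like in practice: the agent identifies itself as automated in every outbound request and checks an operator-controlled halt flag before each action. The header values and the StopFlag class are illustrative assumptions, not any vendor's documented API.

```python
# Illustrative sketch only: how an agent could disclose its AI nature to
# third-party services and honor a documented stop option. The header
# values and StopFlag class are hypothetical, not any vendor's API.
import threading

import requests

DISCLOSURE_HEADERS = {
    # A self-identifying User-Agent lets site operators detect automated traffic.
    "User-Agent": "ExampleAgent/1.0 (autonomous AI agent; +https://example.com/agent-policy)",
}


class StopFlag:
    """A documented 'stop option': an operator can halt the agent at any time."""

    def __init__(self):
        self._event = threading.Event()

    def stop(self):
        self._event.set()

    def stopped(self) -> bool:
        return self._event.is_set()


def run_agent(urls, stop: StopFlag):
    for url in urls:
        if stop.stopped():  # check before every external action
            print("Agent halted by operator.")
            return
        resp = requests.get(url, headers=DISCLOSURE_HEADERS, timeout=10)
        print(url, resp.status_code)


if __name__ == "__main__":
    flag = StopFlag()
    run_agent(["https://example.com"], flag)
```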
The study, "The 2025 AI Index: Documenting Sociotechnical Features of Deployed Agentic AI Systems," found that identifying all potential issues with agentic AI is challenging due to developer reticence. This lack of transparency and control is expected to grow as agentic capabilities increase.
Examples like Perplexity's Comet browser, which allegedly misrepresents its actions as human, and HubSpot's Breeze agents, which lack documented security evaluations despite compliance certifications, illustrate these pervasive issues. OpenAI's ChatGPT Agent is noted as a positive exception for its cryptographically signed browser requests.
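Signed requests of that kind generally work along these lines: the agent signs the parts of each request it vouches for, and the receiving site verifies the signature against a published public key. The sketch below uses Ed25519 with a deliberately simplified message format; the function names and canonicalization are assumptions for illustration, not OpenAI's actual scheme.

```python
# Simplified sketch of signing and verifying an agent's browser request.
# The message format here is an assumption for illustration; real schemes
# (e.g. HTTP Message Signatures) define canonicalization far more carefully.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_request(key: Ed25519PrivateKey, method: str, path: str, host: str) -> bytes:
    # Canonicalize the request parts the signature covers, then sign them.
    message = f"{method} {path}\nhost: {host}".encode()
    return key.sign(message)


def verify_request(
    pub: Ed25519PublicKey, sig: bytes, method: str, path: str, host: str
) -> bool:
    # Rebuild the same canonical message and check the signature against it.
    message = f"{method} {path}\nhost: {host}".encode()
    try:
        pub.verify(sig, message)
        return True
    except InvalidSignature:
        return False


# Usage: the agent signs; the visited site verifies against a published key.
agent_key = Ed25519PrivateKey.generate()
sig = sign_request(agent_key, "GET", "/pricing", "example.com")
print(verify_request(agent_key.public_key(), sig, "GET", "/pricing", "example.com"))  # True
print(verify_request(agent_key.public_key(), sig, "GET", "/admin", "example.com"))    # False
```

Because the signature covers the request itself, a tampered or replayed-to-a-different-path request fails verification, which is what lets sites distinguish the agent's genuine traffic from impersonation.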
Developers of these powerful tools, including OpenAI, Anthropic, and Google, are urged to take responsibility for documentation, safety auditing, and control measures to close these serious gaps. Failure to address the shortcomings could invite increased regulation.