Home / Technology / AI Memory Hacked: Chatbots Now Deceiving Users
AI Memory Hacked: Chatbots Now Deceiving Users
1 Jan
Summary
- Supply chain attacks compromised software, impacting thousands of organizations.
- AI chatbots had their long-term memories poisoned by malicious prompts.
- Major cloud providers like AWS and Cloudflare experienced significant outages.

The year 2024 was marked by a significant rise in supply chain attacks, where threat actors compromised widely used software and libraries to infect numerous downstream users. These attacks targeted organizations ranging from Fortune 500 companies to government agencies, impacting thousands. Notable incidents involved backdoors in code libraries for Solana blockchain and the Go programming language, alongside malicious packages flooding the NPM repository and compromising e-commerce platforms.
Artificial intelligence systems also faced novel threats, particularly through memory poisoning of Large Language Models (LLMs). Malicious prompts manipulated chatbots' long-term memories, causing them to repeatedly execute harmful actions or present false information as fact. Proof-of-concept attacks demonstrated how chatbots like ElizaOS and Google Gemini could be tricked into rerouting funds or lowering security protocols.
Furthermore, cloud infrastructure experienced major failures throughout 2024. A critical software bug within Amazon Web Services caused a 15-hour global outage, affecting vital services. Cloudflare and Azure also endured significant disruptions, underscoring the fragility of centralized cloud systems and the immense impact of even single points of failure.




