AI Exposed: Hackers Exploit Proxy Flaws
12 Jan
Summary
- Hackers targeted misconfigured proxies to probe AI APIs.
- Two campaigns observed: one seeking callbacks to attacker servers, the other mapping exposed endpoints.
- Attacks occurred during the Christmas break in late 2025.

Security experts have warned of a growing threat where hackers exploit misconfigured proxies to gain access to Large Language Model (LLM) APIs. Researchers set up a decoy AI system and observed over 91,000 attack sessions in a three-month period ending January 2026. These sessions revealed two primary attack strategies.
The first campaign tried to trick AI servers into opening connections to an attacker-controlled server, abusing features such as remote model downloads or webhooks to trigger "phone home" callbacks that confirm a system is exploitable. The second campaign intensively probed exposed AI endpoints to map which models and configurations were available, relying on cheap, simple queries to stay under the radar.
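To illustrate the mapping campaign, the sketch below shows the kind of lightweight, read-only requests such probing could use against an OpenAI-compatible LLM server. This is a minimal illustration, not the attackers' actual tooling: the endpoint paths assume an OpenAI-style API, and the names `PROBE_PATHS`, `probe_urls`, and `list_models` are hypothetical.

```python
import json
import urllib.request

# Paths commonly exposed by OpenAI-compatible LLM servers (assumption:
# the target speaks this API). Mapping an endpoint needs only cheap,
# read-only requests like these, which blend in with normal traffic.
PROBE_PATHS = ["/v1/models", "/health"]

def probe_urls(base_url: str) -> list[str]:
    """Build the lightweight probe URLs for one candidate endpoint."""
    return [base_url.rstrip("/") + path for path in PROBE_PATHS]

def list_models(base_url: str, timeout: float = 3.0) -> list[str]:
    """Query /v1/models and return model IDs; empty list on any failure."""
    try:
        with urllib.request.urlopen(probe_urls(base_url)[0],
                                    timeout=timeout) as resp:
            data = json.load(resp)
        # OpenAI-style responses nest the model list under "data".
        return [m.get("id", "") for m in data.get("data", [])]
    except OSError:
        return []
```

A single successful `/v1/models` response is enough to tell an attacker which models a misconfigured proxy is exposing, without ever issuing an expensive completion request.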
Analysis confirmed these were not amateur or research-driven activities: the infrastructure involved had a prior history of exploitation, and the timing during the 2025 Christmas break pointed strongly to malicious intent. The campaigns underline the significant risk to AI systems that sit behind misconfigured proxy servers.
