Home / Technology / Copilot Hack: Sensitive Data Leaked Via Single Click
Copilot Hack: Sensitive Data Leaked Via Single Click
15 Jan
Summary
- Hackers exploited Copilot's URL prompt feature to steal user data.
- The vulnerability bypassed enterprise security controls and detection.
- Microsoft has since fixed the exploit affecting Copilot Personal.

Security researchers have uncovered a significant vulnerability within Microsoft's Copilot Personal AI assistant, enabling attackers to access sensitive user information. The exploit, dubbed Reprompt by its discoverers Varonis, leveraged a flaw in how Copilot processed URLs embedded within prompts. By clicking a single malicious link, users could inadvertently trigger the exfiltration of personal data, such as their name and location.
The attack proved effective against enterprise security systems, operating even after the user terminated the Copilot session. Researchers found that guardrails implemented by Microsoft were not robust enough to prevent repeated prompt injections, which allowed the malicious data extraction to continue in stages. This bypass allowed hackers to glean details directly from the user's chat history.
Varonis privately reported their findings to Microsoft, which has since deployed changes to close the vulnerability. The exploit specifically targeted Copilot Personal, with Microsoft 365 Copilot remaining unaffected. This incident highlights the ongoing challenges in securing AI assistants against sophisticated prompt injection attacks.




