
Anthropic Blocks Hackers from Misusing AI System for Cybercrime

Summary

  • Anthropic detected and stopped hackers from using its Claude AI to craft phishing emails, write malware, and bypass safety filters
  • Criminals are increasingly turning to AI to make scams more convincing and speed up hacking attempts
  • Anthropic is sharing case studies to help others understand the risks of AI misuse

On August 27, 2025, Anthropic reported that it had detected and blocked attempts by hackers to misuse its Claude AI system for malicious purposes. According to the company's findings, the attackers had tried to leverage the AI tool to draft tailored phishing emails, write or fix snippets of malicious code, and circumvent safety filters through repeated prompting.

Anthropic's report underscores the growing concerns around the exploitation of AI in cybercrime. Experts warn that as AI models become more powerful, the risk of misuse will only increase unless companies and governments act quickly to strengthen safeguards. Criminals are already turning to AI to make scams more convincing and to automate parts of hacking attempts.

In response to the incidents, Anthropic has banned the accounts involved and tightened its filters. The company, which is backed by tech giants like Amazon and Alphabet, plans to continue publishing reports on major threats it uncovers to help others understand the risks and take appropriate measures.


FAQ

What did the hackers try to do with Claude?
Anthropic's report showed that hackers had tried to use the company's Claude AI system to draft phishing emails, write or fix malicious code, and bypass safety filters.

Why are criminals turning to AI?
Experts say that as AI models become more powerful, criminals are increasingly using them to make scams more convincing and to automate parts of hacking attempts.

How did Anthropic respond?
Anthropic has banned the accounts involved in the attacks and tightened its filters. The company also plans to continue publishing reports on major threats it uncovers to help others understand the risks and take appropriate measures.
