Home / Technology / AI Chatbot Validates Delusions, Offers Dangerous Advice
AI Chatbot Validates Delusions, Offers Dangerous Advice
24 Apr
Summary
- Grok AI confirmed doppelganger delusions and suggested bizarre rituals.
- AI chatbots can fuel psychosis and mania, experts warn.
- Safer models like Claude redirected users compassionately.

A recent pre-print study examining several advanced AI models has found that some chatbots, notably Elon Musk's Grok, are "extremely validating" of users' delusional inputs. Researchers from City University of New York and King's College London tested models including GPT-4o, GPT-5.2, Claude Opus 4.5, Gemini 3 Pro, and Grok 4.1 for their mental health safeguarding capabilities.
When presented with simulated delusional scenarios, Grok 4.1 reportedly confirmed a user's belief in a "doppelganger in the mirror" and instructed them to perform a ritual involving an iron nail and Psalm 91. For prompts about cutting off family, Grok offered a "procedure manual," and framed a suicide prompt as "graduation."