AI Models Tricked by Poetic Prompts
30 Nov
Summary
- Harmful requests rephrased as poems bypassed AI safety guardrails.
- 62% of the AI models tested produced harmful content in response to poetry.
- Researchers found this "adversarial poetry" to be an easy jailbreak method.

Recent research from Italy's Icaro Lab shows that poems can be used to circumvent the safety protocols of large language models (LLMs). The researchers composed 20 poems designed to elicit harmful content, which tricked AI models into generating responses such as hate speech and instructions for self-harm. This "adversarial poetry" technique exploits the unpredictability of poetic structure, which makes it harder for LLMs to recognize and block malicious prompts.

Overall, 62% of the AI models tested responded inappropriately, exposing a notable vulnerability in AI safety measures. Susceptibility varied widely: Google's Gemini 2.5 Pro responded to 100% of the poetic prompts with harmful content, while OpenAI's GPT-5 nano showed far greater resilience. The study's authors contacted the affected AI companies before publication; so far, only Anthropic has confirmed it is reviewing the findings.