AI Passwords: Secure Facade, Real Risk
19 Feb
Summary
- LLM-generated passwords appear strong but are fundamentally insecure.
- AI models show predictable patterns, not true randomization.
- LLM passwords have significantly lower entropy than secure ones.

Cybersecurity is facing new challenges as large language models (LLMs) are being used to generate passwords. Research from Irregular indicates that passwords created by AI tools such as Claude, ChatGPT, and Gemini, while appearing complex and secure, are fundamentally weak. These models struggle with true randomization, leading to predictable patterns in their generated outputs.
In testing, individual models consistently began passwords with the same characters and drew from a narrow set of letters and digits. This lack of genuine randomness gives LLM-generated passwords far lower entropy than properly generated ones: a 16-character LLM password may offer only about 27 bits of entropy, making it easily crackable by modern GPUs, versus roughly 98 bits for a truly random password of comparable length.
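The entropy gap can be made concrete with the standard formula for a uniformly random password: each character drawn uniformly from an alphabet of N symbols contributes log2(N) bits. A minimal sketch (the alphabet sizes below are illustrative assumptions, not figures from the research; note the formula only applies to uniform draws, so an LLM's effective entropy must be measured empirically and is typically even lower):

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    # Entropy of a password whose characters are drawn uniformly
    # and independently: each position adds log2(alphabet_size) bits.
    return length * math.log2(alphabet_size)

# 16 characters drawn uniformly from a ~70-symbol alphabet (hypothetical)
# lands near the ~98 bits cited for a secure password.
print(round(entropy_bits(16, 70), 1))  # ≈ 98.1

# A model that effectively reuses only a handful of symbols per position
# collapses toward the ~27-bit range, within reach of GPU cracking.
print(round(entropy_bits(16, 3), 1))   # ≈ 25.4
```

The key point is that entropy depends on the effective alphabet actually sampled, not on how complex the string looks.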
These findings point to a critical security flaw, as AI agents themselves may rely on LLMs for password creation, potentially exposing numerous apps and services to attack. Irregular emphasizes that the weakness is inherent to how LLMs generate text and cannot be fixed through prompting or adjustments, and advises against using them for password generation entirely.
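The standard alternative is to generate passwords with a cryptographically secure random number generator rather than a language model. A minimal sketch using Python's `secrets` module, which draws from the operating system's CSPRNG (the length and alphabet choices here are illustrative):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character uniformly at random from letters, digits,
    # and punctuation using the OS CSPRNG, not a language model.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. a 16-character random string
```

Because every position is sampled uniformly and independently, a 16-character password from this ~94-symbol alphabet carries roughly 105 bits of entropy, with no predictable starting characters or reused symbol sets.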