AI's 'Hallucinations' Now Haunt Self-Driving Cars
1 May
Summary
- Self-driving cars can exhibit unexpected 'hallucinations,' similar to AI language models.
- Hacking autonomous vehicles is theoretically possible but technically challenging.
- Softer targets like power grids pose greater security risks than hard targets like Waymo.

Self-driving car technology, while advancing, is prone to unexpected and sometimes dangerous malfunctions. Much as AI language models like ChatGPT 'hallucinate,' autonomous vehicles can make inexplicable errors. Videos circulating online show vehicles suddenly swerving off roads or into traffic without apparent reason, highlighting that the technology is not yet perfect.
The security of these vehicles against hacking is also a significant concern. While it is theoretically possible for advanced AI to be used to compromise systems like Waymo's, successfully hacking individual cars would require highly sophisticated methods to confuse their environmental sensors. Experts suggest that such an endeavor would be extremely difficult, labeling systems like Waymo 'hard targets' because of their robust cybersecurity measures.
In the broader landscape of cybersecurity threats, softer targets such as national power grids and utilities are considered more vulnerable and more accessible to malicious actors seeking to cause widespread disruption. Widespread hacking of self-driving cars is therefore viewed as less likely than attacks on other critical infrastructure. Popular culture also reflects these anxieties, with recent films featuring self-driving cars as weapons, tapping into primal fears of losing personal control.