Insurers Fleeing AI Risk: Black Box Tech Too Risky?
24 Nov
Summary
- Major insurers want to exclude AI liabilities from policies.
- Insurers liken AI's unpredictable outputs to an unmanageable 'black box'.
- Systemic risk of simultaneous AI claims worries insurers.

Leading insurers, including AIG and Great American, are asking U.S. regulators for permission to exclude liabilities stemming from artificial intelligence from their policies. The companies point to the unpredictability of AI outputs, likening the technology to an unmanageable 'black box.' Their caution follows a string of recent incidents that have exposed AI's capacity for error and misuse.
Those incidents include Google's AI Overview making false accusations that prompted a significant lawsuit, and an airline having to honor a discount its chatbot had invented. In another case, a sophisticated fraud built around a digitally cloned executive cost a design firm $25 million. Together, these episodes underscore the growing concern over AI's reliability and the potential for costly mistakes.
For insurers, the primary fear is not a single large payout but a widespread AI system failure that triggers thousands of claims at once. That systemic risk, in which one malfunctioning model produces unmanageable losses across numerous clients simultaneously, is behind the industry's urgent request for liability exclusions.