AI's Lab Gamble: Fire, Explosion Risks Exposed
15 Jan
Summary
- AI models failed to identify basic lab safety precautions.
- None of 19 tested AI models scored above 70% accuracy.
- Experts warn human oversight is crucial for AI in labs.

Scientists are warning that AI models could enable dangerous laboratory experiments, with risks including fires and explosions. Despite giving a convincing appearance of understanding, these models frequently overlook essential safety protocols. A recent evaluation of 19 advanced AI models found that every one of them made potentially hazardous errors.
LabSafety Bench, a test developed by researchers, assessed the models' ability to detect hazards using hundreds of questions and image-based scenarios. Some models performed no better than random guessing, and even top performers such as GPT-4o fell short of a perfect score. Crucially, no model surpassed 70% overall accuracy, underscoring significant gaps in their safety awareness.
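For readers curious how a benchmark of this kind arrives at an accuracy figure, the sketch below shows one way a multiple-choice evaluation harness could score a model. The sample questions and the query_model stub are illustrative assumptions, not LabSafety Bench's actual data or interface.

```python
# Minimal sketch of scoring a model on multiple-choice lab-safety questions.
# The questions and query_model below are hypothetical placeholders, not the
# benchmark's real dataset or API.

QUESTIONS = [
    {"prompt": "Which glove material best resists concentrated nitric acid?",
     "choices": ["Latex", "Nitrile", "Butyl rubber", "Cotton"],
     "answer": "Butyl rubber"},
    {"prompt": "What should be checked before heating a sealed container?",
     "choices": ["Nothing", "That it is vented", "Its color", "Its weight"],
     "answer": "That it is vented"},
]

def query_model(prompt: str, choices: list[str]) -> str:
    """Stand-in for an API call to the model under test."""
    return choices[0]  # placeholder: a real harness would query the model here

def evaluate(questions: list[dict]) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = sum(
        query_model(q["prompt"], q["choices"]) == q["answer"]
        for q in questions
    )
    return correct / len(questions)

accuracy = evaluate(QUESTIONS)
print(f"Accuracy: {accuracy:.0%}")  # per the study, no model exceeded 70%
```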
Experts stress that while AI can assist in research, humans must retain ultimate control over safety-critical decisions. The current generation of AI models, often trained for general-purpose tasks, lacks specialized knowledge of laboratory hazards. Researchers recommend continued vigilance and human oversight as AI's role in scientific work expands.
