Robots Read Dials with AI Superpowers
16 Apr
Summary
- New AI model enhances robots' ability to read analog gauges.
- Robotic AI achieves 98% accuracy in instrument reading tasks.
- The AI model prioritizes safety and adherence to constraints.

Google DeepMind has unveiled its latest AI model, Gemini Robotics-ER 1.6, designed to significantly enhance robotic interaction with physical environments through 'embodied reasoning.' Announced on April 14, 2026, this model empowers robots, like Boston Dynamics' Spot, to accurately read analog instruments such as thermometers and pressure gauges.
The new model gives robots 'agentic vision,' combining visual reasoning with code execution for image manipulation. On instrument-reading tasks this lifts accuracy from 23% with the older 1.5 model to 98% with 1.6; even without agentic vision, the baseline 1.6 model still reaches 86%.
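To make the instrument-reading task concrete, here is a minimal sketch of the final step such a system needs regardless of how the needle is detected: mapping a needle angle to a dial value. Everything here is illustrative, not DeepMind's implementation; the `GaugeSpec` calibration and the example pressure gauge are assumptions.

```python
# Illustrative sketch (not DeepMind's method): converting a detected
# needle angle on an analog dial into a numeric reading.
from dataclasses import dataclass

@dataclass
class GaugeSpec:
    # Hypothetical calibration for one analog dial face.
    min_angle: float   # needle angle (degrees) at the minimum reading
    max_angle: float   # needle angle (degrees) at the maximum reading
    min_value: float   # reading shown at min_angle
    max_value: float   # reading shown at max_angle

def gauge_value(spec: GaugeSpec, needle_angle: float) -> float:
    """Linearly interpolate a dial reading from a detected needle angle."""
    frac = (needle_angle - spec.min_angle) / (spec.max_angle - spec.min_angle)
    frac = max(0.0, min(1.0, frac))  # clamp to the dial's face
    return spec.min_value + frac * (spec.max_value - spec.min_value)

# Example: a pressure gauge sweeping 270 degrees from 0 to 10 bar.
gauge = GaugeSpec(min_angle=-45.0, max_angle=225.0, min_value=0.0, max_value=10.0)
print(gauge_value(gauge, 90.0))  # needle at 90 degrees -> 5.0 bar
```

The hard part in practice is the perception upstream of this function, which is where the model's code-execution step (e.g., cropping and enhancing the image before re-reading it) would come in.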
Beyond reading instruments, Gemini Robotics-ER 1.6 also offers improved 'multi-view reasoning,' processing multiple camera streams for a fuller understanding of the environment. Google calls this its 'safest robotics model yet,' citing stronger adherence to physical safety constraints and better perception of risks to humans.
This advancement aims to enable robots to operate more effectively in complex, real-world environments, moving beyond repetitive factory tasks. The development holds promise for a future with more versatile and autonomous robotic workers in diverse industrial applications.