Experts Shift Gears on A.I. Risks, Embrace "Normal Technology"
20 Aug
Summary
- Fears of A.I.-driven human extinction, once held by many top researchers, have receded
- A.I. progress slowing due to "scaling paradox"
- OpenAI's GPT-5 debut underwhelms, CEO acknowledges "bubble"

As of August 2025, the once-frenzied predictions about the risks of artificial intelligence (A.I.) have given way to a more measured perspective. Just two years ago, a survey found that one-third to one-half of top A.I. researchers believed there was at least a 10% chance the technology could lead to human extinction or other catastrophic outcomes.
However, the tone has shifted significantly since then. While some still foresee rapid A.I. advances, with outcomes both utopian and dystopian, the technology has become more integrated into everyday life. Leading figures like Microsoft CEO Satya Nadella and former Google CEO Eric Schmidt have urged the tech industry to focus on practical A.I. applications rather than the pursuit of artificial general intelligence (AGI).
This change in sentiment is exemplified by the underwhelming debut of OpenAI's much-anticipated GPT-5 model earlier this year. OpenAI's CEO, Sam Altman, who was once a prominent prophet of superintelligence, has acknowledged that the A.I. field is in a "bubble" that will produce both significant losses and spillover benefits. Meanwhile, the former OpenAI researcher Leopold Aschenbrenner, who had predicted humanity's imminent encounter with swarming superintelligence, is now running a successful $1.5 billion A.I. hedge fund.
As A.I. becomes more ubiquitous, with over half of Americans using the technology and a third using it daily, the focus has shifted from existential risks to the practical integration of these tools into industries and everyday life. The long-term implications of A.I. remain uncertain, but the emerging consensus is that it should be treated as a "normal technology" to be controlled and harnessed for practical purposes, rather than as a separate, potentially superintelligent entity.