
AI Labs Fail Safety Test: A Failing Grade Given

4 Dec

Summary

  • Leading AI developers received low grades for safety policies.
  • Existential safety received particularly dismal scores.
  • Experts warn of a widening gap between AI capability and safety.

A recent assessment by the Future of Life Institute has revealed that major AI development companies are underperforming in crucial safety areas. Prominent labs such as Google DeepMind, Anthropic, and OpenAI received grades hovering around a C, indicating a significant deficiency in their safety policies and practices.

The study scored companies on six criteria, including governance and accountability, with a particularly concerning outcome in "existential safety." This category, which assesses preparedness for extreme risks from advanced AI, drew dismal scores across the board, underscoring a critical gap between AI's rapidly advancing capabilities and its safety measures.

Experts warn that this lack of robust safeguards and independent oversight leaves the industry structurally unprepared for the risks it is actively creating. The findings urge companies to move beyond lip service and implement concrete, evidence-based safeguards to prevent worst-case scenarios.

Disclaimer: This story has been auto-aggregated and auto-summarised by a computer program. This story has not been edited or created by the Feedzop team.

The Future of Life Institute assessed Google DeepMind, Anthropic, OpenAI, Meta, xAI, DeepSeek, Z.ai, and Alibaba Cloud. Existential safety refers to companies' preparedness for managing extreme risks from future AI systems that could match or exceed human capabilities. Even the top-rated labs scored low on this measure, indicating they are not adequately prepared for such risks.
