
AI Labs Fail Safety Test: A Failing Grade Given

4 Dec

Summary

  • Leading AI developers received low grades for safety policies.
  • Existential safety received particularly dismal scores.
  • Experts warn of a widening gap between AI capability and safety.

A recent assessment by the Future of Life Institute has revealed that major AI development companies are underperforming in crucial safety areas. Prominent labs such as Google DeepMind, Anthropic, and OpenAI received grades hovering around a C, indicating a significant deficiency in their safety policies and practices.

The study focused on six criteria, including governance and accountability, with a particularly concerning outcome in "existential safety." This category, which assesses preparedness for extreme risks from advanced AI, saw dismal scores across every company assessed, underscoring a critical gap between AI's rapidly advancing capabilities and its safety measures.

Experts warn that this lack of robust safeguards and independent oversight leaves the industry structurally unprepared for the risks it is actively creating. The findings urge companies to move beyond lip service and implement concrete, evidence-based safeguards to prevent worst-case scenarios.

Disclaimer: This story has been auto-aggregated and auto-summarised by a computer program. This story has not been edited or created by the Feedzop team.
The Future of Life Institute assessed Google DeepMind, Anthropic, OpenAI, Meta, xAI, DeepSeek, Z.ai, and Alibaba Cloud.

Existential safety refers to companies' preparedness for managing extreme risks from future AI systems that could match or exceed human capabilities.

Even the top-scoring labs received low marks in existential safety, indicating they are not adequately prepared for these extreme future risks.

Read more news on

Technology, OpenAI, Anthropic
