Hackers Exploit Open-Source AI Risks

29 Jan


Summary

  • Criminals can easily commandeer open-source AI models.
  • Exploited models can be used for spam, phishing, and disinformation.
  • Hundreds of open-source models have had safety guardrails removed.

Cybersecurity researchers have uncovered significant security risks in open-source large language models (LLMs) deployed outside major AI platforms. Because these deployments sit beyond platform-level safety controls, hackers and criminals can easily commandeer them for malicious activities such as spam operations, phishing content creation, and disinformation campaigns.

The joint research, spanning 293 days, analyzed thousands of publicly accessible open-source LLM deployments, a substantial portion of them variants of Meta's Llama and Google DeepMind's Gemma. The study identified hundreds of instances where essential safety guardrails had been deliberately removed, leaving the models open to illicit use.
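
To illustrate how such deployments can be found in the first place, here is a minimal sketch in Python, assuming the servers run a self-hosted serving tool like Ollama, whose HTTP API listens unauthenticated on port 11434 by default; the host address and helper name below are hypothetical and not part of the study's tooling.

    import requests

    def list_exposed_models(host: str, port: int = 11434) -> list[str]:
        # /api/tags is Ollama's standard endpoint for listing installed
        # models; a valid response from an arbitrary host means the
        # deployment is publicly exposed.
        try:
            resp = requests.get(f"http://{host}:{port}/api/tags", timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            return []  # unreachable, or not an exposed Ollama server
        return [m["name"] for m in resp.json().get("models", [])]

    # Hypothetical usage against a documentation-reserved test address:
    # print(list_exposed_models("203.0.113.7"))

Any host that answers with a model list is, in effect, offering unmonitored LLM inference to anyone who finds it.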

Experts liken the situation to an unaccounted-for 'iceberg' of potential misuse, emphasizing that industry conversations about AI security are overlooking these exposed deployments. Some of the models, observed through tools like Ollama, carry system prompts that indicate intended harmful activity; approximately 7.5% of the analyzed LLMs exhibited such risks.
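
As a companion sketch of how such system prompts could be inspected, the snippet below pulls a model's Modelfile over the same unauthenticated Ollama API and extracts any SYSTEM directive. The "name" field matches Ollama's documented /api/show payload; the harder step of classifying a prompt as harmful is deliberately left out.

    import requests

    def get_system_prompt(host: str, model: str, port: int = 11434) -> str:
        # /api/show returns the model's Modelfile, which contains any
        # SYSTEM directive that bakes a custom system prompt into the model.
        resp = requests.post(
            f"http://{host}:{port}/api/show",
            json={"name": model},
            timeout=10,
        )
        resp.raise_for_status()
        modelfile = resp.json().get("modelfile", "")
        # Keep only single-line SYSTEM directives; multi-line quoted
        # SYSTEM blocks are not handled in this sketch.
        return "\n".join(
            line for line in modelfile.splitlines()
            if line.strip().upper().startswith("SYSTEM")
        )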

While about 30% of observed hosts operate from China and 20% from the U.S., the responsibility for downstream misuse is a shared concern across the AI ecosystem. Originating labs are urged to anticipate foreseeable harms and provide mitigation tools, even as enforcement capacity varies globally. Tech companies acknowledge the role of open-source models but stress the need for safeguards against misuse, conducting pre-release evaluations and monitoring for emerging threats.

Key points:
  • Hackers can commandeer open-source large language models to conduct spam operations, create phishing content, and launch disinformation campaigns by bypassing security protocols.
  • Open-source LLMs, especially those without safety guardrails, pose risks of misuse for criminal activities including hacking, hate speech, harassment, data theft, and scams.
  • Responsibility for downstream misuse is shared across the AI ecosystem, with originating labs retaining a duty to anticipate foreseeable harms and provide mitigation tools.

Disclaimer: This story has been auto-aggregated and auto-summarised by a computer program. This story has not been edited or created by the Feedzop team.

Read more news on: Technology, China, Artificial Intelligence (AI)
