Outrage Algorithm: Social Media's Harmful Secret
16 Mar
Summary
- Companies allowed more harmful content after research showed outrage fueled engagement.
- Internal research revealed algorithms prioritized user attention over safety concerns.
- Decisions were made to avoid regulation and maintain political relationships.

Social media companies intentionally amplified harmful content in user feeds, according to recent whistleblower accounts. Internal research showed that engagement-maximizing algorithms promoted outrage because it held user attention, and the resulting business decisions put user safety at risk on issues including violence, sexual blackmail, and terrorism.
An engineer at Meta reportedly received directives to allow more "borderline" harmful material, such as misogyny and conspiracy theories, in order to compete with platforms like TikTok; the push reportedly followed a decline in the company's stock price. Similarly, a TikTok employee described internal dashboards showing that cases involving politicians were prioritized over reports of harmful content featuring children, allegedly to maintain political relationships and stave off regulation.
These revelations emerged in the wake of TikTok's rapid growth and the success of its highly engaging algorithm. A former Meta researcher said that Instagram Reels, launched to counter TikTok, shipped with insufficient safeguards. Internal studies shared with the BBC showed that comments on Reels contained higher rates of bullying, harassment, hate speech, and violence than other parts of Instagram.