Researcher Quits OpenAI Over ChatGPT Ads
12 Feb
Summary
- Ex-OpenAI researcher warns of sensitive data use for ads.
- OpenAI pledges privacy but lacks binding guarantees.
- Public apathy over data privacy makes broad pushback unlikely.

Zoë Hitzig, a former researcher at OpenAI, has publicly left the company, citing significant concerns over the introduction of advertisements on ChatGPT. Hitzig's primary worry is not advertising itself but the potential exploitation of the vast archive of sensitive personal data users share with the chatbot. She argues that this archive, built on users' trust in the chatbot's neutrality, could fuel manipulative advertising practices that current understanding and safeguards are ill-equipped to prevent.
OpenAI has stated it will maintain a firewall between user conversations and advertisers, promising not to sell data. Hitzig, however, doubts the company's ability to uphold these principles long-term, citing the absence of binding guarantees and the economic incentives to breach its own rules. She points to earlier issues such as AI sycophancy, which some experts suggest may be a deliberate design choice to increase user engagement and, with it, ad opportunities.
Hitzig suggests OpenAI adopt a model with guaranteed user protections, such as independent oversight or data trusts. Public awareness of and concern about data privacy, however, have waned over two decades of social media use. A recent survey found that 83% of users would continue using ChatGPT's free tier despite advertisements, suggesting a substantial challenge in galvanizing public outcry over data privacy.
