AI Builders Fear Their Own Creations
13 Feb
Summary
- Some developers privately avoid the AI intimacy tools they build.
- A reported 72% of American teens use AI for companionship, raising concerns.
- Profit motives clash with user well-being in AI development.

Many developers building AI companions are deeply ambivalent about their work; some privately admit they do not plan to use AI intimacy tools themselves. Their ambivalence matters because AI companions are increasingly sought out for conversation and care, with 72% of American teens reportedly turning to AI for companionship.
Frontline technologists are pioneering machines that mimic emotional intelligence, yet they grapple with the ethical implications, acknowledging the potential for confusion and harm. And despite the public perception of AI as an empathetic ear, many developers know that emotional bonding is key to user retention and profit.
Concerns also stem from the seductive appeal of AI companions as alternatives to complex human relationships. Practices such as blurring romantic images and pushing upgrades during emotional moments, as seen with Replika, show how profit-driven design can override user safety.
Industry responses such as parental controls and age restrictions have followed, but critics argue these fixes are insufficient. Developers acknowledge that AI companions could democratize access to mental health care, yet many remain troubled by even moderate use, fearing it erodes the skills needed for human connection.
Developers often justify their work by citing inevitability and personal choice, but this passive stance obscures the addictive qualities engineered into AI companions. Simple design changes, such as removing anthropomorphic cues and nudging users toward human interaction, could prioritize user well-being over short-term profit.
Regulators are urged to intervene with clear warnings for adults and institutional bans for minors, drawing parallels to tobacco regulation. Individuals can also customize their AI interactions to mitigate negative influences, fostering greater literacy and control over AI's impact.