Chatbot Chats: Are They Private Like Doctor Visits?
11 Apr
Summary
- Sam Altman advocates chatbot privacy protections akin to those for legal and medical professionals.
- Court cases are exploring attorney-client privilege for AI-generated work.
- Health-focused AI tools are emerging without full HIPAA compliance.

Sam Altman, CEO of OpenAI, advocates for greater privacy in chatbot conversations, likening them to discussions with legal or medical professionals. This push is partly motivated by ongoing legal challenges where companies are compelled to produce user chat logs as evidence. The question of whether AI-generated communications should receive legal privilege, such as attorney-client privilege, is currently being debated in federal courts, with recent rulings offering divergent perspectives.
One federal judge ruled that AI-generated legal strategy documents were not protected by attorney-client privilege, citing the AI company's privacy policy. Conversely, another judge determined that AI output could qualify as 'attorney-client work product' when used as a tool by legal counsel. These cases represent some of the earliest judicial interpretations of how privilege doctrines apply to artificial intelligence.
In parallel, the healthcare sector is seeing a surge in AI adoption. OpenAI launched ChatGPT Health, encouraging users to share medical histories, though this data is not protected under HIPAA. Other major tech companies are also releasing health-oriented AI companions, some HIPAA-compliant and some not, highlighting a growing trend in AI's application to health and wellness.
The expansion of AI in healthcare raises significant privacy concerns, especially with products not adhering to regulations like HIPAA. Legal experts warn against creating loopholes that could shield AI companies from accountability, emphasizing the need to balance user privacy with corporate responsibility. The evolving legal landscape underscores the complexity of integrating AI into sensitive areas like health and legal advice.