Sam Altman Warns: Conversations with ChatGPT Aren’t Legally Private Like Therapy or Law Consultations
OpenAI CEO Sam Altman cautions that ChatGPT conversations lack legal protections, unlike therapist or attorney interactions, calling for urgent AI privacy regulations.

In a candid moment on Theo Von’s podcast, OpenAI CEO Sam Altman issued a stark warning about the legal privacy risks of AI interactions. As ChatGPT becomes increasingly popular, particularly among young people seeking personal advice, Altman reminded listeners that conversations with the AI lack the legal confidentiality that comes with licensed professionals.
“It’s very screwed up,” Altman said. “People think they’re talking to a therapist but they’re not protected.”
No Attorney-Client or Doctor-Patient Privilege
While interactions with therapists, doctors, or lawyers are legally safeguarded under strict confidentiality laws, there is no such legal framework governing AI conversations.
This means that, in a legal case, OpenAI could be compelled to hand over ChatGPT transcripts if they are requested by authorities or subpoenaed, something many users may be unaware of.
A Growing Trend Among Gen Z
Altman observed that an increasing number of users, especially Gen Z and Millennials, are treating ChatGPT like a digital therapist or life coach, seeking help on sensitive topics like mental health, identity, trauma, and relationships.
He emphasized that while the model may offer helpful responses, the privacy and legal context is fundamentally different.
Call for Regulation
Altman described the current state of affairs as “deeply flawed,” and called for the introduction of AI-specific privacy regulations that mirror those protecting human professionals. Without these rules, users remain vulnerable to data exposure, legal complications, or third-party access to their most intimate queries.
What This Means for Users
- Don’t assume your ChatGPT conversations are private in the legal sense.
- Avoid sharing identifiable or deeply sensitive personal information unless the AI service explicitly states end-to-end encryption and legal non-disclosure policies.
- Until regulation catches up, treat AI conversations as potentially disclosable under legal compulsion, even if they feel private.
TheHubWeb will continue tracking the implications of AI on privacy, legal standards, and digital ethics. As generative tools like ChatGPT become more integrated into personal and professional life, regulatory clarity will be critical to ensure users know exactly what’s protected and what’s not.