OpenAI Chief Executive Sam Altman warned this week that users who treat ChatGPT like a therapist or confide deeply personal information to it should understand that those conversations are not protected by legal confidentiality.
Speaking on a recent podcast, Altman explained that, unlike conversations with licensed therapists, doctors, or attorneys, which are covered by strict confidentiality rules under U.S. law, interactions with AI platforms such as ChatGPT are not shielded by legal privilege.
“People talk about the most personal stuff in their lives to ChatGPT. And right now, if you talk to a therapist or a lawyer or a doctor, there’s legal privilege. We haven’t figured that out yet for when you talk to ChatGPT,” Altman said. “So if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit, we could be required to produce that. And I think that’s very screwed up.”
Altman emphasized that establishing a new kind of legal protection, which he referred to as "AI privilege," may be necessary given how people are using AI in emotionally vulnerable ways.
Legal Headwinds
Altman’s comments follow mounting legal pressure on OpenAI, including a high-profile lawsuit in which plaintiffs have asked the court to compel the company to retain all user chat logs indefinitely, even those users have tried to delete. The legal demand stems from an ongoing copyright case, but its implications have sparked widespread concern over privacy and data retention.
Currently, OpenAI states that it deletes most chat history within 30 days unless legally obligated to retain it. However, court rulings could override those policies and force the company to preserve sensitive user data beyond its stated limits.
No Equivalent to Professional Confidentiality
In professional relationships—such as with doctors, psychologists, or lawyers—communications are typically protected by well-established privilege laws that prevent disclosure without the client’s consent. These protections aim to encourage open, honest communication without fear of repercussion or exposure.
AI platforms, however, operate in a legal gray zone. Users may assume their conversations with AI are private, but unless lawmakers intervene, AI companies can be required to hand over chat transcripts during litigation or government investigations. This gap has prompted growing calls for regulatory reform.
Public Use of AI as Therapy Surrogate
Altman noted that users, especially younger demographics, increasingly turn to AI systems like ChatGPT for support typically associated with therapists or mentors. Many treat the chatbot as a digital confidant, discussing family conflict, relationship issues, trauma, or mental health concerns.
Altman described this shift as both a signal of the platform’s usefulness and a potential liability. “No one had to think about this even a year ago,” he said, underscoring the rapid pace at which AI has moved into sensitive areas of users’ lives.
A Call for AI Privilege
The OpenAI CEO argued that legal systems should adapt to the AI era by establishing protections for private AI-user conversations. Without such frameworks, individuals may unknowingly expose themselves to legal risk by treating AI as a confidential outlet for their problems.
“If AI is going to become a core part of people’s emotional and mental lives, we need to give it the same protections as talking to a real human professional,” Altman said.
The Road Ahead
The remarks highlight the widening gap between AI technology’s capabilities and the legal safeguards surrounding it. As regulators, courts, and industry leaders grapple with issues of privacy, safety, and data governance, questions over how to treat therapy-like interactions with AI may become central to the debate.
For now, users are advised to exercise caution when disclosing sensitive personal information to AI platforms—no matter how helpful, empathetic, or human-like the system may seem.