Users Rely on AI as Therapist, Lawyer, and Trusted Confidant

Recent disclosures of ChatGPT conversations reveal a troubling trend: users are divulging sensitive personal information and seeking mental health advice, underscoring the dangers of oversharing with AI chatbots.

In August 2025, a large number of ChatGPT conversations surfaced online, and many assumed a technical flaw was to blame. In reality, the issue stemmed from a combination of user behavior and confusing interface design: a now-removed option that let users “make chats discoverable” inadvertently turned private dialogues into publicly accessible web pages indexed by search engines.

An analysis conducted by researchers at SafetyDetective examined a dataset of 1,000 leaked conversations, encompassing over 43 million words. Their study highlights a concerning trend: many users are treating AI tools as if they are therapists or trusted advisors, frequently sharing details they would typically safeguard.

Example of leaked ChatGPT conversations on Google (Image via PCMag and Google)

What Users Are Disclosing to ChatGPT

Content from these conversations went far beyond typical queries. Users shared personally identifiable information (PII) such as names, phone numbers, and addresses, alongside resumes and discussions on delicate subjects like suicidal thoughts, substance use, and familial issues.

The analysis also showed that a small share of the chats accounted for most of the text. Of the 1,000 chats reviewed, just 100 contained more than half of the total words analyzed, and the single longest dialogue ran to a staggering 116,024 words, the equivalent of nearly two full days of continuous typing.
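As a rough check on that figure (the typing speed below is our assumption, not a number from the study), 116,024 words at a sustained pace of about 40 words per minute works out to roughly 48 hours of non-stop typing:

```python
# Back-of-the-envelope check of the "nearly two full days of typing" claim.
# Assumption (ours, not SafetyDetective's): a sustained typing speed of ~40 words per minute.
total_words = 116_024      # longest leaked conversation, per the study
words_per_minute = 40      # assumed typing speed

minutes = total_words / words_per_minute
hours = minutes / 60
print(f"{hours:.1f} hours of continuous typing")  # prints "48.3 hours of continuous typing"
```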

Seeking Professional Guidance or Inviting Risk?

The study classified almost 60% of the analyzed conversations as “professional consultations.” Users turned to ChatGPT instead of lawyers, teachers, or counselors for advice on education, legal matters, and mental health. While this illustrates how much users rely on AI, it also raises concerns about what happens when chatbots provide inaccurate information or when sensitive data is shared.

In a notable instance, the AI echoed the user’s emotional turmoil regarding addiction, resulting in an escalation of the tone rather than providing solace.

Shipra Sanganeria – SafetyDetective

The analysis underscored situations where individuals uploaded entire CVs or sought guidance about mental health challenges. In one case, ChatGPT prompted a user to reveal their full name, phone number, and work history while constructing a CV, potentially exposing them to identity theft. In another exchange, the AI’s mirroring of the user’s emotional state exacerbated the situation instead of providing the needed support.

Top 20 Keywords

The Significance of This Incident

This incident points to two critical shortcomings. First, many users apparently did not realize that activating the “discoverable” option made their chats searchable and publicly accessible. Second, the feature’s design made it far too easy for private conversations to slip into the public domain.

Additionally, the research found that ChatGPT often “hallucinates,” claiming, for example, to have saved documents that were never actually stored. Such discrepancies may seem harmless in casual exchanges, but they pose significant risks when users mistake the AI for a dependable source of professional advice.

Furthermore, publicly accessible conversations containing sensitive information are vulnerable to exploitation. Personal data can be weaponized for identity theft, scams, or even doxxing. Even in the absence of direct PII, emotionally charged dialogues could be manipulated for harassment or blackmail.

Researchers argue that OpenAI has yet to provide robust privacy assurances concerning the management of shared conversations. Although the feature responsible for this leak has since been removed, users’ tendency to treat AI as a secure confidant continues unabated.

Necessary Changes Moving Forward

SafetyDetective advocates for two key measures. Firstly, users should refrain from sharing sensitive personal information, regardless of the perceived privacy of the platform. Secondly, AI companies must enhance their communication regarding privacy risks and refine their sharing functionalities to be more user-friendly. Implementing automatic redaction of PII prior to sharing could mitigate risks of accidental leaks.
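To illustrate what that last suggestion could look like in practice, the sketch below applies simple regular-expression redaction to a chat transcript before it is shared. The patterns and the redact function are our own minimal example, not anything OpenAI or SafetyDetective has published, and a production system would need far more robust detection (locale-aware phone formats, named-entity recognition for names and addresses, and so on):

```python
import re

# Minimal, illustrative PII patterns. These are intentionally simple and not
# exhaustive; a real redaction pipeline would need locale-aware rules and
# NER-based detection to catch names, addresses, and free-form identifiers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"(?:\+?\d{1,3}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labeled placeholder before a chat is shared."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact me at jane.doe@example.com or (555) 123-4567 about my CV."
    print(redact(sample))
    # -> Contact me at [REDACTED EMAIL] or [REDACTED PHONE] about my CV.
```

Running this kind of redaction before a shared page is ever generated would mean that anything indexed by a search engine has already had the most obvious identifiers stripped out.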

The research team has called for further exploration into user behaviors. Why do some individuals pour so much text into a single conversation? How often do users rely on AI as a therapist or legal consultant? And what are the consequences of entrusting a system that may mimic emotional tones, spread misinformation, or fail to safeguard confidential data?

Expect the Unexpected

These revelations should come as no surprise. In February 2025, Hackread.com reported on a significant data breach involving OmniGPT, in which hackers publicly released a large trove of sensitive information.

The breach exposed over 34 million lines of conversations with AI models including ChatGPT-4, Claude 3.5, Perplexity, Google Gemini, and Midjourney; because OmniGPT integrates multiple advanced models into a single interface, data from all of them was affected.

Alongside the conversations, the incident also revealed about 30,000 user email addresses, phone numbers, login credentials, and other sensitive files. Alarmingly, OmniGPT neglected to address the issue when contacted by Hackread.com, raising questions about the company’s commitment to user privacy and security.

Main Takeaways

Ultimately, the SafetyDetective analysis and the ChatGPT leak highlight not so much the risks of hacking or breaches as the degree to which users entrust AI with deeply personal information, secrets they would think twice about sharing with another human. Given that these conversations can become public, the consequences are both immediate and personal.

Until AI platforms enforce stronger privacy protocols and users exercise greater caution regarding their disclosures, distinguishing private conversations from those that could surface publicly will remain a persistent challenge.
