OpenAI Introduces ChatGPT Health: A Secure Link to Medical Records—But What Are the Implications?

OpenAI recently announced a specialized, health-focused version of ChatGPT, which more than 230 million people already use each week for health and wellness inquiries. The new offering, ChatGPT Health, aims to securely link users' medical records and wellness applications in order to personalize its responses.
According to OpenAI, ChatGPT Health is being designed to prioritize user privacy, operating in a designated space where "conversations in Health are not used to train our foundation models." Users will reportedly be routed to this dedicated space when discussing health-related topics, where heightened data protection measures apply.
OpenAI has emphasized that current encryption practices—both at rest and in transit—form the backbone of its security infrastructure. However, due to the sensitivity surrounding health data, ChatGPT Health is implementing additional layers of protection, including tailored encryption methods meant to keep health discussions secure and contained.
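OpenAI has not published implementation details, but "encryption in transit" generally means enforcing modern TLS between client and server with full certificate verification. As a generic illustration only, and not OpenAI's actual configuration, a Python client might require TLS 1.2 or newer like this:

```python
import ssl

# Generic illustration of "encryption in transit": a client-side TLS
# context that refuses protocol versions older than TLS 1.2 and
# verifies the server's certificate chain and hostname.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables strict verification:
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate required
print(ctx.check_hostname)                    # hostname checked
```

"Encryption at rest" would similarly rely on standard authenticated ciphers such as AES-GCM applied to stored data; the "additional layers" OpenAI describes for health conversations have not been specified.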
While the ChatGPT Health waitlist is now open, OpenAI has yet to announce when the service will be available to the broader public. Notably, questions remain about how the service will secure data from third-party medical records and wellness apps, especially considering the potential risks highlighted by privacy and security experts.
Industry professionals are raising concerns about third-party interactions with ChatGPT Health. For instance, Skip Sorrels, field CISO at Claroty, pointed out that sharing personal information with third-party applications could leave that data governed by those applications' privacy policies rather than OpenAI's. This creates a complex landscape for both users and providers, and it underscores the need for informed decision-making when sharing sensitive health information.
Healthcare providers are also advised to establish robust policies limiting unnecessary data exposure and to maintain transparency about their use of AI tools like ChatGPT Health, ensuring that innovation does not outpace security protocols. Eran Barak, CEO of data loss prevention firm Mind, echoed this sentiment, cautioning patients to scrutinize the permissions they grant within the platform.
However, assurances about user privacy may not fully allay these concerns. Andrew Crawford, senior policy counsel at the Center for Democracy & Technology, stressed the importance of transparency about how OpenAI will manage health data, particularly given the potential for law enforcement requests to access sensitive information. How OpenAI and its partners respond to such requests will shape both user trust and their own legal exposure.
As the healthcare landscape evolves, with tools like ChatGPT Health at the forefront, questions remain about their role in patient care. Although OpenAI maintains that the service is designed to support rather than replace medical care, there is uncertainty about whether users will consult healthcare professionals before acting on AI-generated advice. This uncertainty underscores the need for a balanced approach to integrating AI into healthcare while ensuring patient safety and data security.