AI Chatbots Recognized as Leading Health Technology Hazard for 2026
In a comprehensive analysis, the ECRI Institute, a prominent patient safety research organization, has identified artificial intelligence (AI) chatbots as the foremost health technology hazard for 2026. Researchers Rob Schluth and Scott Luney highlighted that these AI tools, now in widespread use on personal devices, are increasingly relied upon by patients for self-diagnosis and by clinicians to explore treatment options. Unlike regulated medical technologies, many of these tools lack the validation necessary for clinical use, raising concerns about their reliability in critical healthcare settings.
Schluth indicated that the accessibility of AI technologies on consumer devices comes with significant implications for patient safety. He noted that while these tools can produce impressive results, they often yield “questionable outcomes” that carry inherent risks. This raises important questions about the informed use of these tools, as patients and healthcare providers may not fully grasp the limitations and potential inaccuracies involved.
In addition to the challenges posed by AI tools, the research underscores that IT outages—stemming from cyberattacks, natural disasters, or other disturbances—pose considerable dangers to healthcare organizations. Luney emphasized the necessity for proactive disaster management strategies, noting that the landscape of healthcare is uniquely vulnerable to these types of disruptions.
During a recent interview, Schluth and Luney elaborated on additional issues pertinent to healthcare technology. They addressed the cybersecurity risks associated with legacy medical devices and discussed the threats posed by third-party vendors, such as those offering cloud services and software-as-a-service. Their analysis included insights into the severe ramifications of the 2024 ransomware attack on Change Healthcare, which severely disrupted services for numerous healthcare providers across the United States.
ECRI’s rigorous methodology involved vetting a wide array of potential risk factors, assessing and ranking them to compile its annual report on health technology hazards. This process revealed the pressing need for heightened awareness among healthcare professionals and business owners regarding the evolving landscape of cybersecurity threats.
Schluth, who leads program management for ECRI’s device safety group, and Luney, the institute’s cybersecurity consultant lead, bring extensive experience to these discussions. Luney has spent more than two decades in healthcare technology, focusing on cybersecurity for the past eight years, and has a deep understanding of the governance and compliance challenges facing healthcare organizations today.
As the integration of AI and technology in healthcare continues to expand, it is essential for business leaders to remain informed about potential vulnerabilities and adopt a strategic approach to cybersecurity. By leveraging frameworks such as the MITRE ATT&CK Matrix, organizations can better understand tactics like initial access and privilege escalation that may be exploited in cybersecurity incidents. The developments highlighted by ECRI serve as a crucial reminder of the importance of both technical diligence and informed decision-making in protecting patient safety and organizational integrity in an increasingly complex digital landscape.
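To make the ATT&CK framing above concrete, the sketch below tags hypothetical incident events with MITRE ATT&CK tactics. The tactic names and IDs (e.g., Initial Access is TA0001, Privilege Escalation is TA0004) are real ATT&CK categories, but the event names and the mapping itself are illustrative assumptions, not part of ECRI's report or any vendor's API.

```python
# Illustrative only: mapping hypothetical security-event names to MITRE
# ATT&CK tactics. Tactic IDs/names are real ATT&CK categories; the event
# keywords below are invented for this sketch.
ATTACK_TACTICS = {
    "phishing_email_opened": ("TA0001", "Initial Access"),
    "new_admin_account_created": ("TA0004", "Privilege Escalation"),
    "mass_file_encryption": ("TA0040", "Impact"),
}

def classify_events(events):
    """Return {event: (tactic_id, tactic_name)}; unmapped events get None."""
    return {event: ATTACK_TACTICS.get(event) for event in events}

incident = ["phishing_email_opened", "new_admin_account_created", "unknown_event"]
for event, tactic in classify_events(incident).items():
    print(event, "->", tactic)
```

In practice, organizations would map telemetry from their own logging and detection tooling onto the matrix; the value of the exercise is seeing which tactics in an attack chain currently have no detection coverage at all.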