HSCC Guidance for Navigating AI Cybersecurity Risks in the Health Sector


Guidance Documents Highlight 5 Key Risk Areas and Best Practices for AI in Healthcare

The Health Sector Coordinating Council has previewed upcoming materials aimed at helping the healthcare sector address the cyber risks associated with AI implementation. (Image: HSCC)

The healthcare sector faces mounting cybersecurity challenges as it integrates artificial intelligence (AI) technologies into its operations. In response, the Health Sector Coordinating Council (HSCC) is preparing a set of guidance documents to help healthcare organizations navigate AI-related security risks. The documents focus on five distinct categories of cyber risk associated with AI deployments.

On November 12, 2025, the HSCC unveiled a “preview” of the best practices, with the full white paper set for publication in early 2026. The guidance covers clinical, administrative, and financial applications of AI in healthcare, with an emphasis on establishing a robust framework for the multifaceted cybersecurity challenges these technologies pose.

The guidance highlights five areas:

- education and enablement to improve understanding of AI risks;
- cyber operations and defense strategies to prepare for AI-related incidents;
- governance frameworks tailored to healthcare organizations of various sizes;
- ‘secure by design’ principles for AI-enabled medical devices; and
- third-party AI risk management to strengthen supply chain resilience and oversight.

Industry experts say the forthcoming HSCC documents will address pressing security concerns around AI in healthcare. Alisa Chestler of Baker Donelson emphasized the need for guidance on AI-related security issues, particularly in medical devices, citing data risks such as model manipulation and data poisoning.

Experts also stress that governance and cyber defense operations must be prioritized to manage AI’s implications in healthcare. Skip Sorrels of Claroty noted that understanding each AI tool’s capabilities is essential both for protecting it and for operating it efficiently. He added that current consent models are insufficient for adaptive AI systems, and that dynamic informed consent processes are needed so patients adequately understand how their data is used.

The HSCC’s guidance materials are the product of its AI task group, whose membership includes leaders from 115 healthcare organizations. The group’s approach recognizes the variety of risks AI poses across clinical and administrative functions while accounting for interdependencies among these areas.

In summary, the HSCC’s forthcoming guidance offers healthcare organizations a roadmap for navigating AI integration, promoting awareness and proactive risk management while fostering a culture of cybersecurity resilience.
