Understanding Data Privacy Risks in AI Implementations in Healthcare
As agentic artificial intelligence (AI) and other AI tools expand within the healthcare sector, healthcare providers and their vendors must understand the data privacy risks they face under the Health Insurance Portability and Accountability Act (HIPAA) and related regulations. Attorney Jordan Cohen of Akerman LLP stresses that this understanding is critical, especially when handling protected health information (PHI).
In an interview with Information Security Media Group, Cohen pointed out that exceeding permissible uses of data can result in reportable breaches if PHI is compromised. Organizations must therefore be vigilant about how they handle data as AI technologies evolve. He noted that many of the essential compliance practices for AI implementations are not unique to AI but are general best practices that have been emphasized for years.
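To make that point concrete, the sketch below shows one rough way an organization might gate an AI agent's access to PHI by stated purpose, refusing uses outside a permitted set unless the patient has authorized them. The purpose categories loosely mirror HIPAA's treatment, payment, and operations uses; the function and variable names are illustrative assumptions, not language from Cohen or a compliance implementation.

```python
# Hypothetical sketch: gate an AI agent's access to PHI by stated purpose.
# Purpose categories and names are illustrative, not a compliance implementation.

PERMITTED_PURPOSES = {"treatment", "payment", "healthcare_operations"}

def authorize_phi_access(requested_purpose: str, patient_authorization: bool = False) -> bool:
    """Allow access only for a permitted purpose or with explicit patient authorization."""
    if requested_purpose in PERMITTED_PURPOSES:
        return True
    # Anything outside the permitted set requires explicit patient authorization.
    return patient_authorization

# Example: an agent asked to use PHI for marketing without authorization is refused.
assert authorize_phi_access("treatment") is True
assert authorize_phi_access("marketing") is False
```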
A key consideration for organizations deploying agentic AI is maintaining a comprehensive data flow inventory. Cohen stressed the importance of diagramming and tracking how data is collected, processed, stored, and ultimately leaves their systems. How vendors interact with that data further complicates the picture, making clear oversight of these processes critical.
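One way to operationalize such an inventory is to keep it in machine-readable form so flows that send PHI outside the organization can be flagged automatically for review. The following Python sketch is a hypothetical illustration; the record fields, system names, and vendor are assumptions, not a prescribed format.

```python
# A minimal sketch of a machine-readable data flow inventory; one record per flow.
from dataclasses import dataclass

@dataclass
class DataFlow:
    name: str                         # short label, e.g., "EHR to prior-authorization agent"
    source: str                       # system where the data originates
    destination: str                  # system or vendor that receives it
    phi_elements: list[str]           # categories of PHI involved
    vendor: str | None = None         # third party handling the data, if any
    storage_location: str = ""        # where the data comes to rest
    leaves_environment: bool = False  # does PHI leave the organization's control?

inventory = [
    DataFlow(
        name="EHR to prior-authorization agent",
        source="EHR",
        destination="Agentic AI service",
        phi_elements=["name", "diagnosis codes", "insurance ID"],
        vendor="Example AI Vendor",
        storage_location="vendor cloud (BAA in place)",
        leaves_environment=True,
    ),
]

# Flag flows where PHI leaves the organization so they receive extra review.
for flow in (f for f in inventory if f.leaves_environment):
    print(f"Review: {flow.name} sends {', '.join(flow.phi_elements)} to {flow.vendor}")
```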
Cohen said these operational practices are paramount in the age of advanced AI systems. As organizations deploy AI tools with increasing frequency, the risks of data breaches and compliance failures only grow.
In the same audio interview, Cohen elaborated on several other pivotal aspects related to the deployment of AI in healthcare. He discussed prevalent uses of agentic AI within clinical and administrative functions, noting the various types of PHI and electronic health record data most commonly utilized. Moreover, he flagged several legal and regulatory considerations that organizations must navigate, including oversight from the U.S. Food and Drug Administration (FDA), the Federal Trade Commission (FTC), updates to the HIPAA Security Rule, and applicable state privacy laws.
Technical safeguards, incident response protocols, transparency, and patient consent are also critical areas of focus that Cohen underscored. He emphasized that addressing these elements not only protects patient data but also presents opportunities to enhance overall data privacy and security within the healthcare sector.
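As a rough illustration of one such technical safeguard, the sketch below records every AI agent read of PHI in an append-only audit file, so an incident response team could later reconstruct what was exposed and to whom. The function, file, and field names are hypothetical assumptions, not a standard API or Cohen's recommendation.

```python
# Illustrative sketch: append-only audit log of AI agent reads of PHI (JSON lines).
import json
import time
from pathlib import Path

AUDIT_LOG = Path("phi_access_audit.jsonl")

def log_phi_access(agent_id: str, patient_id: str, purpose: str, fields: list[str]) -> None:
    """Append one audit record per PHI access so incident response can reconstruct exposure."""
    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "patient_id": patient_id,
        "purpose": purpose,
        "fields_accessed": fields,
    }
    with AUDIT_LOG.open("a") as handle:
        handle.write(json.dumps(record) + "\n")

# Example: record an agent pulling demographics for appointment scheduling.
log_phi_access("scheduling-agent-01", "patient-123", "treatment", ["name", "dob", "phone"])
```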
Cohen, who leads the digital health practice at Akerman LLP, provides continuing legal counsel on transactions involving healthcare entities. His expertise spans critical areas of federal and state privacy and data security regulations, including adherence to HIPAA’s various rules and state breach notification laws. Additionally, he advises on compliance matters related to healthcare fraud and abuse regulations, including the Anti-Kickback Statute and the Stark Law.
As healthcare continues to adopt sophisticated AI technologies, understanding and mitigating the associated data privacy risks will be essential. Organizations must remain proactive in assessing their practices, ensuring compliance not only to protect patient data but also to foster trust within the industry. By doing so, they can effectively leverage AI to improve healthcare outcomes without compromising data security.
As the complexities of cybersecurity evolve, healthcare organizations must stay informed and prepared to adapt to the challenges posed by emerging technologies. A structured approach such as the MITRE ATT&CK framework can help identify the tactics and techniques adversaries may employ and strengthen defenses against potential data breaches.
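As a simple illustration of how that framework might be applied, the sketch below maps hypothetical detection rules to real ATT&CK technique IDs and reports techniques of concern that have no mapped detection. The rule names and the particular set of techniques are assumptions chosen for illustration only.

```python
# Rough sketch: map internal detection rules to MITRE ATT&CK technique IDs
# so coverage gaps become visible. Rule names are made up; technique IDs are real.
DETECTION_COVERAGE = {
    "phishing_email_filter": "T1566",     # Phishing
    "impossible_travel_alert": "T1078",   # Valid Accounts
    "mass_phi_export_alert": "T1567",     # Exfiltration Over Web Service
}

# Techniques the organization has decided it must be able to detect.
TECHNIQUES_OF_CONCERN = {"T1566", "T1078", "T1567", "T1486"}  # T1486: Data Encrypted for Impact

covered = set(DETECTION_COVERAGE.values())
gaps = TECHNIQUES_OF_CONCERN - covered
print(f"ATT&CK techniques without a mapped detection: {sorted(gaps)}")
```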