HHS Requests Industry Feedback on AI Solutions to Combat Healthcare Fraud

Artificial Intelligence & Machine Learning, Fraud Management & Cybercrime, Fraud Risk Management

Information Request Initiated Amid Expanded Medicare and Medicaid Fraud Enforcement

The Centers for Medicare & Medicaid Services will leverage advanced AI technologies to improve fraud detection and prevention as part of a comprehensive enforcement initiative. (Image: HHS CMS)

The U.S. Department of Health and Human Services (HHS) announced that it will use advanced artificial intelligence tools to speed the detection and prevention of Medicare and Medicaid fraud before fraudulent claims are paid. The initiative is part of a broader anti-fraud strategy, and HHS is actively soliciting input from healthcare-sector stakeholders to refine its approach and inform future regulatory action.

HHS’ Centers for Medicare and Medicaid Services (CMS) unveiled its AI strategy as a segment of a comprehensive initiative dedicated to tackling healthcare fraud. While healthcare legal and privacy experts have welcomed this development, they have expressed concerns regarding the lack of emphasis on protecting HIPAA-protected information for the millions of compliant U.S. beneficiaries.

In conjunction with these AI efforts, the agency has proposed a six-month pause on new Medicare enrollments for certain durable medical equipment providers and is temporarily withholding $259.5 million — with potential deferrals totaling up to $1 billion this year — in federal Medicaid payments to Minnesota due to allegations of fraudulent claims.

These measures signify a “coordinated, data-driven approach to thwarting fraud before it materializes, ensuring accountability for malicious actors and safeguarding taxpayer resources,” according to HHS. HHS Secretary Robert F. Kennedy Jr. emphasized the urgency of these actions, stating, “For decades, Medicare fraud has siphoned billions from American taxpayers — that stops now.” He asserted that the agency is transitioning from a reactive “pay and chase” model to a proactive “detect and deploy” framework utilizing advanced AI tools to quickly identify and prevent improper payments.

In his statement, CMS Administrator Dr. Mehmet Oz elaborated, “We are eliminating the old methods of catching fraudsters post-incident; now we’re locking the cookie jar so they come away empty-handed.” This anticipatory stance aims to mitigate fraud, safeguard taxpayer interests, and ensure that vulnerable populations dependent on federal programs receive the care they require.

Requesting Sector Insights

To complement its anti-fraud initiatives, CMS has issued a request for information (RFI) inviting stakeholder contributions to refine future rulemaking for the “Comprehensive Regulations to Uncover Suspicious Healthcare,” or CRUSH program. The agency seeks perspectives on a range of topics, including the utilization of AI for Medicare Advantage coding oversight and hospital billing mechanisms.

Specific inquiries within the RFI focus on the effectiveness, accessibility, and cost-efficiency of AI applications for accurately abstracting diagnoses from medical records to enhance review accuracy. CMS is particularly interested in identifying robust AI solutions that facilitate human coders in managing large datasets while mitigating compliance risks and avoiding the pitfalls of inaccurate data generation.
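The RFI does not prescribe how AI-assisted coding review would be structured. As a purely hypothetical sketch of the human-in-the-loop pattern the RFI alludes to (all claim IDs, diagnosis codes, and the confidence threshold below are invented for illustration, not drawn from any CMS system), low-confidence AI-abstracted diagnoses could be routed to human coders while high-confidence extractions pass through:

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    claim_id: str
    icd10_code: str   # diagnosis code the model abstracted from the medical record
    confidence: float # model's self-reported confidence, 0.0 to 1.0

def route_extractions(extractions, threshold=0.90):
    """Split AI-abstracted diagnoses into an auto-accepted queue and a
    human-review queue, based on an illustrative confidence threshold."""
    auto, review = [], []
    for e in extractions:
        (auto if e.confidence >= threshold else review).append(e)
    return auto, review

# Hypothetical batch: one confident extraction, one uncertain one.
batch = [
    Extraction("C-1001", "E11.9", 0.97),  # above threshold: auto-accepted
    Extraction("C-1002", "I50.9", 0.62),  # below threshold: routed to a human coder
]
auto, review = route_extractions(batch)
```

The design choice here mirrors the RFI's stated concern: the model assists human coders with volume, but a coder, not the model, remains the authority on any extraction the model is unsure about.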

AI Implementation Challenges

While CMS and HHS’ Office of Inspector General have long employed data analytics and predictive modeling for fraud detection, questions remain about how effectively AI can be integrated into these existing frameworks. Attorney Andrew Wirmani, a former U.S. Department of Justice prosecutor, views the move toward AI as a positive step in improving the efficiency of current fraud detection strategies, given the substantial cost fraud imposes on taxpayers each year.
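CMS has not disclosed what its predictive models look like. As a toy illustration of the kind of pre-payment outlier screen such modeling enables (the provider IDs, dollar amounts, and z-score cutoff are all invented for illustration), a simple statistical screen might flag providers whose billing totals are extreme relative to their peers:

```python
import statistics

def flag_outlier_providers(paid_per_provider, z_cutoff=3.0):
    """Return provider IDs whose total billed amount is an extreme
    high-side outlier versus peers, using a simple z-score screen."""
    amounts = list(paid_per_provider.values())
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []
    return [pid for pid, amt in paid_per_provider.items()
            if (amt - mean) / stdev > z_cutoff]

# Hypothetical quarter of billing data: eight peers clustered near $12,000
# and one provider billing roughly 20x the peer level.
claims = {f"P00{i}": 12_000 for i in range(1, 9)}
claims["P009"] = 250_000
# A looser cutoff suits this tiny sample; real screens would tune this.
flagged = flag_outlier_providers(claims, z_cutoff=2.5)
```

Real fraud models weigh many more signals (diagnosis mix, referral patterns, geography), and, as the experts quoted below note, any flag like this should trigger human review rather than automatic denial.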

However, it is essential that the deployment of AI in this sector includes prudent human oversight to minimize the likelihood of false positives that could disproportionately affect legitimate healthcare providers. Regulatory attorney Rachel Rose advises that AI must be employed ethically and legally, focusing on accurate data utilization for specific applications to expedite the detection of fraud effectively.

Amid these developments, some private health insurers, such as UnitedHealth Group, have faced scrutiny over alleged misuse of AI tools to unjustly deny necessary medical coverage. Rose cautioned that AI outputs can introduce bias and patient-care problems that could form the basis for liability under the False Claims Act. HHS has yet to explain how it plans to deploy AI in a HIPAA-compliant manner that protects beneficiaries’ privacy.

Minnesota’s Governor Tim Walz’s office has yet to comment on HHS’ actions regarding the deferral of Medicaid payments or the underlying fraud allegations.
