
An Intelligent Approach to Healthcare Fraud Prevention

With financial, regulatory, and reputational risks on the line, healthcare organizations can use AI and machine-learning technologies to prevent fraud and minimize impact.

Financial crime, fraud, waste, and abuse are surging in today’s economy, and no industry is immune from this contagion. Healthcare is no exception.

The threat of healthcare fraud has only become more prevalent due to continued growth in the population of healthcare consumers, the increase in care delivered outside traditional settings (such as through telehealth), and the rapid expansion of resources offering health and wellness services. Moreover, as the Baby Boomer generation ages, the number of healthcare consumers, and with it the opportunities for fraud, is likely to increase even further in the next few years. Overall, it has become easier for parties both outside and within the healthcare network to commit fraud, making it more challenging to differentiate between good and bad actors.

The pandemic unleashed a torrent of fraudulent claims, driven (in part) by the huge sums of money allocated by the federal government for testing, treatment, and economic subsidies; changes in employment patterns, resulting in people holding multiple jobs; and the remote work trend, which led to less stringent security measures by home-based workers. The result is a more sophisticated crop of fraudsters and fraud schemes, leaving companies exposed to heretofore unknown and unforeseen risks.

Regardless of the nature of the fraud, or the element of the healthcare ecosystem in which it occurs, the impact is significant. The National Health Care Anti-Fraud Association has conservatively estimated that healthcare fraud costs the U.S. about $68 billion annually — about 3% of all healthcare spending in the country. Other estimates range as high as 10% of annual healthcare expenditure, or $230 billion.1

 

The Role of AI and ML in Fraud Prevention

With financial, regulatory, and reputational risks on the line, payers, providers, federal and state government agencies, and drug manufacturers must be vigilant about fraud risk management practices to prevent fraud and minimize impact.

AI and machine-learning (ML) technologies can analyze vast amounts of data, making them extremely effective at identifying and preventing fraud. Given that healthcare’s most widely used technology providers, such as Epic and Cerner, serve thousands of hospitals and payers and maintain healthcare data on hundreds of millions of patients, detecting potential fraud embedded in that data is essential.2

AI/ML can be applied to healthcare fraud detection and prevention in several ways. For example, AI/ML algorithms can analyze large volumes of healthcare data to identify patterns of both unintentional and intentional fraudulent activities, such as billing for services that were not provided or submitting duplicate claims. These algorithms can flag suspicious transactions for further investigation and help organizations detect fraud more quickly.
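As a simple illustration of this kind of screening, the sketch below flags potential duplicate submissions in a small claims table using pandas. The column names and values are hypothetical and stand in for whatever schema a payer actually uses.

```python
# A minimal screening sketch using pandas (hypothetical columns and values):
# claims with the same provider, patient, procedure, and service date on more
# than one claim are flagged as potential duplicate billing.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": [101, 102, 103, 104],
    "provider_id": ["P1", "P1", "P2", "P1"],
    "patient_id": ["M1", "M1", "M2", "M1"],
    "procedure_code": ["99213", "99213", "93000", "99213"],
    "service_date": ["2024-03-01", "2024-03-01", "2024-03-02", "2024-03-01"],
    "billed_amount": [120.0, 120.0, 85.0, 120.0],
})

dup_mask = claims.duplicated(
    subset=["provider_id", "patient_id", "procedure_code", "service_date"],
    keep=False,  # mark every claim in a duplicated group, not just the repeats
)
flagged = claims[dup_mask]
print(flagged)  # these claims would be routed to investigators for review
```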

AI/ML also can be used to develop predictive models that identify potential fraudsters or at-risk claims. These models can analyze patterns in data to predict which claims are most likely to be fraudulent, allowing organizations to take proactive measures to detect fraud.
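A minimal sketch of such a predictive model is shown below, assuming a historical set of claims labeled as fraudulent or legitimate. The features and simulated data are illustrative only; a production model would draw on far richer claim, provider, and member attributes.

```python
# A minimal supervised "at-risk claim" scorer trained on simulated, labeled
# claims (fraud = 1, legitimate = 0). Features and labels are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 150.0, n),   # billed amount
    rng.integers(0, 30, n),     # claims from this provider in the past 30 days
    rng.integers(0, 2, n),      # out-of-network flag
])
# Toy labels: fraud is more likely for high amounts and high claim velocity.
p = 1 / (1 + np.exp(-(0.004 * X[:, 0] + 0.08 * X[:, 1] - 4.0)))
y = rng.binomial(1, p)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]  # fraud-risk score per claim
print("AUC:", round(roc_auc_score(y_test, scores), 3))
# Claims above a chosen risk threshold would be queued for investigator review.
```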

Additionally, AI/ML algorithms can identify unusual patterns, such as unexpected spikes in billing or atypical provider behavior, and flag these anomalies for further investigation. Similarly, AI/ML can analyze claims data to identify discrepancies, errors, and anomalies, flagging suspect claims for review and helping to stop fraud before it occurs.
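The sketch below illustrates one common unsupervised approach, fitting scikit-learn’s IsolationForest to per-provider billing profiles so that providers whose volume or coding mix deviates sharply from their peers are flagged for review. The profile features are assumptions made for illustration.

```python
# A minimal anomaly-detection sketch: IsolationForest over simulated
# per-provider billing profiles. Feature choices are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# One row per provider: monthly claim count, average billed amount,
# and share of claims using the highest-paying procedure codes.
normal = np.column_stack([
    rng.normal(200, 30, 500),
    rng.normal(150, 20, 500),
    rng.normal(0.10, 0.03, 500),
])
# A handful of providers with sudden spikes in volume and high-cost coding.
spiking = np.column_stack([
    rng.normal(900, 50, 5),
    rng.normal(400, 30, 5),
    rng.normal(0.60, 0.05, 5),
])
profiles = np.vstack([normal, spiking])

detector = IsolationForest(contamination=0.01, random_state=0).fit(profiles)
labels = detector.predict(profiles)  # -1 marks anomalous provider profiles
print("Providers flagged for review:", int((labels == -1).sum()))
```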

Payers particularly stand to benefit from employing AI/ML to prevent fraud. Detection and prevention of payer fraud requires a combination of data analysis, investigation, and collaboration between providers, payers, and law enforcement agencies. AI/ML can assist payer programs by analyzing large volumes of data to identify patterns and anomalies that may indicate fraudulent activities.

 

Fraud Exposure by Vertical Segment

Some of the more common vertical segments exposed to healthcare fraud are described below. In addition, both payers and providers may be subject to fraud in which perpetrators use another person’s health insurance. AI/ML has effective applications in each of these areas.

  • Payers, including Medicare, Medicaid, commercial payers, and health systems that run their own health plans, can be subject to fraudulent claims for reimbursement, often perpetrated by falsifying or misrepresenting information. This can include medical identity theft, in which another person’s medical card or information is used to obtain healthcare goods, services, or funds.

  • Providers may unintentionally or intentionally submit false information to payers to collect reimbursement for treatment and services that were never delivered. Examples include billing for services that were not medically necessary or were never provided, duplicate billing, unbundling, misrepresenting dates or locations of service, and soliciting or offering kickbacks.

  • Life sciences companies may be vulnerable to fraudulent billing by third-party suppliers. Forged prescriptions and the illegal sale of prescription medications are other examples of fraud across the broader life sciences sector.

 

Using Synthetic Data to Simulate Fraud Prevention Scenarios

The effectiveness of AI/ML is contingent on access to high volumes of quality and relevant data. In cases where access to real-world data is limited or restricted due to privacy concerns, synthetic data — artificially generated data that mimics the characteristics and patterns of real-world data — can be used to simulate, train, and test models in a controlled environment.

For example, if available real-world data is limited or biased, synthetic data can augment the dataset and increase its size and diversity. Data privacy is a critical concern in healthcare, of course, and regulations such as HIPAA can restrict sharing of real-world patient data. Synthetic data can be used to generate datasets that mimic the characteristics of real-world data, allowing researchers and data scientists to build and test models without accessing sensitive information.
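As a minimal sketch of what such generation can look like, the example below builds a small synthetic claims table by sampling from assumed distributions; no real patient data is involved, and the fields and parameters are purely illustrative.

```python
# A minimal generation sketch: synthetic claims sampled from assumed,
# illustrative distributions. No real patient data is used or reproduced.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000

synthetic_claims = pd.DataFrame({
    "provider_id": rng.integers(1, 300, n),
    "procedure_code": rng.choice(
        ["99213", "99214", "93000", "80053"], size=n, p=[0.45, 0.25, 0.20, 0.10]
    ),
    "billed_amount": np.round(rng.gamma(2.0, 120.0, n), 2),
    "service_date": pd.to_datetime("2024-01-01")
    + pd.to_timedelta(rng.integers(0, 365, n), unit="D"),
})
print(synthetic_claims.head())
```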

Further, synthetic data can be used to simulate different scenarios and test the performance of models under varying conditions. This includes simulating fraud schemes, such as up-coding or billing for services not rendered, to test how well the models can detect and classify such fraud.
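Continuing in that spirit, the sketch below builds a small synthetic table and injects two labeled fraud schemes, up-coding and billing for services not rendered, so that a detection model can later be scored against known ground truth. All codes, rates, and amounts are assumptions made for illustration.

```python
# A minimal fraud-simulation sketch: build a small synthetic claims table and
# inject two labeled schemes so detectors can be evaluated against ground truth.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
claims = pd.DataFrame({
    "procedure_code": rng.choice(["99213", "99214", "93000"], size=1000),
    "billed_amount": np.round(rng.gamma(2.0, 120.0, 1000), 2),
})
claims["is_fraud"] = 0

# Up-coding: rebill a small share of routine visits under a higher-paying code.
upcoded = claims.sample(frac=0.02, random_state=1).index
claims.loc[upcoded, "procedure_code"] = "99215"
claims.loc[upcoded, "billed_amount"] *= 1.8
claims.loc[upcoded, "is_fraud"] = 1

# Services not rendered: append exact copies of existing claims as "phantom" bills.
phantom = claims.sample(frac=0.01, random_state=2).assign(is_fraud=1)
claims = pd.concat([claims, phantom], ignore_index=True)

print(claims["is_fraud"].value_counts())
```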

While synthetic data can be a valuable tool for building and testing AI/ML models for healthcare fraud detection, it is essential to ensure that the synthetic data is representative of real-world data and accurately captures its characteristics and patterns. This can be achieved through careful data-generation techniques and validation against real-world data to ensure the synthetic data is high quality and useful for model development.
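One simple way to perform that validation, sketched below, is to compare the distribution of a key field (here, billed amounts) in the synthetic data against its real-world counterpart using a two-sample Kolmogorov-Smirnov test. Both samples in the example are simulated placeholders.

```python
# A minimal validation sketch: compare billed-amount distributions in the
# synthetic and real data with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
real_amounts = rng.gamma(2.0, 120.0, 5000)       # stand-in for real claim amounts
synthetic_amounts = rng.gamma(2.1, 118.0, 5000)  # output of the synthetic generator

stat, p_value = ks_2samp(real_amounts, synthetic_amounts)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.3f}")
# A large KS statistic means the distributions clearly differ, suggesting the
# generator needs tuning before the synthetic data is used for model building.
```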

 

The Complexities of AI/ML in Effective Fraud Prevention

While AI and ML technologies are providing new and better tools to detect and prevent healthcare fraud, they can also be a double-edged sword. Bad actors can leverage the power of AI/ML to commit fraud at scale. For example, using natural language processing, bad actors can scan obituaries to assume the identities of people who have passed away, then use generative AI to submit forged medical expenses for reimbursement. It may take months or even years before Medicare or Medicaid systems are updated, resulting in thousands of dollars in fraud per day. The dual use of AI/ML to commit and prevent fraud is a complex challenge that requires a comprehensive strategy with foundational people, process, and technology components.

Additionally, a major advantage of AI/ML is its ability to comb through large volumes of data quickly and precisely to identify fraud, limiting the hours employees spend reviewing cases manually and offering tremendous potential for cost savings. However, these systems do not eliminate the need for human oversight. AI/ML frees people to focus on more sophisticated, analytical tasks, but these technologies must be continuously monitored to ensure that their enormous data-mining capacity leads to correct, actionable conclusions. Choosing the right AI/ML vendors and advisors, and implementing the system effectively, are equally important considerations.

The availability of AI and ML to address healthcare fraud could not come at a more critical time. A growing and aging population of healthcare consumers, the evolution of treatment beyond traditional settings, and continued increases in the financial resources allocated to healthcare are creating ever greater potential for fraud. In combating these new fraud threats, the weapons provided by AI/ML will be increasingly essential.

1. “NHCAA – A Private-Public Partnership Against Health Care Fraud.” NHCAA, https://www.nhcaa.org/
2. Epic. “About Us | Epic.” Epic.com, 2019, https://www.epic.com/about

Let Us Guide You

Guidehouse is a global consultancy providing advisory, digital, and managed services to the commercial and public sectors. Purpose-built to serve the national security, financial services, healthcare, energy, and infrastructure industries, the firm collaborates with leaders to outwit complexity and achieve transformational changes that meaningfully shape the future.
