AI And Health Insurance Prior Authorization: Regulators Need To Step Up Oversight

Artificial intelligence (AI)—a machine or computer’s ability to perform cognitive functions—is quickly changing many facets of American life, including how we interact with health insurance. AI is increasingly being used by health insurers to automate a host of functions, including processing prior authorization (PA) requests, managing other plan utilization management techniques, and adjudicating claims.

In contrast to the Food and Drug Administration’s (FDA’s) increasing attention to algorithms used to guide clinical decision making, there is relatively little state or federal oversight of either the development or the use of algorithms by health insurers. The Centers for Medicare and Medicaid Services (CMS) recently issued its Final Rule on Interoperability and PA. While this rule aims to create more transparency around PA criteria and denials, it does not regulate how such decisions are made, such as with or without AI. Industry has taken some important steps toward self-regulation, but there is a growing need for additional mechanisms for accountability and oversight of these algorithms.

In this Forefront article, we focus in particular on the use of AI in PA, which we define as the process an insurance plan uses to make a pre-treatment coverage decision, applying clinical criteria to determine whether the service is medically necessary. We also offer suggestions policy makers should consider for addressing these challenges.

Health Insurer Use Of AI For Prior Authorization: A New Frontier With Pros And Cons
First the good news. PA is a time-consuming and complex process for both payers and providers, and AI has shown it can cut down the manual work involved. A 2022 McKinsey analysis suggested that AI-enabled PA could automate 50 percent to 75 percent of the manual tasks associated with PA. AI can be used in different ways to streamline, speed, and reduce the overhead of coverage decision making, especially on the payer side. First, natural language processing can be used to automate the extraction of key information from materials submitted by providers. Algorithms can also be used to determine whether a requested treatment and the submitted justification documents comply with the medical criteria used by the plan. Lastly, algorithms can be used to triage PA decision making to an appropriate reviewer (for instance, a clinical provider employed by the insurer with relevant expertise in the treatment under review). Proponents of AI use in coverage decisions note that well-implemented AI can reduce administrative overhead, make the process faster, make decisions more consistent across patients, and reduce costs by limiting human labor. For instance, Blue Cross Blue Shield of Massachusetts has employed AI tools to process PA requests more efficiently, pulling relevant health record data to reduce the burden on providers.
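
To make those mechanics concrete, here is a minimal sketch of what such a pipeline might look like. The plan criteria, service code, and routing rules are all hypothetical illustrations, and the keyword matching is a stand-in for a real natural language processing step; no actual insurer system is depicted.

```python
# Minimal sketch of an AI-assisted prior authorization intake pipeline.
# All names, codes, and criteria are hypothetical; the "NLP" extraction
# step is stubbed out with simple keyword matching.
from dataclasses import dataclass

@dataclass
class PARequest:
    service_code: str    # procedure code for the requested service
    clinical_notes: str  # free-text justification submitted by the provider

# Hypothetical plan criteria: service code -> phrases that must be documented
PLAN_CRITERIA = {
    "72148": ["low back pain", "conservative therapy"],  # e.g., lumbar spine MRI
}

def extract_findings(notes: str) -> set[str]:
    """Stub for the NLP step: pull key phrases out of free text."""
    text = notes.lower()
    return {term
            for terms in PLAN_CRITERIA.values()
            for term in terms
            if term in text}

def triage(request: PARequest) -> str:
    """Check the request against plan criteria and route it.

    Clear matches are approved automatically; everything else goes to a
    human clinician reviewer -- never to an automatic denial.
    """
    required = PLAN_CRITERIA.get(request.service_code)
    if required is None:
        return "route to human reviewer (no criteria on file)"
    findings = extract_findings(request.clinical_notes)
    if all(term in findings for term in required):
        return "auto-approve"
    return "route to clinician reviewer"

req = PARequest("72148", "Chronic low back pain; conservative therapy failed.")
print(triage(req))  # -> auto-approve
```

The key design choice in this sketch, consistent with the safeguards discussed below, is that the automated path can only approve; anything ambiguous is escalated to a human rather than denied by the machine.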

Now the not-so-good news. Like all applications of AI in health care, improperly implemented AI for coverage decision making, including PA, can seriously harm patients. AI tools are only as accurate as the data and algorithm inputs going into them. As the old data saying goes, “garbage in, garbage out.” In the context of PA, we already have real-world examples of what happens when AI tools are used to make plan PA decisions without accurate clinical criteria and appropriate human review. Medicare Advantage plans sold by insurance powerhouse UnitedHealth Group have been in the spotlight for the company’s use of an AI tool that led to what patients and their doctors claim were inappropriate denials of postacute care. A federal class action lawsuit filed in Minnesota against UnitedHealthcare asserts that the AI tool had a 90 percent error rate, leading to thousands of elderly and disabled Medicare beneficiaries being denied medically necessary care.

AI may incentivize insurers to expand review of claims, both at the PA stage and at the claims coverage review stage. Without an automated system, a claim generally must be entered into the insurer’s system, screened by a nurse, and then manually reviewed by a medical director. This typically costs an insurer a few hundred dollars, meaning that review only makes economic sense for higher-cost treatments or procedures; for lower-dollar claims, PA or claims coverage review would cost more than the claim itself. But because AI software can process claims quickly and cheaply on a per-claim basis, insurers can now afford to extend PA and coverage review to lower-cost procedures and treatments. This suggests that we may see “review creep” as insurers increasingly apply their new tools to an ever-growing catalogue of services and treatments.
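
The economics here can be made concrete with some back-of-the-envelope arithmetic. The dollar figures and denial rate below are illustrative assumptions, not actual insurer costs; the point is how sharply the break-even claim size falls when the per-review cost drops.

```python
# Back-of-the-envelope sketch of the "review creep" economics.
# All dollar figures and rates are illustrative assumptions.
MANUAL_REVIEW_COST = 300.0  # assumed cost of nurse screen + medical director review
AI_REVIEW_COST = 3.0        # assumed marginal cost of an automated review
DENIAL_RATE = 0.05          # assumed share of reviewed claims ultimately denied

def review_pays_off(claim_amount: float, review_cost: float) -> bool:
    """Review is financially rational when the expected avoided spending
    (claim amount x denial rate) exceeds the cost of performing the review."""
    return claim_amount * DENIAL_RATE > review_cost

for claim in (100, 1_000, 10_000):
    print(f"${claim:>6,}: manual pays off: {review_pays_off(claim, MANUAL_REVIEW_COST)}, "
          f"AI pays off: {review_pays_off(claim, AI_REVIEW_COST)}")

# Under these assumptions, manual review breaks even only on claims above
# $6,000, while AI review breaks even above $60 -- so far more claims
# become economically worth reviewing.
```

Under these assumed numbers, the set of claims worth reviewing grows by two orders of magnitude, which is the review-creep dynamic in a nutshell.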

Finally, AI can also replicate and exacerbate bias against marginalized communities. Ziad Obermeyer and his colleagues famously identified that a commercially available algorithm was biased against Black patients, assigning them the same risk scores as considerably less sick White patients and thereby reducing the number of Black patients flagged for extra care. The same concerns regarding bias in AI used by providers also apply to AI used by payers to make coverage determinations.
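
A simple audit makes the mechanism visible: compare how often equally sick patients in different groups are flagged for extra care. The toy example below uses entirely synthetic data to illustrate the pattern Obermeyer and colleagues documented; it is not their dataset or model.

```python
# Toy bias audit in the spirit of Obermeyer et al.: at the same level of
# illness, are both groups flagged for extra care at the same rate?
# All data below is synthetic and purely illustrative.
patients = [
    # (group, chronic_conditions, model_risk_score)
    ("A", 4, 0.80), ("A", 4, 0.75), ("A", 2, 0.40),
    ("B", 4, 0.55), ("B", 4, 0.50), ("B", 2, 0.25),
]
FLAG_THRESHOLD = 0.60  # scores at or above this are flagged for extra care

def flag_rate(group: str, conditions: int) -> float:
    """Share of patients in a group, at a given illness level, who get flagged."""
    cohort = [p for p in patients if p[0] == group and p[1] == conditions]
    flagged = [p for p in cohort if p[2] >= FLAG_THRESHOLD]
    return len(flagged) / len(cohort)

# Equally sick patients (4 chronic conditions), very different outcomes:
print(flag_rate("A", 4))  # 1.0: all flagged for extra care
print(flag_rate("B", 4))  # 0.0: none flagged, despite equal illness
```

A disparity like this at equal illness levels is exactly the kind of signal a payer-side audit, or a regulator’s review, should be designed to catch.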

What Are Regulators Doing To Create Safeguards For Use Of AI In Health Insurance?
Regulatory oversight of AI in health insurance is complicated by the fragmentation of insurance oversight in the United States. The regulatory rules of the road and the state or federal entity that regulates the plan depend on whether the plan is operated through Medicare or Medicaid, part of an employer benefits package, or sold in the individual market. The CMS interoperability regulation on PA attempts to add some uniformity across these markets in terms of PA process and transparency. But there are still many regulatory gaps and opportunities for variability across insurance products and states, especially when it comes to regulating more substantive elements of PA, such as whether PA is being used to deny access to medically necessary care and treatment.

State insurance regulators have primary oversight for private insurance plans in the individual and fully insured group markets. And states have already identified prior authorization as a growing concern in health insurance regulation. Slowly but surely, states are developing principles and oversight mechanisms for AI across all lines of insurance (that is, automotive, homeowners, life, and health). These efforts have been supported by the National Association of Insurance Commissioners’ recent Model Bulletin: Use of Artificial Intelligence Systems by Insurers, which sets out a framework for governance and oversight of AI in insurance.

Colorado has been a pace car state in this area, becoming the first state to promulgate regulations governing the use of algorithms and predictive modeling in life insurance. The state has begun the process of adapting its regulatory approach to health insurance, and its experience may be instructive for both state and federal regulators. As Colorado conducts public listening sessions with patients, subject matter experts, regulated entities, providers, and advocates, the state department of insurance has gotten an earful about how health insurance regulation may need a more nuanced approach than other lines of insurance, primarily because of the complex use of AI in utilization management decisions, including PA. The state is also grappling with the best way to handle the fact that, more often than not, the developers of AI tools are not themselves regulated entities. In other words, the data vendors and third-party software companies are not health plans themselves.

As Colorado goes on these sticky issues, so (maybe) will go other states and even federal agencies, which are still in the early days of AI lawmaking. For example, the California Senate recently passed a bill requiring that a licensed physician supervise the use of AI tools used in PA. Insurers would also be required to have publicly disclosed policies ensuring that the use of AI will be fair and equitable. Willful violations of these requirements would be a crime. States should follow Colorado’s thoughtful example to ensure that any legislation in this space has real impact.

As states and the federal government struggle to catch up with the regulation needed to cabin AI, patients are increasingly filing class action lawsuits against insurers using automated decision-making software, including the lawsuit against UnitedHealthcare discussed above. While lawsuits are shedding light on insurance companies’ practices regarding the use of AI and algorithms, they are not an ideal mechanism for regulating AI. Lawsuits require patients to be harmed first, and they put the burden on patients and providers to understand when their rights have been violated and to file legal action. Furthermore, lawsuits can create a fractured regulatory landscape where, for example, patients in California may be protected against problematic coverage determinations by AI while similar patients in Vermont are not. Lastly, current liability frameworks are inadequate to properly regulate the implementation of medical AI, in part because products liability jurisprudence struggles to address software concerns.

Where Do We Go From Here?
We may be able to draw some lessons from the regulation of AI algorithms used by health care providers, where regulatory efforts have moved at a faster clip. Some AI algorithms used by health care providers are classified as medical devices under the Federal Food, Drug, and Cosmetic Act and regulated by the FDA. The FDA recently updated its website on AI/machine learning-enabled medical devices, which lists a total of 882 such devices that have already received marketing authorization from the agency. The current regulatory framework for medical devices is not a perfect fit for AI; it excludes, for example, many clinical decision support software tools from the medical device definition and thus from the FDA’s oversight. But there exists at least a premarket review process that can be updated, and it has received attention for regulatory reform.

In contrast, when it comes to algorithmic systems, including those involving AI, insurance companies currently appear to have a “free ride” because there is no corresponding regulatory oversight from state insurance regulators. The result is an incongruity: similar algorithms guide the provision of care, yet they are regulated differently depending on who deploys them. This is concerning because the algorithms deployed by insurers make decisions to deny care upfront and altogether, or to deny coverage of expensive treatments and procedures. They make decisions that are at the heart of access to health care.

There needs to be more proactive federal and state oversight of AI used by payers, particularly the use of AI to make utilization management and other coverage determinations. While the FDA is not the appropriate body to regulate these algorithms, federal and state agencies that oversee insurance should take lessons from how the FDA currently evaluates clinical algorithms and work to implement similar standards and requirements. This collaboration could also be a good opportunity for the FDA to reflect upon its current practice, optimize it, and create a guidance document with minimum standards that manufacturers must demonstrate when seeking premarket review. Another approach could be a certification entity, which could set standards for good development of these products and guidance for implementation, perhaps in collaboration with the provider side of the industry. An important first step for any of these efforts is recognition from the federal government that governing AI will take an interagency approach. Federal agencies took such a step in April 2024, when five of them pledged to work together on ensuring that Americans’ civil rights are adequately protected as AI use becomes more prevalent.

Additionally, the anti-discrimination provision of the Affordable Care Act (ACA), often referred to as Section 1557, could be used to ensure that algorithms used to make PA decisions do not discriminate based on sex, race, color, national origin, age, or disability. The Biden administration recently released an updated regulation implementing Section 1557, clarifying that it applies to health insurers, plan design, and the use of algorithms. While the regulation focuses on algorithms used in patient care decision support tools and does not explicitly discuss the use of algorithms for claims adjudication or utilization management decisions, the 1557 provision in the ACA still arguably applies to these activities. We may see Section 1557 complaints and lawsuits challenging situations where plans relied on flawed AI tools for coverage decisions that demonstrate bias against certain patients based on the protected categories listed above.

As AI use in both clinical and payer settings accelerates, there must be safeguards that ensure that AI tools are developed and used ethically and in ways that do not discriminate against vulnerable communities. It is vital that both state and federal insurance regulators proactively focus on the development of these algorithms, to prevent issues of overuse and discrimination. While partnership with industry is crucial for these efforts, self-regulation will not be enough to protect patients, and regulators must come up with clearer rules of the road that encourage innovation but put in place guardrails against abuses.

source: https://www.healthaffairs.org/content/forefront/ai-and-health-insurance-prior-authorization-regulators-need-step-up-oversight