
The Intersection of Artificial Intelligence and Utilization Review

California is among a handful of states that seek to regulate the use of artificial intelligence (“AI”) in connection with utilization review in the managed care space. SB 1120, sponsored by the California Medical Association, would require algorithms, AI, and other software tools used for utilization review to comply with specified requirements. We continue to monitor AI-related law, policy, and guidance. The Sheppard Mullin Healthcare Team has written on AI-related topics this year, and those articles are listed here: i) AI Related Developments, ii) FTC’s 2024 PrivacyCon Part 1, and iii) FTC’s 2024 PrivacyCon Part 2. Our Artificial Intelligence Team’s blog can also be found here. Experts report that anywhere from 50% to 75% of tasks associated with utilization review can be automated. AI may excel at handling routine authorizations and modernizing workflows, but there is a risk of over-automation. For example, tools that assess medical necessity based on population-level trends can miss unusual clinical presentations. SB 1120 seeks to address these concerns.

SB 1120 would require that AI tools be fairly and equitably applied and not discriminate, including, but not limited to, on the basis of present or predicted disability, expected length of life, quality of life, or other health conditions. Additionally, AI tools must be based upon an enrollee’s medical history and individual clinical circumstances as presented by the requesting provider, and must not supplant healthcare provider decision-making. Health plans and insurers in California would need to file written policies and procedures with state oversight agencies, including the California Department of Managed Health Care and the California Department of Insurance, and would be governed by policies that establish accountability for outcomes and that are reviewed and revised for accuracy and reliability.

Since SB 1120 was introduced in February, one key requirement has been removed from the original bill. That provision would have required payors to ensure that a physician “supervise the use of [AI] decision-making tools” whenever such tools are used to “inform decisions to approve, modify, or deny requests by providers for authorization prior to, or concurrent with, the provision of health care services…” The provision was removed due to concerns that its language was ambiguous.

SB 1120 largely aligns with requirements applicable to Medicare Advantage plans. On April 4, 2024, the Centers for Medicare and Medicaid Services (“CMS”) issued the 2025 final rule, written about here, which included requirements governing the use of prior authorization and the annual review of utilization management tools. CMS released a memo on February 6, 2024, clarifying the application of these rules. CMS made clear that a plan may use an algorithm or software tool to assist in making coverage determinations, but the plan must ensure that the algorithm or tool complies with all applicable rules for how coverage determinations are made. CMS referenced compliance with all of the rules at 42 C.F.R. § 422.101(c) for making a determination of medical necessity. CMS stated that an algorithm basing a decision on a broader data set, rather than on the individual’s medical history, the physician’s recommendations, or medical record notes, would not comply with these rules. CMS also made clear that algorithms or AI on their own cannot be used as the basis to deny an admission or downgrade it to an observation stay. Again, the patient’s individual circumstances must be considered against the allowable coverage criteria.

Both California and CMS are concerned that AI tools can worsen discrimination and bias. In its FAQ, CMS reminded plans of the nondiscrimination requirements of Section 1557 of the Affordable Care Act, which prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in certain health programs and activities. Plans must ensure that their AI tools do not perpetuate or exacerbate existing biases or introduce new ones.

Looking to other states, Georgia’s House Bill 887 would prohibit payors from making coverage determinations based solely on results from the use or application of AI tools. Any “coverage determination which resulted from the use or application of” AI must be “meaningfully reviewed” by an individual with “authority to override said artificial intelligence or automated decision tools.” As of this writing, the bill is before the House Technology and Infrastructure Innovation Committee.

New York, Oklahoma, and Pennsylvania have bills that center on regulator review and on requiring payors to disclose to providers and enrollees whether or not they use AI in connection with utilization review. For example, New York’s Assembly Bill A9149 requires payors to submit “artificial intelligence-based algorithms” (defined as “any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight or that can learn from experience and improve performance when exposed to data sets”) to the Department of Financial Services (“DFS”). DFS is required to implement a process that will allow it to certify that the algorithms and training data sets have minimized the risk of bias and adhere to evidence-based clinical guidelines. Additionally, payors must notify insureds and enrollees on their websites about the use, or lack of use, of artificial intelligence-based algorithms in the utilization review process. Oklahoma’s bill (House Bill 3577), similar to the New York legislation, requires insurers to disclose the use of AI on their websites, to healthcare providers, to all covered persons, and to the general public. The bill also mandates review of denials by healthcare providers whose practice is not limited to primary healthcare services.

In addition, many states have adopted the guidance of the National Association of Insurance Commissioners (“NAIC”) issued on December 4, 2023 – “Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers.” The model guidelines provide that insurers’ use of AI should be designed to mitigate the risk of adverse outcomes for consumers. Insurers should have robust governance, risk management controls, and internal audit functions, all of which play a role in mitigating such risks, including, but not limited to, unfair discrimination in outcomes resulting from predictive models and AI systems.

Plaintiffs have already started suing payors, claiming that faulty AI algorithms improperly denied services. In the days ahead, it will be important for payors to carefully monitor any AI tools they use in connection with utilization management. We can help payors reduce risk in this area.

