The Legal Landscape for AI-Enabled Decisions in Health Care Claims and Coverage Continues to Evolve: From Litigation to Emerging Legislation
With all the hype around AI, business partners and legal professionals are scrambling to understand, adapt to, and possibly implement the rapidly changing technology. From a business perspective, AI's promises to improve efficiency, customer experiences, and ultimately the bottom line make using it in some capacity very tempting. For example, AI tools can assist with underwriting, claims processing, fraud prevention, and even customer service. None of this is completely new to the industry, but some of the litigation and legislation attempting to control or limit how and when AI may be used is newer (or at least evolving). Such litigation and proposed legislation have followed quickly behind the development of AI-enabled claims processing in the health care space.
1. Litigation Relating to Automated and/or AI-Enabled Claims Processing of Health Care Claims
In 2023, we saw cases filed against various managed care organizations challenging how claims were processed and ultimately denied. While we have not necessarily seen a “wave” of similar litigation yet, the three headline cases continue to work their way through the courts.
First came Kisting-Leung, et al. v. Cigna Corp., et al., Case No. 2:23-cv-01477-DAD-CSK (E.D. Cal. July 24, 2023). The plaintiffs filed a putative class action challenging how AI-enabled tools were used in processing their claims, which they allege were wrongfully denied. Specifically, the plaintiffs allege that:
- Cigna developed an algorithm tool called “PXDX” that allegedly allows doctors to automatically “reject claims on medical grounds without ever opening patient files.”
- Cigna doctors denied “over 300,000 requests for payments using this method.”
- Cigna doctors, using PXDX, spent “on average just 1.2 seconds ‘reviewing’ each request.”
- Plaintiffs’ claims were “automatically rejected . . . using the PXDX system without any individualized consideration.”
- Doctors signed off on denials in batches.
- Cigna did not disclose to the Plaintiffs that their claims would be reviewed and “denied by the PXDX algorithm without any real doctor involvement.”
- The PXDX algorithm was not disclosed in the policies.
- Cigna’s policies “falsely claim that determinations related to medical necessity of health care services would be made by a medical director,” not an algorithm.
Plaintiffs asserted four claims in the original complaint:
- Breach of the implied covenant of good faith and fair dealing;
- Violation of California Unfair Competition Law, Business & Professions Code Section 17200, et seq.;
- Intentional interference with contractual relations; and
- Unjust enrichment.
Nearly a year later, the plaintiffs' legal theories had evolved, culminating in a Third Amended Class Action Complaint, filed on June 14, 2024, that asserts the following claims:
- Claim for benefits under 29 U.S.C. § 1132(a)(1)(B);
- Claim for appropriate equitable relief under 29 U.S.C. § 1132(a)(3); and
- Violation of California Unfair Competition Law, Business & Professions Code Section 17200, et seq.
Plaintiffs did not need to rely on any AI-specific laws or regulations to bring their action. They simply applied the existing laws and legal framework to Cigna’s alleged actions and use of AI technology.
A second headline case was filed just a few months after Kisting-Leung, this time in the District of Minnesota. In Estate of Lokken v. UnitedHealth Group, Inc., et al., Case No. 23-cv-03514-JRT-DTS (D. Minn. November 14, 2023), the plaintiffs allege that UnitedHealth used AI technology to essentially deny patient services. The plaintiffs assert claims for breach of contract, breach of the implied covenant of good faith and fair dealing, and unjust enrichment. Specifically, the plaintiffs allege that:
- Defendants used an AI Model called “nH Predict” to “supplant real doctors’ recommendations and patients’ medical needs.”
- The nH Predict AI Model “directs Defendants’ medical review employees to prematurely stop covering care without considering an individual patient’s needs.”
- “The nH Predict AI Model attempts to predict the amount of post-acute care a patient ‘should’ require, pinpointing the precise moment when Defendants will cut off payment for a patient’s treatment.”
- “Defendants wrongfully delegate their obligation to evaluate and investigate claims to the nH Predict AI Model.”
- “The nH Predict AI Model spits out generic recommendations that fail to adjust for a patient’s individual circumstances and conflict with basic rules on what Medicare Advantage plans must cover.”
- “[T]he nH Predict AI Model applies rigid criteria from which Defendants’ employees are instructed not to deviate.”
Finally, just one month after the filing of the UnitedHealth matter, the third headline case was filed in the Western District of Kentucky. In Barrows, et al. v. Humana, Inc., Case No. 3:23-cv-654-CHB (W.D. Ky. December 12, 2023), the plaintiffs assert similar claims and legal theories relating to the use of the nH Predict AI Model. Specifically, the Barrows plaintiffs assert claims for breach of contract; breach of the implied covenant of good faith and fair dealing; unjust enrichment; violation of North Carolina's unfair claims settlement practices statute; and insurance bad faith. For the majority of their claims, the plaintiffs allege nationwide violations.
All three cases remain on the courts' respective dockets. The defendants in each have filed motions to dismiss, but no court has yet ruled on those dispositive motions.
2. Legislation Regulating How and When AI-Automated Decision Tools Can Be Used
Notably, in late 2023, after the three lawsuits described above were filed, the Biden Administration announced a voluntary AI operating agreement with approximately thirty health insurers, other payers, and providers. The voluntary agreement, apparently expanding upon the Biden Administration's executive order on AI (Executive Order 14110), attempts to address AI standards and establish some guidance and guardrails in the health care industry. At a high level, Executive Order 14110 was intended to create a national approach to governing AI, in both development and deployment.
The Trump Administration revoked Executive Order 14110 on January 20, 2025, and issued an Executive Order on January 23, 2025, directing the creation of an AI "action plan" within 180 days. Under the Trump Administration, the AI landscape continues to develop, and regulation appears to be loosening, at least at the federal level. These changes have created uncertainty for organizations using AI technology.
Meanwhile, at the state level, lawmakers have been focusing on the use of AI in the health care space. California legislators, for example, have been proposing and enacting laws to regulate how and when AI-enabled automated decision tools can be used in health care claims processing and coverage determinations. California enacted SB1120 in September 2024, and the law took effect on January 1, 2025. According to the Legislative Counsel's Digest, SB1120 requires:
a health care service plan or disability insurer, including a specialized health care service plan or specialized health insurer, that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, or that contracts with or otherwise works through an entity that uses that type of tool, to ensure compliance with specified requirements, including that the artificial intelligence, algorithm, or other software tool bases its determination on specified information and is fairly and equitably applied, as specified.
SB1120 requires a qualified human individual to be part of the review process and of any coverage determination based on medical necessity. The regulation will not eliminate the type of litigation we are seeing, but it has the potential to change the disputes being litigated, raising questions such as: who is the "qualified human individual" reviewing the determinations?
Nevertheless, California's regulation starts to address the "individualized review" issue called out and challenged in the lawsuits described above. It also underscores that, no matter how advanced AI tools become, humans will likely remain an integral and mandatory part of the process, at least for the foreseeable future.
Finally, California may have been the first state to enact these regulations, but it will not be the last. Several other states, including New York, Pennsylvania, and Georgia, have been considering regulations of their own. What ultimately gets proposed as legislation in 2025 in those states remains to be seen, but health care organizations will now have to contend with potentially competing standards, statutes, and regulations across multiple jurisdictions. This construct is not new to compliance teams, who have dealt with similar issues in the cybersecurity space, but it will still present challenges: companies will need to identify and comply with each jurisdiction's AI regulations as they conduct business across state lines.