Artificial Intelligence in the Health and Welfare Space: Opportunities and Considerations for Employers
Artificial intelligence (commonly referred to as “AI”) is rapidly reshaping industries, and the health and welfare space is no exception. As employers seek innovative ways to improve the administration of health and welfare plans, AI offers a potential tool to optimize operations and reduce costs. Its adoption, however, comes with challenges and ethical considerations, particularly under the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and the Employee Retirement Income Security Act of 1974 (“ERISA”), as well as the fiduciary responsibilities thereunder.
AI in Health and Welfare
At its core, AI is the simulation of human intelligence by computer systems. In the context of health and welfare plans, AI is most frequently used to: (i) automate administrative processes (e.g., claims adjudication, billing, and eligibility verification); (ii) improve participant engagement through 24/7 virtual assistance; (iii) improve decision-making through large data sets and pattern prediction; and (iv) detect fraud. By automating routine tasks, AI can significantly streamline plan administration, with some estimates suggesting administrative cost reductions of up to 30%. AI-powered virtual assistants can also enhance the employee experience by reducing wait times for appointment scheduling and claim status inquiries. Analysis of claims or social data can identify high-risk employees, and those insights can support more targeted programs and pricing. For example, AI can spot anomalies in claims data and flag potential compliance failures. Language processing tools can also simplify plan document generation.
However, while the advantages of AI are compelling, there are challenges and risks that employers must consider. Chief among them are data security and privacy concerns. Employers should be especially mindful of the privacy and security rules under HIPAA and ERISA, as well as emerging AI laws. To produce accurate analyses, AI requires access to large datasets. Health data is highly sensitive, and the protective measures built into AI systems may not be sufficient on their own, which makes HIPAA compliance critical. In addition, if the underlying data is not screened for bias, AI systems may reinforce disparities in healthcare and produce inequitable plan outcomes. Maintaining AI systems is also complex and expensive, which may put them out of reach for small or mid-sized employers.
Most importantly, the legal framework around AI in healthcare is not entirely clear. Legislation on the use of AI in healthcare is still developing; however, in Compliance Assistance Release No. 2024-01, the Department of Labor’s Employee Benefits Security Administration confirmed that its cybersecurity guidance applies to all employee benefit plans. The courts are just beginning to address the issue. For example, in Kisting-Leung v. Cigna Corp., the District Court for the Eastern District of California dismissed for lack of standing the participants’ ERISA claims alleging wrongful benefit denials by an AI-based algorithm called PxDx and failure to disclose its use, but allowed certain fiduciary claims to proceed. As to the latter, the participants argued that the insurer’s use of AI contradicted both the health plan terms, which required medical necessity review by a medical director, and California law, which requires claims to be reviewed by a licensed health professional (see the Physicians Make Decisions Act). Furthermore, some legislatures have begun enacting laws prohibiting algorithmic discrimination in AI (e.g., the Colorado Artificial Intelligence Act), while others require disclosure to customers when companies use chatbots for client interaction.
Employer Impact
While AI can help manage the complexity of health and welfare plans, employers should proactively establish guidelines to minimize risk. First and foremost, human oversight remains a crucial part of responsible AI integration, and regular reviews and internal audits are highly recommended. Fiduciary responsibilities under ERISA include the selection and oversight of third-party vendors; as such, employers should carefully vet vendors and choose only AI systems with a strong track record in healthcare. Specific language in vendor service agreements addressing AI use (and misuse) can provide additional protection. Given the complexity of AI, employees should be trained to mitigate the associated risks, including those arising from access to health data, and should follow established data governance policies and security protocols. As part of HIPAA compliance, employers must also ensure that proper Business Associate Agreements (“BAAs”) are in place with vendors supplying AI systems.
Conclusion
Although AI is unlikely to entirely replace human oversight, its role in health and welfare plans is expected to expand. Employers must strike a balance between embracing innovation and upholding compliance obligations. Employers who deploy AI systems thoughtfully stand to benefit significantly, so long as they safeguard the interests of their workforce.
About Maynard Nexsen
Maynard Nexsen is a full-service law firm of nearly 600 attorneys in 31 locations from coast to coast across the United States. Maynard Nexsen was formed in 2023 when two successful, client-centered firms combined to create a powerful national team. Maynard Nexsen’s list of clients spans a wide range of industry sectors and includes both public and private companies.