HHS Releases Guidance on “Artificial Intelligence” Amid the Biden Administration’s Push to Manage AI Risks
The discussion surrounding artificial intelligence ("AI"), including the ethics and confidentiality issues such technology raises, continues to garner attention, including within the health insurance industry and among healthcare providers. In December, the Office of the National Coordinator for Health Information Technology ("ONC"), part of the Department of Health and Human Services ("HHS"), finalized rules related to AI: the "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing" rule, or "HTI-1" (referred to herein as the "Final Rule"). Among other things, the Final Rule aims to increase algorithm transparency for predictive AI in the electronic health records used by hospitals and physician offices.
The Final Rule establishes new requirements for AI and other predictive algorithms and technology that are part of ONC-certified health information technology ("HIT"). ONC-certified HIT is currently used in the care provided by 96% of hospitals and 78% of office-based physicians across the country. The Final Rule's new standards aim to increase transparency and promote fairness in decision-making by requiring developers of ONC-certified HIT to ensure their software satisfies certain requirements. Developers that meet these criteria, including by providing certain disclosures (i.e., the purpose of the software/algorithm; funding sources; criteria used in the training data set; the process used to ensure fairness; and any external validation process), will receive certain incentives.
Notably, the Final Rule does not apply to employer plan sponsors or group health plans; however, it does have potential implications for healthcare providers who use AI, and it previews how HHS may eventually regulate the use of AI by health plans or within the health insurance industry. The Centers for Medicare and Medicaid Services ("CMS") has already stated that it is considering how health plans and providers use algorithms to identify high-risk patients in order to manage costs. Per CMS, this practice can create negative bias and restrict the delivery of needed healthcare services to certain patients depending on health status, among other factors. Moreover, CMS has noted that certain prior authorization policies and procedures may disproportionately impact underserved populations and may prevent certain patient groups from accessing certain services. To combat these possibilities, CMS is requiring Medicare Advantage organizations to make medical necessity determinations based on the circumstances of the specific individual, without the use of an algorithm or similar software that fails to consider an individual's specific circumstances. If the federal government imposes similar regulation on private group health plans, the practical impact will depend on the specific operation and administration of a given plan, including its funding method and the involvement of any insurer and/or third-party administrator. Any regulation to this effect would create significant compliance obligations for a plan sponsor, which generally retains ultimate compliance and fiduciary responsibility for the plan.
HHS's and CMS's efforts in this regard are part of the Biden Administration's broader AI strategy. Also in December, the Biden Administration announced its commitment to ensuring AI is used appropriately in the healthcare industry. As part of this effort, the Administration obtained voluntary commitments from 28 healthcare provider and payer organizations, including, for example, CVS Health, Boston Children's Hospital, Emory Healthcare, Premera Blue Cross, Houston Methodist, Oscar, and UC Davis Health, to develop, purchase, and implement AI-enabled technology responsibly in their own healthcare activities. Under this commitment, these organizations have agreed to, among other things: (i) develop AI solutions that optimize healthcare delivery and payment by advancing health equity, expanding access, making healthcare more affordable, improving outcomes through more coordinated care, improving the patient experience, and reducing clinician burnout; (ii) work with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective, and safe AI principles, as established and referenced in the Final Rule; and (iii) deploy trust mechanisms that inform users when content is largely AI-generated and has not been reviewed or edited by a human. Additionally, the Office for Civil Rights ("OCR") has proposed a rule providing that the federal civil rights protections under Section 1557 of the Affordable Care Act prohibit discrimination in health programs and activities, including in the use of clinical algorithms in patient care, which may create bias against certain conditions or patients.
The Final Rule, along with the Biden Administration's broader efforts to address AI in the healthcare industry, underscores the importance for group health plans and their plan sponsors of understanding the role AI plays in their benefits programs and the administration thereof. Plan sponsors should continue to follow AI-related legal developments, particularly any pronouncements or regulatory proposals from the federal government, so that they are prepared to comply with any laws or guidance applicable to their benefit plans.