Fiduciaries should be aware of recent developments related to AI, including new and emerging state laws, increased state and federal interest in AI regulation, and the growing role of AI in ERISA litigation. As we have previously discussed here, much of the attention has focused on AI's impact on retirement plans, but plan fiduciaries of all types, including those of health and welfare benefit plans, should also stay current on AI developments.
Recent state law changes
Many states have recently enacted new laws focused on AI, some of which regulate employers' HR decision-making processes. Key examples include:
California – In 2024, California enacted more than 10 AI-related laws addressing topics such as:
Using AI on datasets containing names, addresses, or biometric data.
Using AI to communicate medical information to patients.
AI-driven decision-making in health care and prior authorization.
For more information on California’s new AI laws, see Foley’s client alert, Decoding California’s Recent AI Laws.
Illinois – Illinois enacted a law that prohibits employers from using AI in employment practices in a way that has a discriminatory effect, regardless of intent. The law also requires employers to notify employees and applicants when AI is used for workplace-related purposes.
For more information about Illinois’ new AI law, see Foley’s client alert, Illinois enacts law to protect against discriminatory implications of AI in employment activities.
Colorado – Effective February 1, 2026, the Colorado Artificial Intelligence Act (CAIA) requires employers to use "reasonable care" when deploying AI for certain high-risk uses.
For more information on Colorado’s new AI law, see Foley’s client alert, Regulation of Artificial Intelligence in Employment Decision Making: 2025 Outlook.
Although these laws do not specifically target employee benefit plans, they reflect a trend of states broadly regulating human resources decision-making and have become part of a rapidly evolving regulatory environment. Hundreds of additional state bills, along with AI-related executive orders, were proposed in 2024, hinting at further regulation in 2025. Questions remain about how these laws intersect with employee benefit plans and whether federal ERISA preemption applies to state regulatory efforts.
Recent federal actions
The federal government recently issued guidance aimed at preventing discrimination in the provision of certain health care services and completed a Request for Information (RFI) regarding AI regulations that may be relevant to the financial services industry.
U.S. Department of Health and Human Services (HHS) Civil Rights AI Nondiscrimination Guidance – HHS, through its Office for Civil Rights (OCR), recently issued a "Dear Colleague" letter entitled Ensuring Nondiscrimination Through the Use of Artificial Intelligence and Other Emerging Technologies. The guidance emphasizes that the use of AI and other decision support tools in health care must comply with federal antidiscrimination laws, particularly Section 1557 of the Affordable Care Act (Section 1557).
Section 1557 prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in health programs or activities receiving federal financial assistance. OCR's guidance stresses that health care providers, health plans, and other covered entities may not use AI tools in ways that have a discriminatory impact on patients, including in decisions about diagnosis, treatment, and resource allocation. Employers and plan sponsors should note that although this guidance applies to some health plans, including those falling under Section 1557, it does not apply to all employer-sponsored health plans.
Treasury Issues RFI on AI Regulation – In 2024, the U.S. Treasury Department issued an RFI on the Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector. The RFI raises several important considerations, including AI bias and discrimination, consumer protection and data privacy, and risks to third-party users of AI. Although the RFI has not yet resulted in specific regulation, it highlights the federal government's focus on the impact of AI on financial services and employee benefits services. The ERISA Industry Council, a nonprofit representing large U.S. employers that sponsor employee benefit plans, commented that AI is already being used in retirement preparation applications, chatbots, portfolio management, trade execution, and wellness programs. Future regulations may target these and related areas.
ERISA litigation powered by AI
Potential ERISA claims against plan sponsors and fiduciaries are now being identified using AI. For example, the AI platform Darrow AI claims its technology is:
"Designed to simplify the analysis of large amounts of data from plan documents, regulatory filings, and litigation. Our technology accurately identifies discrepancies, breaches of fiduciary duty, and other ERISA violations. With our advanced analytics, you can quickly identify potential claims, assess their financial impact, and build a strong case for retirement and health benefit claims, effectively advocating for employees seeking justice."
Additionally, the platform claims it can analyze diverse data sources, including news, SEC filings, social networks, academic papers, and other third-party sources, to discover violations affecting a wide variety of employers.
Notably, health and welfare benefit plans have also emerged as a focus area for AI-powered ERISA litigation. AI tools are used to analyze claims data, provider networks, and administrative decisions, potentially identifying discriminatory practices and inconsistencies in benefit decisions. For example, AI can highlight patterns of bias in prior authorizations or inconsistencies in how mental health parity laws are applied.
As these tools become increasingly sophisticated, fiduciaries should consider that potential claimants may use AI to scrutinize their decisions and plan operations with unprecedented precision, increasing litigation risk accordingly.
Next steps for fiduciaries
To navigate this evolving landscape, fiduciaries should take proactive steps to manage AI-related risks while leveraging the technology's benefits:
Evaluate AI tools: Conduct a formal evaluation of AI tools used for plan administration, participant engagement, and compliance. This assessment should examine the relevant algorithms, data sources, and decision-making processes, and confirm that the tools have been tested for compliance with antidiscrimination standards and do not produce biased results.
Audit service providers: Conduct a comprehensive audit of plan service providers to assess their use of AI. Require detailed disclosures about the AI systems in operation, focusing on how they mitigate bias, ensure data security, and comply with applicable regulations.
Review and update policies: Develop or revise internal policies and governance frameworks to monitor the use of AI in plan operations and compliance with antidiscrimination laws. These policies should set guidelines for deploying and monitoring AI technologies, ensuring alignment with fiduciary responsibilities.
Enhance risk mitigation:
Fiduciary liability insurance: Consider obtaining or enhancing fiduciary liability insurance to address potential claims arising from the use of AI.
Data privacy and security: Strengthen data privacy and security measures to protect sensitive participant information processed by AI tools.
Bias mitigation: Establish procedures to regularly test and validate AI tools for bias to ensure compliance with antidiscrimination laws.
Integrate AI considerations into requests for proposal (RFPs): When selecting vendors, include specific AI-related criteria in the RFP. For example, require vendors to demonstrate or certify compliance with state and federal regulations and to follow industry best practices regarding the use of AI.
Monitor legal and regulatory developments: Stay informed about new state and federal AI regulations, as well as developments in case law related to AI and ERISA litigation. Establish a process for periodic legal reviews to assess how these developments affect plan operations.
Provide training: Educate fiduciaries, administrators, and related staff about the potential risks and benefits of AI in plan administration, new technologies, and the importance of complying with applicable laws. Training should cover legal obligations, best practices for AI implementation, and strategies to reduce risk.
Document due diligence: Maintain comprehensive documentation of all steps taken to evaluate and monitor AI tools, including records of audits, vendor communications, and internal policy updates. Clear documentation serves as an important defense in the event of litigation.
Assess the applicability of Section 1557 to your plan: Health and welfare plan fiduciaries should assess whether their organization's health plan is subject to Section 1557 and whether OCR's guidance applies directly to its operations. If it does not apply, the reasons should be determined and documented.
Fiduciaries must remain vigilant about the growing role of AI in employee benefit plans, especially amid regulatory uncertainty. Being proactive and adopting a robust risk management strategy can help reduce risk and ensure compliance with current and anticipated legal standards. By focusing on diligence and transparency, fiduciaries can leverage the benefits of AI while protecting the interests of plan participants. Foley & Lardner LLP has experienced attorneys in AI, retirement plans, cybersecurity, labor and employment, finance, fintech, regulatory matters, health care, and ERISA who regularly advise fiduciaries on the potential risks and liabilities associated with these and other AI-related issues.