Generative AI at a glance
AI can be divided into two broad categories: predictive and generative. Predictive AI performs statistical analysis to forecast outcomes. Generative AI (GenAI) also contains predictive elements, but it creates something entirely new.
For the purposes of this article, most AI programs being considered for deployment in human resources (HR) and workforce management today are large language models (LLMs), a subset of GenAI. An LLM is pretrained on an enormous volume of data drawn from countless sources, including the World Wide Web. It takes text input, analyzes it, and produces output based on its trained model. LLMs undergo a training process known as “supervised learning,” a category of machine learning that uses pre-labeled datasets to train the AI to recognize patterns and predict results.
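To make the supervised-learning idea concrete, the toy sketch below trains a word-count classifier on a pre-labeled dataset and then predicts a label for new text. The training examples, labels, and function names are entirely hypothetical illustrations, not any vendor's actual method; real LLMs are vastly more complex.

```python
# Minimal sketch of supervised learning: a classifier trained on
# pre-labeled examples. All data and labels here are hypothetical.
from collections import Counter

def train(labeled_examples):
    """Count which words appear under each label (the 'pre-labeled dataset')."""
    word_counts = {}
    for text, label in labeled_examples:
        counts = word_counts.setdefault(label, Counter())
        counts.update(text.lower().split())
    return word_counts

def predict(model, text):
    """Predict the label whose training vocabulary best matches the input."""
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in model.items()}
    return max(scores, key=scores.get)

training_data = [
    ("python sql data pipelines", "engineer"),
    ("litigation contracts compliance", "lawyer"),
]
model = train(training_data)
print(predict(model, "experience with sql and data"))  # engineer
```

The pattern-recognition step, matching new input against labeled training patterns, is the essence of what "supervised learning" means in the article's sense, even though production systems learn far richer representations than word counts.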
To illustrate: a company may be considering a software system to significantly reduce the human capital spent reviewing thousands of resume submissions and selecting candidates to interview for open roles. With GenAI, the company can sift through thousands of resumes in seconds based on criteria it specifies. The desired outcome (e.g., selection of candidates with a specific skill set or expertise) is called a “target.”
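A hedged sketch of what criteria-based screening against a "target" might look like in its simplest form. The skill criteria and resume records are hypothetical, and a real GenAI screener would score free text rather than require exact keyword matches:

```python
# Hypothetical sketch of target-based resume screening.
TARGET_SKILLS = {"python", "sql"}  # the "target": a desired skill set

def matches_target(resume_skills, required=TARGET_SKILLS):
    """Return True if the resume covers every required skill."""
    return required.issubset({s.lower() for s in resume_skills})

resumes = [
    {"name": "Candidate A", "skills": ["Python", "SQL", "Spark"]},
    {"name": "Candidate B", "skills": ["Excel"]},
]
shortlist = [r["name"] for r in resumes if matches_target(r["skills"])]
print(shortlist)  # ['Candidate A']
```

Note how the choice of `TARGET_SKILLS` entirely determines who is shortlisted, which is why the article later stresses reviewing defined targets for bias.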
In the growing movement to maximize efficiency in HR management, many companies are considering or already using GenAI for one or more of the following purposes:
Recruitment (e.g., Greenhouse, Lever)
Workforce management (e.g., Workday, BambooHR)
Performance management (e.g., Talent Signal by Rippling)
Payroll (e.g., Rippling, Gusto)
However, the vast universe of data on which an LLM may rely (one of its key advantages) can create pitfalls if not properly controlled. For example, an LLM may predict or output nonsensical or wholly inaccurate responses (i.e., “hallucinations”).
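One common engineering control against hallucinations is to validate model output against a fixed schema and route anything off-schema to a human. The sketch below is a hypothetical illustration, with `get_model_output` standing in for a real LLM call; it is not any vendor's actual safeguard:

```python
# Hedged sketch: constrain and validate model output so hallucinated
# values are flagged for human review rather than acted on.
ALLOWED_DECISIONS = {"advance", "reject", "human_review"}

def get_model_output(resume_text):
    # Hypothetical stand-in for an LLM call; a real model may return anything.
    return "strongly advance!!"  # off-schema, hallucination-like output

def validated_decision(resume_text):
    """Fall back to human review whenever the output is off-schema."""
    raw = get_model_output(resume_text).strip().lower()
    return raw if raw in ALLOWED_DECISIONS else "human_review"

print(validated_decision("sample resume text"))  # human_review
```

The design choice here matches the article's theme: the system never makes an unreviewed adverse decision on unexpected output; it escalates to a person.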
This is why it is important to leverage GenAI as a tool to create efficiency within human-driven processes, rather than to make employment decisions outright. More than ever, companies need assurances from developers and vendors that their models are trained on diverse datasets to improve the fidelity of the results.
Legal landscape
Given the relative novelty of GenAI and LLMs, state legislatures have spent the past few years trying to set parameters for the development and use of these technologies. These parameters focus on providing greater transparency to job seekers and employees by addressing bias, obligating employers to self-audit the GenAI technology they implement, and requiring employers to give notice of their use of GenAI in recruitment and workforce management processes. Below are some notable examples.
Colorado: Senate Bill 205 (eff. 2026). Colorado has enacted the most comprehensive law in the United States, which (1) requires notice to job seekers if AI contributed to an adverse decision and (2) gives applicants the opportunity to challenge the decision and seek human review of their applications. The state attorney general recently issued a statement indicating that the law is aimed at flagrant violations.

Illinois: House Bill (HB) 3773 (enacted 2024). Employers that use AI to analyze video interviews must (1) notify each applicant in advance, (2) explain the nature of the AI mechanism and how it evaluates applicants, and (3) obtain the applicant’s consent. Employers that rely solely on AI analysis must collect and report applicants’ race and ethnicity data to the Illinois Department of Commerce and Economic Opportunity. HB 3773 also amends the Illinois Human Rights Act (IHRA) to prohibit employers from using AI in a way that has a discriminatory effect on classes protected under the IHRA.

Similar bills are pending in California, Connecticut, Massachusetts, New Jersey, New York, Vermont, and Utah. California’s civil rights agency has proposed regulations that, if approved, would take effect on July 1, 2025, or October 1, 2025, and under which the absence of anti-bias testing of an automated decision-making system would constitute relevant evidence in a discrimination claim. This guidance will likely be used in the future to support discrimination claims involving automated processes in workforce management.
At the same time, the Trump administration is moving toward deregulation of AI development. On January 23, 2025, it issued Executive Order (EO) 14179, Removing Barriers to American Leadership in Artificial Intelligence, which promotes US global leadership in AI development. EO 14179 is directed primarily at government agencies, but it nonetheless signals the administration’s position in favor of the use and development of AI.
Courts are also seeing considerable activity in the GenAI workplace landscape. For example, in Mobley v. Workday, Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024), a job applicant filed a class action lawsuit against Workday, claiming that Workday’s AI-powered applicant screening tool discriminated against him based on his race, age, and disability. Mobley further alleged that he submitted more than 100 job applications to various employers using Workday’s candidate-screening software, each of which was rejected. His theory of liability is that Workday’s algorithm resulted in disparate impact discrimination. The court denied Workday’s bid to dismiss the case and is currently considering a motion to conditionally certify a collective action. The court recently observed that generally qualified people were denied jobs, with one common component being a recommendation from Workday’s machine-learning tool, and suggested that if the tool is “making something that is effectively a common test, that’s a common question.”
Next Steps
Given the potential risks and landmines associated with implementing GenAI technologies in HR and workforce management processes, companies should carefully consider and address the following issues when implementing GenAI-based HR and employment management software:
Comply with notification requirements. Only a handful of states have passed employment-specific AI regulations, but more will soon follow. These laws generally share several common principles, including an obligation to notify applicants about the AI systems in use, what those systems look for, and how the business uses the resulting data.

Keep records and implement strict privacy safeguards. Because AI systems rely on large amounts of applicant data, implement strong data security measures to protect against breaches and to safeguard applicants’ confidential personal information.

Review AI results to assess potential bias. Conduct a privileged internal review of the AI software in use to determine whether bias has developed and, if so, promptly remove it from the system. This includes reviewing the inputs, how the technology undergoes the supervised learning process, and the defined targets.

Understand how your AI vendor’s system processes data. Learn how the AI technology under consideration or in use creates criteria, applies controls, and reaches final decisions. For example, if your company works with a GenAI vendor to screen resumes, you may be able to work with the vendor to tune the program to capture candidates broadly (perhaps starting with how the job posting itself is written) so the system casts a wide net rather than excluding candidates who appear qualified.

Work with your attorney to review your vendor agreement. Determine how the vendor will work with you to address potential issues that arise when implementing AI technology.

Develop your own AI. If business considerations permit, develop in-house AI tools to ensure compliance with federal, state, and local laws and to tailor their use to your business needs.

Provide training on AI systems to staff involved in hiring procedures.
Make sure staff know what to look for when implementing an AI system, and train them to explain the AI system to applicants who ask about the process.

Allow individuals with disabilities to request reasonable accommodations. When implementing technology that may screen out or otherwise adversely affect individuals with disabilities, offer an accessible alternative for those requesting accommodation.

Engage the workforce meaningfully. Increased technology, concerns about job displacement, and highly publicized fears about AI can cause anxiety, which can lower morale. Take time to measure workplace sentiment related to AI and, where business needs permit, address employee concerns.
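To illustrate what a first-pass bias self-audit of AI results might involve, the sketch below computes adverse-impact ratios under the conventional "four-fifths rule" often used as a screening heuristic for potential disparate impact. The selection counts are hypothetical, and a real audit should be run under privilege with counsel and far more statistical rigor:

```python
# Hedged sketch of a bias self-audit using the four-fifths rule:
# compare each group's selection rate to the highest group's rate;
# ratios below 0.8 are conventionally flagged for further review.
def adverse_impact_ratios(groups):
    """groups maps group name -> (selected, applicants); all counts hypothetical."""
    rates = {g: selected / applicants
             for g, (selected, applicants) in groups.items()}
    top = max(rates.values())
    return {g: round(rate / top, 2) for g, rate in rates.items()}

audit = adverse_impact_ratios({
    "group_a": (50, 100),  # (selected, applicants)
    "group_b": (20, 100),
})
print(audit)  # {'group_a': 1.0, 'group_b': 0.4}
```

Here group_b's ratio of 0.4 falls well below the 0.8 threshold, which is the kind of result that should prompt the privileged internal review described above.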
McDermott’s cross-practice team is at the forefront of the evolution and continued development of AI, including its legal and business impacts. For questions or guidance on navigating the use and implementation of GenAI in the workplace to increase employee satisfaction and avoid litigation, contact the authors of this article or your regular McDermott lawyer. Learn more about our AI Toolkit.