Employers need to understand how AI can be used for HR functions such as hiring, monitoring, and performance assessment, both to help the organization maintain its competitiveness and to manage the legal risks related to AI use. Employees, for their part, need to be aware of organizational standards regarding AI use, know how to avoid creating data security and privacy risks, and understand how to use AI in compliance with employer policies and practices.
Regulatory background
As reported here and here, AI-specific regulation remains relatively limited, confined largely to states such as California, Colorado, and Virginia. Most of the new laws amend or supplement existing statutes. While most of these laws target "high-risk systems" and "algorithmic discrimination," the breadth and complexity of the issues have made it extremely challenging for policymakers to reach consensus on how to regulate AI.
At the federal level, Congress shows no indication that it will pursue sweeping restrictions on AI. Just two weeks ago, however, tucked into the U.S. House of Representatives Energy and Commerce Committee's budget reconciliation markup was a provision that would prohibit states from enforcing AI-related laws or regulations for ten years from enactment. The moratorium passed the House on May 22, with minor changes aimed at ensuring the provision focuses on interstate commerce. It is not clear that the moratorium will pass the Senate, but if it is enacted, it is almost certain to face immediate challenges on both interstate commerce and Tenth Amendment grounds. Congress thus appears willing to shape federal AI policy in some form, and employers should continue to monitor developments.
In the employment context, many of the regulations that do exist concern the use of AI in recruiting and hiring. One of the first laws to regulate the use of AI was New York City's Local Law 144, and since it took effect in 2023, a number of states have imposed restrictions on the use of AI in employment. State laws generally focus on transparency and the avoidance of discrimination. Covered companies generally must provide specific disclosures regarding their use of AI and obtain prior consent from employees or applicants before using AI in employment decisions.
Employer-specific AI challenges
The rise of remote work has created new challenges, primarily related to workforce monitoring. Employers using activity trackers should ensure they do not violate AI or data privacy regulations. Similar AI and data privacy issues arise when physical badges and related technologies are used to monitor employee movements to inform performance assessments and employment decisions.
In addition to employee privacy, employers should be concerned about their own privacy interests, namely the confidentiality of the company's proprietary information and of HR information, including employee health information. Employers should train employees on the importance of avoiding inappropriate disclosures when using AI.
Employers should consider the possibility that using AI to screen applicants could result in discrimination claims. An algorithm may be trained on outdated data that perpetuates discriminatory employment decisions. For example, an algorithm may be based on data from a time when employees in a particular occupation were primarily white or male; in that case, it may favor white male applicants based on this historical data. This could result in discrimination claims by women or minority applicants who were screened out before the employer was ever aware of them. AI applicant screening has also been reported to screen out older applicants and those with disabilities.
For current employees, AI monitoring tools may fail to account for reasonable accommodations. For example, an employer may use an AI tool to monitor and track employee productivity; the tool flags atypical breaks but is not programmed to consider accommodations. As a result, the tool may flag an employee for disciplinary action for taking excessive breaks even though the employee has been granted additional breaks as a reasonable accommodation for a disability. As previously reported, the California Privacy Protection Agency has issued proposed regulations to address this issue, and comments on the proposed regulations are still being accepted.
The above shows that employers must not rely too heavily on the output of their AI tools. Humans need to verify that the AI tools are working correctly and validate their results; otherwise, the employer may be liable under anti-discrimination or other employment laws. Employers that use vendor-provided AI tools should understand that both the employer and the vendor can be liable for violations. Employers should therefore exercise appropriate oversight and consider including indemnification clauses in their contracts with vendors.
Recommendations for employers using AI
As employers continue to adopt and integrate AI, three basic elements should be put in place and maintained:
1. Appoint a cross-functional group with the authority to oversee all AI issues within the organization, giving it comprehensive oversight of AI use across the business.
2. Establish ground rules for AI use, for example by adopting an enterprise-wide "AI acceptable use" policy or by developing a comprehensive governance framework for managing and assessing AI compliance based on the NIST AI Risk Management Framework, and provide ongoing training to employees.
3. Create a risk assessment process to evaluate AI tools both before and after deployment. This process should be updated regularly to reflect changes in AI use, risk tolerance, or regulatory developments.