This is the second article in the AI in the Workplace series, which explores the rapidly evolving landscape of artificial intelligence and its impact on employers. Part 1 examined worker surveillance and FCRA responsibilities. Part 2 shifts the focus to the broader legal uncertainty surrounding AI adoption in employment practices.
Artificial intelligence (“AI”) is moving faster than the law can keep up. The competition to adopt AI has created a new Wild West: high risk, high reward, and few rules. As AI continues to redefine industries and workplaces, the laws surrounding its use remain surprisingly underdeveloped. Without clear federal guidance on how AI should be used in employment practices, states have begun to step in, but the resulting patchwork leaves employers without consistent standards to follow. Pennsylvania Governor Josh Shapiro is looking to position Pennsylvania as a leader in AI innovation, but his recent initiatives still leave a sizable gap for employers navigating this new terrain. The digital gold rush can feel like a legal minefield for employers trying to stay compliant.
Pennsylvania is driving innovation
AI unlocks exciting and innovative opportunities for employers, and Pennsylvania Governor Josh Shapiro is guiding the Commonwealth to embrace 21st century technology.
On March 21, Governor Shapiro joined labor leaders to announce positive findings from the Commonwealth’s generative AI pilot program, the first program of its kind. Launched in early 2024, the pilot showed that state workers using ChatGPT saved an average of 95 minutes a day on tasks such as writing, research, summarizing, and IT support.
Governor Shapiro began his push for responsible AI integration with a 2023 executive order establishing core values for the responsible and ethical use of generative AI in state agencies. The order mandated a governance structure to ensure transparency and bias testing and created a Generative AI Governing Board. To further these efforts, Governor Shapiro partnered with InnovateUS at the end of 2024 to train state employees on the ethical use of AI.
Despite the executive push, Pennsylvania’s legislative progress has been slower. House Bill 594 (“HB 594”), pending before the Pennsylvania House Labor and Industry Committee, is the only bill seeking to establish a framework for regulating AI in the workplace. HB 594 would amend the Pennsylvania Human Relations Act to require employers to:
Notify applicants of the use of AI prior to employment interviews, obtain the applicant’s consent, and conduct regular bias audits of the AI tool.
Although HB 594 has been introduced, it has not yet advanced out of committee, and it offers Pennsylvania employers no guidance on how to ethically and effectively implement AI tools in the workplace.
Federal and State Movements: A Big Picture
Movement is also occurring on the national stage. President Trump recently signed an executive order focused on “removing barriers” to help the United States become a global AI leader. The order called for the development of an AI Action Plan to define priority policy measures and shape a national regulatory framework; a request for information was issued on February 25, with public comments from stakeholders due by March 15. A more relaxed regulatory approach is expected at the federal level, reflecting the Trump administration’s focus on innovation and economic growth. Notably, the U.S. Equal Employment Opportunity Commission (“EEOC”) and the Department of Labor have withdrawn previous guidance on AI and workplace discrimination, with relevant materials removed from the EEOC website.
However, this does not remove employers’ obligation to comply with existing labor and anti-discrimination laws when using AI technology. Meanwhile, some states are already moving forward with stricter oversight. For example:
Colorado and Illinois have enacted laws regulating the use of AI in employment contexts with the goal of protecting against discrimination. New York, Texas, and Virginia have introduced legislation to establish legal frameworks for the use of AI tools in the workplace. In California, Senate Bill 7, known as the “No Robo Bosses Act,” would, if adopted, impose strict restrictions on automated decision-making systems (“ADS”), requiring that major employment decisions involve a human reviewer and prohibiting certain uses of ADS in the workplace.
Ethical dilemmas and legal liability that all employers may face
While employers are pushing hard to adopt innovative AI tools in the workplace, the ethical dilemmas those tools present will affect employers across the country, especially as concerns such as AI-related discrimination claims come to the forefront.
Pennsylvania’s neighbor, New Jersey, offers an important example of proactive governance in this evolving area of law. In January 2025, the New Jersey Attorney General and the Division on Civil Rights (“DCR”) issued guidance clarifying how the New Jersey Law Against Discrimination (“NJLAD”) applies to “algorithmic discrimination,” meaning discrimination caused by the use of automated tools. Automated tools can help employers make business decisions such as whom to hire, fire, or promote. The guidance outlines how discriminatory outcomes can emerge from the way AI tools are designed, trained, or deployed. Notably, it makes clear that employers can be held liable for algorithmic discrimination even if they do not understand the tool’s inner workings and the tool was developed by a third party.
What does this mean for Pennsylvania employers?
Efforts are underway to advance AI laws in surrounding states, but the path ahead for Pennsylvania employers remains uncertain. Whichever approach Pennsylvania takes, whether it leaps toward comprehensive safeguards or settles for loose oversight, will have a profound impact on employers. Until then, Pennsylvania employers are left to navigate this unregulated area of law on their own. Employers should therefore proceed carefully when implementing automated decision-making tools and AI in the workplace.
Employer action items to mitigate risk
With federal law lagging and Pennsylvania yet to implement a legal framework, employers can take proactive steps to protect their businesses and reduce exposure to liability.
1. Develop an internal AI policy – Establish clear internal policies governing the use of AI and automated decision-making tools. This includes defining who oversees implementation, how tools are chosen, and what safeguards are in place.
2. Understand the tools you use – Do your research to understand the design, features, and data used to train the tools.
3. Audit for bias and discrimination – Periodically audit AI tools for bias or discrimination, especially when they are used in employment practices.
4. Stay informed – What is legally acceptable today may create liability tomorrow. Keeping up with proposed laws, regulatory guidance, and legal trends is essential to keeping your business compliant and avoiding liability during this digital gold rush.