The high-profile AI policy report commissioned by California Gov. Gavin Newsom sets the stage for potential new AI regulations that could directly affect hiring processes, workplace surveillance, and AI-fueled employment decisions. The March 18 draft report is open for public feedback until April 8 and may be revised before finalization, but its recommendations have already shaped legislative proposals, including the proposed AI safety bill (SB 53), which could introduce new AI-related compliance and disclosure obligations. AI regulations are coming to California soon, and employers must prepare for this new wave. What are the four biggest takeaways from this report, and what should you do about them?
Four biggest AI policy proposals that could impact employers
You can read the entire 41-page report from the Joint California Policy Working Group on AI Frontier Models here, but we've distilled the four biggest policy proposals that will affect workplaces:
Mandatory AI risk assessments and third-party audits
The report strongly emphasizes independent third-party AI safety evaluations to prevent potential harm.
What this means for employers:
Companies that use AI for hiring, promotion, performance reviews, and termination may soon be required to conduct formal risk assessments. They may need to engage third-party auditors to confirm that AI tools do not introduce bias, privacy risks, or unfair employment practices. AI-equipped systems may have to demonstrate compliance with risk mitigation protocols to avoid liability.
Transparency requirements for AI development and deployment
California policymakers are increasingly focused on requiring AI companies and employers to disclose an AI model's capabilities, the data it uses, and how it makes decisions.
What this means for employers:
HR and compliance teams may soon need to explain how AI-driven hiring and workplace decisions are made. AI developers and deployers may need to disclose the data sources behind their AI models and avoid relying on biased or illegally obtained information. AI-powered workplace tools may need to include explainability features that clarify how decisions are reached.
Action Steps:
Ensure AI vendors provide documentation on training data sources, bias mitigation, and model accuracy.
Establish internal policies for notifying employees about AI use and explaining AI-driven decisions (e.g., hiring, performance assessments).
Implement data governance standards to track the sources and legal compliance of AI training datasets.
AI whistleblower protections and compliance monitoring
The report advocates for stronger legal protections for employees who expose AI-related risks. This means businesses could face new liability for retaliating against workers who report AI-related issues.
What this means for employers:
AI whistleblowers may be protected under expanded labor laws, similar to those covering workplace safety violations. Employers may face penalties for failing to investigate AI-related complaints. Internal compliance teams should update their whistleblower policies to incorporate AI concerns.
Action Steps:
Consider updating your whistleblower and compliance policies to explicitly cover AI-related concerns.
Train HR and legal teams to handle AI-related employee complaints without retaliation.
Establish internal AI monitoring mechanisms to identify and mitigate risks before they escalate into costly conflicts.
Adverse event reporting and AI incident disclosure
The report calls for a mandatory reporting system that would require businesses to disclose AI-related failures, discrimination, or harm.
What this means for employers:
If AI causes harm (such as biased hiring decisions or data breaches), employers may need to report the incident promptly. Companies using AI in workforce management may face more stringent documentation and reporting requirements. Regulators may impose penalties for failing to disclose known AI risks.
Action Steps:
We recommend developing an incident response protocol for AI-related risks and harms.
Keep detailed records of AI-driven decisions, including hiring and performance assessments.
Assign AI compliance personnel or designate members of the legal team to oversee AI reporting obligations.
What’s next?
Again, this draft report was released to seek feedback from stakeholders, including employers. Your organization can submit comments via the online form by April 8. The Joint California Policy Working Group on AI Frontier Models will review all comments and incorporate them into the final report, expected by June 2025.
What else is brewing?
Meanwhile, several AI-related bills are working their way through the state's legislative process, including:
Assembly Bill 1018 aims to regulate AI decision-making tools in employment and other key areas, placing strict oversight on automated decision systems (ADS) to prevent discrimination in the workplace and elsewhere. You can read about this bill here.

The "No Robo Bosses Act" (Senate Bill 7) seeks to regulate the use of automated decision systems in employment, strictly limiting AI-driven tools in hiring, promotion, discipline, and termination decisions. You can read all about this bill here.

Senate Bill 53 builds on a failed legislative effort from last year (which you can read about here) by introducing whistleblower protections for AI workers, increasing transparency requirements for AI models, and potentially mandating independent risk assessments to ensure that AI systems do not cause significant societal or workplace harm.
What should I do?
With more than half of the world's top AI companies headquartered in California, the state's regulations are likely to influence laws in other states. Even companies operating outside of California should monitor these developments, because similar laws could arrive soon. Work with your legal and HR teams to consider the recommendations above.
If you need help formulating comments in response to this report, consider contacting the FP advocacy team to help shape your statement and make your voice heard.
Conclusion
We will continue to monitor developments and provide updates as warranted, so subscribe to Fisher Phillips' Insight System to gather up-to-date information about AI and the workplace. If you have questions about these developments and how they may affect your practices, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our California offices, or any attorney in our AI, Data, and Analytics Practice Group.