At last week’s FP AI conference, Rep. Jay Obernolte set out to dispel two misconceptions about artificial intelligence: first, that AI is largely unregulated; and second, that countless new laws must be passed in response to the rise of AI. Whether or not you use AI to do it, it is already illegal to steal people’s money, just as it is already illegal to make race-based employment decisions. He and other federal leaders promoting the newly released America’s AI Action Plan aim to protect people from harmful and malicious uses of AI by focusing on the consequences rather than regulating the tools themselves. Below are five points Rep. Obernolte shared about his vision for AI regulation, along with five tips from leaders of FP’s AI, Data and Analytics Practice Group.
From left: Dave Walton, co-chair of FP’s AI Practice Group; Rep. Jay Obernolte (R-CA); Erica Gind, vice chair of FP’s AI Practice Group
Five insights into the future of AI regulation
Rep. Jay Obernolte (R-CA) is clearly excited about the future of AI, and his background shows why. A computer engineer and video game developer, he is the only member of Congress with a graduate degree in artificial intelligence, and he serves as co-chair of the Bipartisan House Task Force on Artificial Intelligence. Below are five important points he made about how the federal government is approaching AI regulation.
1. Rely on federal agency expertise: Rep. Obernolte pointed out that sector regulators are best suited to address issues arising in their fields of expertise. The Equal Employment Opportunity Commission (EEOC), for example, can assess the discrimination risks AI poses in employment tools, while the Occupational Safety and Health Administration (OSHA) is best positioned to address AI risks to workplace safety, such as how monitoring systems are used in manufacturing. His view: it is easier to teach sector regulators about AI than to teach the AI sector about workplace anti-discrimination and safety compliance.
2. Take a “hub-and-spoke” approach: The same AI technology can be high-risk in one use and low-risk in another. The FDA, for example, oversees the safety of medical devices, a high-risk area where AI tools carry very different stakes than low-risk ones: a diagnostic tool may be fine for a workplace wellness app but unsuitable for diagnosing cancer. The hub-and-spoke model allows individual agencies (the spokes) to tailor their approach to how the technology is actually used.
3. Protect people from malicious AI use: Law enforcement must have the tools to combat malicious uses of AI, such as cyber fraud and theft. Rep. Obernolte noted that while AI creates new ways to commit crimes, there is no need to pass new laws making already-illegal conduct illegal again. Fraud and theft are already crimes; what has changed is how these crimes are being committed and how people must be protected from them.
4. Avoid a patchwork of laws: As more states consider AI legislation, we risk a balkanized patchwork of 50 different state regimes, which Rep. Obernolte fears will stifle innovation and entrepreneurship. He said Congress needs to make clear where the guardrails of interstate commerce lie while letting the states remain the laboratories of democracy they have always been.
5. Encourage bipartisan support: AI regulation is not a partisan issue, which should allow Congress to act quickly. The lawmaker said he is confident we will see bipartisan action, emphasizing that his vision for AI regulation is to prevent malicious use while encouraging innovation and entrepreneurship.
Rep. Jay Obernolte (R-CA) speaking at the FP AI conference
Five Tips for Employers
As the federal government continues to shape the rules on AI use, employers will want to get ahead of the curve and work proactively on workplace policy and compliance. We asked the leaders of FP’s AI, Data and Analytics Practice Group, led by Dave Walton and Erica Gind, to share their top five tips.
1. Create an AI governance plan: Governance means building a process, following it, and documenting it. Click here for the key steps to ensure your AI technology meets not only your company’s values and customer expectations, but also the legal standards courts and government investigators are applying.
2. Protect your business from AI hallucinations and deepfakes: An AI “hallucination” occurs when generative AI produces false or misleading information that sounds convincingly real, which your employees may mistakenly rely on. AI deepfakes are more deliberate: cybercriminals use them to forge identities and infiltrate organizations. Both can cause serious damage to the companies involved. Here are a few steps you can take to protect your business from AI hallucinations, along with 10 things you can do to avoid falling for deepfake scams.
3. Track litigation trends: With lawsuits over the use of AI in the workplace popping up around the country, the resulting court decisions will inevitably shape employers’ policies and practices. Here are some issues to track:
4. Review the White House AI Action Plan: The Trump administration’s plan, released on July 23, identifies more than 90 federal policy actions aimed at creating a roadmap to “global AI dominance” across innovation, infrastructure, and international diplomacy and security. It has major implications not only for AI developers and the tech sector, but also for many employers and employees across the US workforce. Read more about America’s AI Action Plan and our top 10 takeaways for employers.
5. Keep up with state regulations: State lawmakers aren’t waiting for Congress to act. From aggressive new regulations to proposed legislation, states are moving full speed ahead to define how AI technologies can and should be used, particularly in hiring and employment. Here’s a summary of what to track at the state level:

