U.S. Senators Shelley Moore Capito and Amy Klobuchar have moved to combat one of the fastest-growing consumer threats in the age of generative AI by introducing the bipartisan Artificial Intelligence Fraud Prevention Act.
The bill squarely targets AI-powered identity fraud, which uses cloned voices, synthetic images, and faked video calls to trick victims into sending money or divulging sensitive personal information.
If passed, this bill would be one of the most direct federal responses to consumer harm caused by generative AI to date.
Rather than regulating how AI systems are built, the bill focuses on how AI systems can be misused, treating synthetic identity theft as an evolution of traditional fraud rather than an entirely new category of crime.
Lawmakers supporting the bill say the distinction is intentional and necessary as AI tools are rapidly becoming part of everyday communication.
The bill comes amid mounting evidence that AI-based fraud is accelerating faster than existing consumer protection laws can handle.
While precise figures isolating AI-powered identity fraud are not yet available, multiple reliable estimates indicate that overall losses from AI-enabled fraud are already substantial and growing rapidly, with impersonation tactics a significant contributor.
The Federal Bureau of Investigation (FBI) has reported millions of fraud complaints totaling more than $50 billion in losses since 2020, with an increasing proportion attributed to deepfakes and synthetic identity schemes.
According to recent data from the Federal Trade Commission (FTC), Americans lost nearly $2 billion last year to scams initiated via phone calls, text messages, and emails, with phone-based scams accounting for the highest losses per victim.
According to recent research and industry forecasts, fraud losses enabled or amplified by generative AI could reach roughly $40 billion in the U.S. by 2027, up from about $12 billion in 2023, a compound annual growth rate of more than 30 percent, as criminals deploy AI to create more convincing scams and evade traditional defenses.
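For readers who want to check that figure, the implied growth rate follows from the standard compound-annual-growth-rate formula; below is a minimal sketch in Python using the cited $12 billion (2023) and $40 billion (2027) estimates.

```python
# Compound annual growth rate (CAGR) implied by the cited forecast:
# losses growing from roughly $12B in 2023 to roughly $40B in 2027 (4 years).
start, end, years = 12e9, 40e9, 4

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~35.1%, consistent with "more than 30 percent"
```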
Research shows that when individuals fall victim to AI voice cloning scams, the majority report financial losses, with many victims losing hundreds to thousands of dollars, and a smaller percentage incurring five-digit losses.
Regulators and consumer advocates argue that generative AI has greatly enhanced these schemes, allowing criminals to convincingly imitate family members, bank representatives, government officials, and even executives at large corporations.
The Artificial Intelligence Fraud Prevention Act aims to close what lawmakers say is a widening legal gap. At its core, the bill would explicitly make it illegal to use AI to reproduce a person’s voice or image for fraudulent purposes.
“Artificial intelligence has made scams more sophisticated, making it easier for scammers to trick people, especially seniors and children, into handing over their personal information and hard-earned money,” Klobuchar said. “Our bipartisan bill will help combat fraudsters who use AI to copy someone’s voice or image.”
“Artificial intelligence has incredible potential, but we also need to be vigilant to prevent harmful uses of the technology, especially when it comes to fraud,” Capito added.
Although identity fraud is already illegal, Klobuchar and Capito argue that many statutes still rely on outdated definitions written decades before synthetic media existed.
By explicitly covering AI-generated audio, images, prerecorded messages, text messages, and video conference calls, the bill is designed to allow prosecutors and regulators to act without stretching analog-era laws to accommodate digital fraud.
A central feature of the bill is the formal creation of an interagency advisory committee on AI-based fraud.
The committee would be responsible for coordinating enforcement and information sharing among agencies such as the FTC, the Federal Communications Commission, and the Treasury Department, which oversees financial crimes and sanctions enforcement.
Coordination is essential, Klobuchar and Capito said, given that AI fraud often spans communication networks, online platforms, and financial systems simultaneously.
The bill would also codify into law the FTC’s existing rules prohibiting the impersonation of government agencies and legitimate businesses.
Supporters argue that the change would give the FTC more power to impose civil penalties and seek restitution for victims, rather than relying primarily on injunctive relief.
The bill would also update the Telemarketing and Consumer Fraud and Abuse Act and the Communications Act of 1934, neither of which has been significantly revised to reflect modern communications technology since the 1990s.
Consumer protection authorities have been warning for months that AI-powered scams are becoming more convincing and harder to detect. The FTC and FBI reported a surge in so-called family emergency scams in which criminals use short audio clips collected from social media to generate near-perfect voice clones.
Victims are often pressured to act quickly, believing they are helping a child or relative in immediate danger. Wire fraud schemes targeting finance departments use similar techniques to impersonate corporate executives.
Reaction to the bill has been largely positive among consumer advocacy groups and financial institutions that have faced the brunt of AI-based fraud.
Banking groups have repeatedly called on Congress to establish clear federal standards, rather than leaving agencies to deal with a patchwork of state laws and voluntary guidelines.
Supporters argue that by targeting deceptive intent rather than the mere creation of synthetic media, the bill leaves room for legal uses of AI in satire, accessibility, entertainment, and artistic expression.
Naturally, technology companies are watching closely. Major platforms have introduced their own defenses in recent months, including call screening tools, fraud detection algorithms, and provenance signals for AI-generated content.
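To illustrate the idea behind such provenance signals: a platform attaches a signed record of a piece of content's origin, and anyone with the key can detect if the content or the record was altered. Below is a minimal, hypothetical sketch in Python; the manifest format, origin labels, and shared key are invented for illustration, and real systems (such as the C2PA content credentials standard) are considerably more involved.

```python
import hashlib
import hmac
import json

# Hypothetical demo key; real provenance systems use public-key signatures.
SHARED_KEY = b"demo-key-not-for-production"

def sign_manifest(media_bytes: bytes, origin: str) -> dict:
    """Attach a provenance manifest: a content hash plus an HMAC signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"origin": origin, "sha256": digest}, sort_keys=True)
    tag = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check that the hash still matches the media."""
    expected = hmac.new(SHARED_KEY, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["tag"]):
        return False  # manifest was altered or forged
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(media_bytes).hexdigest()

audio = b"...raw audio bytes..."
manifest = sign_manifest(audio, origin="recorded-on-device")
print(verify_manifest(audio, manifest))          # True: intact and signed
print(verify_manifest(audio + b"x", manifest))   # False: content was changed
```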
Still, industry groups warn that law enforcement alone will not deter foreign actors operating beyond U.S. jurisdiction.
Some have called for the bill’s advisory committee to prioritize international cooperation and information sharing, especially as AI models capable of producing realistic audio and video clones are becoming smaller and easier to run locally.
Meanwhile, privacy advocates are urging lawmakers to ensure that anti-fraud efforts don’t covertly expand surveillance of private communications. They warn that the pressure to detect AI fraud could conflict with encryption and user privacy protections if not carefully limited.
Although the bill itself does not mandate new oversight requirements, critics say its actual impact will depend largely on how regulators implement and enforce its provisions.
The anti-fraud proposal highlights a broader shift in Washington’s approach to AI as Congress heads into 2026 with multiple AI bills still under consideration.
After years of abstract discussions about future risks, lawmakers are increasingly responding to the concrete, measurable damage already hitting consumers’ phones, inboxes, and bank accounts.
Whether the new framework can keep up with the speed and adaptability of AI-driven fraud remains an open question, but proponents argue that failing to modernize the law will put Americans at further risk in a world where hearing a familiar voice no longer proves who is really on the other end of the line.
Article topics
AI Fraud | Deepfake Detection | Digital Identity | Financial Crime | Financial Services | Fraud Prevention | Generative AI | Law | US Government

