On November 19, the Senate Commerce Committee’s Subcommittee on Consumer Protection, Product Safety, and Data Security held a hearing on protecting consumers from artificial intelligence-enabled fraud and scams. Witnesses at the hearing testified about how AI technology enables fraud and scams, while senators from both parties asked questions highlighting the need for federal legislation to crack down on such activity. The hearing comes as senators try to pass AI legislation during this lame-duck session of Congress. Subcommittee Chairman Hickenlooper (D-Colo.) specifically discussed five bipartisan AI bills during the hearing, vowing to “cross the finish line and get them signed into law within weeks.”
Opening Statements
In his opening remarks, Subcommittee Chairman Hickenlooper acknowledged the many benefits of AI while cautioning that, for all those benefits, we need to “mitigate and anticipate the concurrent risks this technology poses.” To that end, he specifically discussed five AI bills (covered later in this newsletter) that he believes have bipartisan support, vowing to cross the finish line and get them enacted “within weeks.”
Subcommittee Ranking Member Marsha Blackburn (R-Tenn.) focused on AI-enabled fraud and scams and their already widespread impact. She noted that the FTC Consumer Sentinel Network Data Book “found that fraud increased by $1 billion over the past 12 months, reaching $10 billion,” adding, “And, of course, we know that AI is driving a lot of that.” Senator Blackburn called for a “comprehensive” policy approach to combat fraud and scams driven in part by AI, including “actual online privacy standards,” which Congress has never passed.
Both the Chairman’s and Ranking Member’s statements emphasized that AI-enabled fraud and other harms are a bipartisan concern.
Expert Testimony: Common Concerns and Solutions
The following experts testified at the hearing:
Dr. Hany Farid, an academic who studies deepfakes and other AI-generated or digitally manipulated images; Justin Brookman, director of technology policy at Consumer Reports; Munir Ibrahim, chief communications officer and public policy director at Truepic, a digital content credibility technology provider; and Dorota Mani, the mother of a victim of AI-generated deepfakes.
Expert testimony and responses to senators’ questions focused on four main themes:
Content provenance. Ibrahim and the other panelists pointed out that content provenance, metadata attached to content that reveals whether the content was generated by AI, is currently one of the most promising solutions for making AI-generated content distinguishable from real content. Senator Hickenlooper asked Ibrahim about incentives to expand content provenance technology and make it widely available and used. Ibrahim responded that there are “no financial incentives or consequences for these platforms to better protect consumers or at least be more transparent.”
Comprehensive privacy law. Both the Subcommittee Chairman and Ranking Member noted the need for a comprehensive data privacy law in their remarks. Dr. Farid said, “It should be a crime not to have a data privacy law in this country.”
Holding creators of AI content and AI tools accountable. Several senators and panelists discussed the need to shift the burden from consumers to the companies that create AI content and tools, so those companies ensure their content and tools are not misused or used to harm people. Dr. Farid testified that if you are an AI company, you currently allow anyone to clone someone’s voice by simply clicking a box that says, “I have permission to use their voice.” As previously discussed, the Artificial Intelligence Research, Innovation, and Accountability Act would create a “framework to hold AI developers accountable” for their AI content and tools.
Stronger enforcement. Brookman noted that “fraud and scams are already illegal,” but that insufficient enforcement and weak consequences for getting caught mean “we don’t have enough deterrents against potential fraudsters.” He called on Congress to give the FTC additional resources to hire staff and to expand its legal authority “so it can respond to the threats plaguing the modern economy.”
Bills Discussed at the Hearing
The hearing addressed the five AI bills previously discussed. None of these bills would constitute the comprehensive data privacy legislation that senators and panelists called for, but they would lay the groundwork for greater transparency from AI developers and protect consumers from deepfakes and other harmful AI-generated content.
The Future of Artificial Intelligence Innovation Act of 2024
The Future of AI Innovation Act would permanently establish the Artificial Intelligence Safety Institute, which would create voluntary standards and guidelines for AI and research safety issues in AI models. The institute would also create a testing program allowing foundation model vendors to test their models “across a variety of modalities.” The bill would additionally direct the National Institute of Standards and Technology (NIST) and the Department of Energy to establish testbeds for discovering new materials using AI systems.
Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act
The VET Artificial Intelligence Act would require the Director of NIST to develop “voluntary guidelines and specifications” for internal and external artificial intelligence assurance: unbiased third-party evaluations of AI models that identify errors in AI functionality, validate models, and verify claims about a model’s capabilities.
Artificial Intelligence Research, Innovation, and Accountability Act
On research and innovation, the Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA) would direct the Secretary of Commerce to conduct research on the provenance and authentication of human- and AI-generated copyrighted works, and direct the Comptroller General to study “statutory, regulatory, and policy barriers to the use of AI within the federal government.” On accountability, the bill would create standardized definitions for common AI terms; transparency requirements for the use of AI (such as disclosure that content is AI-generated); and disclosure and reporting requirements for high-impact AI systems, including systems involved in decision-making related to housing, employment, credit, education, health care, and insurance.
COPIED Act
The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) aims to address the rise of deepfakes. The bill would require federal agencies to develop standards for the detection of AI-generated content, establish AI disclosure requirements for developers and deployers of AI systems, and prohibit the unauthorized use of copyrighted content to train AI models.
TAKE IT DOWN Act
The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act) would criminalize the publication of non-consensual intimate images, including certain AI-generated deepfakes, and require social media platforms to establish a process for removing such content from their platforms.
The Lame-Duck Period: Prospects for AI Action
Whether there will be enough momentum to get an AI bill across the finish line during the lame-duck Congress is an open question. As noted, lame-duck periods are complicated, especially when control of the Senate will change in the next Congress. The final weeks of the Democratic administration may prompt Democratic senators to act on AI, but any AI bill will have to compete with other legislative priorities. Furthermore, while all of the AI bills discussed during the subcommittee hearing have bipartisan support, Republicans, who will hold majorities in both chambers, may prefer to wait until the start of the next Congress to take action on AI. The prospects for passage of these bills remain uncertain, but the remaining three weeks of the current Senate session will make their fate clearer.
We will continue to monitor, analyze, and report on developments in AI law during the lame-duck session and the 119th Congress.
Matthew Tikhonovsky also contributed to this article.