Washington — As artificial intelligence reaches a pivotal point in its development, the federal government is shifting from an approach that prioritizes AI safeguards to one focused on eliminating red tape.
While this is an encouraging prospect for some investors, it raises uncertainty about the future of the technology’s guardrails, especially when it comes to the use of AI deepfakes in elections and political activities.
President-elect Donald Trump has vowed to rescind President Joe Biden’s sweeping AI executive order, which aimed to protect people’s rights and safety without stifling innovation. He hasn’t said what he would do in its place, but the platform of his recently reorganized Republican National Committee calls for AI development to be “rooted in free speech and human flourishing.”
It is an open question whether the soon-to-be fully Republican-controlled Congress will be interested in passing AI legislation. Interviews with more than a dozen lawmakers and industry experts revealed continued interest in promoting the use of the technology in national security and cracking down on non-consensual and explicit images.
But the use of AI in elections and in spreading misinformation is likely to take a backseat as Republican lawmakers turn away from anything they see as potentially stifling innovation and free speech.
“AI has incredible potential to improve human productivity and positively benefit the economy,” said California Republican Rep. Jay Obernolte, widely seen as a congressional leader on emerging technology. “We need to strike the right balance between enabling innovation and putting frameworks in place to prevent harmful events from occurring.”
Those with a stake in artificial intelligence have been hoping for comprehensive federal legislation for years. But Congress, stalled on nearly every issue, has failed to pass AI legislation, producing instead a series of proposals and reports.
Some lawmakers believe there is enough bipartisan interest on some AI-related issues to pass legislation.
“I think there are some Republicans who are very interested in this topic,” Democratic Sen. Gary Peters said, citing national security as one area of potential agreement. “I am confident that we will be able to work with them as well as we have in the past.”
The extent to which Republicans want the federal government to intervene in AI development remains unclear. Before this year’s election, few expressed interest in having the Federal Election Commission or Federal Communications Commission regulate AI-generated content, arguing it raised First Amendment issues even as the Trump campaign and other Republicans used the technology to create political memes.
When Trump was elected president, the FCC was in the midst of a lengthy process to develop AI-related regulations. That work was halted under long-standing rules governing the transition between administrations.
President Trump has expressed both interest and skepticism about artificial intelligence.
In an interview with Fox Business earlier this year, he called the technology “very dangerous” and “very scary” because “there is no real solution.” But his campaign and supporters also embraced AI-generated imagery more readily than his Democratic opponents did, frequently using it in social media posts that were meant not to mislead but to entrench Republican political views.
Elon Musk, a close ally of President Trump and the founder of several companies that rely on AI, has also expressed a mix of concern and excitement about the technology, depending on how it is applied.
During the campaign, Musk used his social media platform X to promote AI-generated images and videos. Operatives from Americans for Responsible Innovation, a nonprofit group focused on artificial intelligence, have publicly pressed President Trump to select Musk as his top adviser on the technology.
“We think Elon has a very sophisticated understanding of both the opportunities and risks of advanced AI systems,” said Doug Kalidas, the group’s top operative.
But the prospect of Musk advising Trump on artificial intelligence worries others. Peters argued that Musk’s financial stake in the technology could color the guidance the president receives.
“That’s a concern,” the Michigan Democrat said. “If someone has a strong financial interest in a particular technology, you should take their advice with a grain of salt.”
In the run-up to the election, many AI experts raised concerns about 11th-hour deepfakes: lifelike AI images, videos, or audio clips designed to sway or confuse voters as they headed to the polls. Although those fears didn’t materialize, AI still played a role in the election, said Vivian Schiller, executive director of Aspen Digital, part of the nonpartisan Aspen Institute think tank.
“I don’t want to use the phrase I’ve heard a lot of people use, that it was the dog that didn’t bark,” she said of AI in the 2024 election. “It was there, but not in the way we expected.”
Campaigns used AI in algorithms to target messages to voters. And while AI-generated memes weren’t realistic enough to be mistaken for the real thing, they felt believable enough to deepen partisan divisions.
A political consultant’s robocall imitating Joe Biden’s voice could have deterred voters from turning out for the New Hampshire primary had it not been exposed quickly. Foreign actors also used AI tools to create and automate fake online profiles and websites that spread disinformation to U.S. audiences.
Even if AI ultimately did not influence the outcome of the election, the technology made its way into politics and contributed to an environment in which American voters could not be confident that what they were seeing was true. This move is one reason why some in the AI industry want regulations that establish guidelines.
“This is welcome news because President Trump and his team have said they don’t want to stifle this technology, they want to support its development,” said Craig Albright, senior vice president and top lobbyist for the Software Alliance, an industry association whose members include OpenAI, Oracle, and IBM. “We believe that passing a national law setting the rules of the road will have a positive impact on the market development of this technology.”
Similar claims were made by AI safety advocates at a recent conference in San Francisco, said Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University.
“By putting in literal guardrails, lanes, and road rules, we’ve got a car that can go much faster,” said Venkatasubramanian, a former Biden administration official who helped craft the White House principles for its approach to AI.
Rob Weissman, co-president of the advocacy group Public Citizen, said he was not optimistic about the prospects for a federal bill and was concerned about Trump’s promise to rescind Biden’s executive order, which created an initial set of national standards for the industry. His group has advocated for federal regulation of generative AI in elections.
“Safeguards are themselves a way to foster innovation, allowing us to have AI that is useful, safe, doesn’t exclude people, and advances the technology in ways that are in the public interest,” he said.
___
The Associated Press receives support from several private foundations to enhance its coverage of elections and democracy, and from the Omidyar Network to support coverage of artificial intelligence and its impact on society. The AP is solely responsible for all content. Learn more about AP’s Democracy Initiative and find a list of supporters and funded coverage areas at AP.org.