As the 2025 legislative calendar begins, lawmakers at the state and federal level are preparing to introduce a slew of bills targeting artificial intelligence.
The First Amendment applies to artificial intelligence just as it applies to other expressive technologies. Like the printing press, the camera, and the internet, AI can be used as an expressive tool, a technological advancement that helps us communicate with one another and generate knowledge. As FIRE Executive Vice President Nico Perrino argued in the Los Angeles Times last month, “We shouldn’t be rewriting the Constitution with every new communications technology.”
We urge legislators to consider whether existing laws, operating within the limited and well-defined exceptions to the First Amendment’s broad protections, already guard against the harms they may seek to combat next year. In most cases, they will find that existing law does. For example, laws against fraud, forgery, discrimination, and defamation apply regardless of how the illegal act is ultimately carried out. Responsibility for tortious acts properly lies with the perpetrators of those acts, not with the information and communication tools they use.
Some legislative efforts to regulate the use of AI raise well-known First Amendment issues. For example, proposed regulations that would require “watermarks” on AI-created artwork or disclaimers on AI-generated content would violate the First Amendment by compelling speech. FIRE has opposed these efforts to regulate the use of AI, just as we have fought government attempts to compel speech in schools, on campuses, and online, and we will continue to do so.
Rather than imposing mandatory disclaimers or content-based restrictions on AI-generated expression, legislators should remember that laws protecting against defamation, fraud, and other illegal activity already exist.
Lawmakers have also sought to regulate, and even criminalize, the use of AI-generated content in election-related communications. But courts are wary of legislative attempts to regulate AI output where political speech is involved. For example, after a First Amendment challenge by a satirist who uses AI to generate parodies of politicians, a federal district court recently enjoined a California law targeting “deepfakes” that restricted election-related content deemed “grossly deceptive.”
Content-based restrictions like the California law must withstand rigorous judicial scrutiny, no matter how the expression is created. As the federal court noted, the Constitution protects the public’s right to criticize the government and government officials, and that protection applies fully in a new technological era where media may be digitally altered. So while lawmakers may have “a well-founded fear of a digitally manipulated media environment,” the court found that this fear does not give them license to disrupt, without limit, the longstanding tradition of parody and satire protected by the First Amendment.
For more on these issues, see FIRE’s publication “Artificial Intelligence, Free Speech, and the First Amendment,” which answers frequently asked questions about artificial intelligence and analyzes its potential impact on free speech.
Other bills threaten the First Amendment by imposing direct burdens on developers of AI models. For example, in the coming months, Texas lawmakers are expected to consider the Texas Responsible AI Governance Act (TRAIGA), a bill targeting “algorithmic discrimination,” including by private actors. The bill would grant broad regulatory power to a newly created state Artificial Intelligence Council and impose significant compliance costs. TRAIGA would also require developers to publish periodic risk reports. Because that requirement reaches the expressive output of AI models and the use of AI as a tool to facilitate protected expression, it raises First Amendment concerns. Last year, a federal court ruled that similar reporting requirements imposed on social media platforms were likely unconstitutional.
TRAIGA’s provisions would encourage AI developers to handicap their models to avoid any possibility of producing recommendations that some may view as discriminatory, or simply offensive, even if doing so compromises the usefulness or functionality of the model. Addressing unlawful discrimination is an important legislative objective, and legislators have a duty to ensure that we all benefit from the equal protection of the law. At the same time, from our decades of work defending the rights of students and faculty, FIRE is well aware of the chilling effect on speech that results from expansive or arbitrary interpretations of anti-discrimination law on campus. We oppose ill-conceived legislative efforts that would build a similar chill into artificial intelligence systems.
The far-reaching scope of bills like TRAIGA threatens the expressive rights of the people who build and use AI models. Rather than preemptively holding developers broadly liable for the potential outputs of their AI models, lawmakers should focus their efforts on those who would use AI and other communication tools for illicit purposes, and should consider the existing legal remedies already available to victims of discrimination.
FIRE will have more to say in the coming weeks and months about the First Amendment threats posed by proposed AI legislation.