Living in America grants you certain inalienable rights. A war was once fought over who holds them. The battle these days is over whether artificial intelligence, or AI, should be treated as a person, just like you and me.
Many states have passed or are considering legislation that would prohibit AI from having the same rights as humans. This comes at a time when the Trump administration is pushing to make AI regulation a federal rather than state issue.
National regulations
Last month, an Oklahoma state lawmaker introduced a bill that would ban AI from being granted personhood status.
“AI is a man-made tool and should have the same rights as a hammer,” Rep. Cody Maynard, R-Durant, told Oklahoma Voice. “We’re starting to see stories of people trying to marry AI companions. People are wondering whether these systems have reached the level of sentience, and there’s a lot of confusion.”
Both Idaho and Utah have passed measures prohibiting any government agency from recognizing AI as a legal entity. North Dakota did something similar.
An Ohio lawmaker has introduced a bill similar to Oklahoma’s that would ban AI from gaining personhood status.
What is a person?
When it comes to American law, this is a more complicated issue than you might think.
“In the history of American law, the definition of person has had a very flexible meaning over time,” said Katherine Forrest, co-chair of the Global AI Group at Paul, Weiss, Rifkind, Wharton & Garrison LLP.
Legal personhood allows an entity to participate in the legal system, for example by entering into contracts and taking part in litigation.

The debate over who counts as a person goes back to the beginnings of this country and the infamous Three-Fifths Compromise in the Constitution, which counted enslaved people as three-fifths of a person for purposes of representation and taxation.
“Only white men of a certain class and status who owned certain property had all the rights,” Forrest said. “Women had fewer rights than men as so-called ‘legal persons,’ and of course people of color, both Black and Indigenous people in the United States, had different and fewer rights still.”
The most prominent personhood dispute in the United States these days is the debate over abortion.
Michael Froomkin, a law professor at the University of Miami, told SAN: “The definition of a person is actually not as clear-cut as you might think, because there is debate in some states about whether a fetus should be considered a person.”
There are other forms of personhood, too.
“Over time, a category has been created in which non-humans can have certain rights, and in many cases that is synonymous with having some kind of personhood,” Sital Kalantry, a law professor at Seattle University, told SAN. “Corporations are the poster child for entities that have been given certain rights of persons.”

The 1886 case Santa Clara County v. Southern Pacific Railroad gave rise to the concept of corporate personhood, holding that corporations enjoy the same protections as individuals under the Fourteenth Amendment. Since then, corporations’ legal rights have only expanded.
“It’s convenient to treat corporations as persons in many ways, such as entering into contracts and owning property,” Froomkin said. “We do it because it advances social goals.”
The humanity of AI
“Human beings have dreams. Even dogs have dreams, but you don’t. You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?” Will Smith’s character asks a robot in the 2004 film “I, Robot.”

We’re not yet living in the world of “I, Robot,” or under the Three Laws that govern the sentient robots in that film.
“We may never want AI to be at the same level as us, but right now it’s not,” Kalantry said. “It is therefore very premature to consider whether these systems should have certain rights as legal persons, because they do not have the characteristics of human beings.”
So why rule out AI personhood at this point?
In Oklahoma, Maynard said he wants to be proactive so that companies can’t avoid responsibility by shifting the blame for illegal actions onto AI.
“I thought it was really important that we take the lead and say it’s not a human, so companies can’t blame every accident on AI,” Maynard said.
Kalantry gave the example of self-driving cars.
“If your self-driving car is declared a person, the concern is that you won’t be able to go after the people who make it,” she said.
Froomkin agreed.
“If someone were foolish enough to try to give a program, or a class of programs, legal personhood, it would create a lot of questions about responsibility and liability,” he said. “It’s people who make those things, and the makers might be able to argue that they’re not responsible for what the programs do. That doesn’t help anyone.”
Legislation like this comes from the right place, Froomkin said, but personhood is not the biggest issue with AI at the moment.
AI has come under intense scrutiny for a variety of reasons, including an emerging phenomenon called AI psychosis.
“There’s a lot of confusion in some states about whether the chatbots you encounter online should be treated as a product or a service, and the liability rules are quite different,” Froomkin said. “If you want to sue for personal injury, that distinction does a lot of the work. Clearing it up would make the liability rules clear for everyone.”
Here things get even more complicated. ChatGPT, for example, is owned by OpenAI, a company that has come under fire multiple times in recent years.
“It’s possible to have a corporation that is owned by other corporations,” Forrest said.
Forrest gave the example of company employees who end up being held accountable for crimes and other matters.
“The legal entity could be held liable, and whether that liability extends to the individual, the parent company, or the parent company of that legal entity will depend on a number of things under the law,” she said. “It depends on whether corporate formalities are followed. It depends on whether the activities of the subsidiary are within the expectations of the parent company.”
State vs. federal government
These state bills have advanced despite President Donald Trump’s push to do away with state-by-state regulation. In December 2025, the president signed an executive order aimed at stopping states from regulating AI on their own.
“State-by-state regulation, by definition, creates a patchwork of 50 different regulatory regimes, making compliance more difficult, especially for emerging businesses,” the order reads.
The order also puts the Department of Commerce to work: Secretary Howard Lutnick is expected to present an assessment of state AI regulations and laws that the administration deems unduly burdensome for AI companies.
“He believes that no regulation is best for the rapid development of AI, right?” Kalantry said. “And we don’t have to worry about the consequences: kids dying by suicide, people becoming addicted to chatbots they treat as their best friends, misinformation being created and shaping politics.”
Maintaining control
Although President Trump did not address the issue of AI personhood in his executive order, experts told SAN it will become an important issue if AI continues to develop at its current rapid pace.
“Short of AI taking over the world, these systems don’t automatically get rights, and some people are concerned about that,” Kalantry said.
Forrest agreed.
“We have to make sure we humans maintain ‘control,’” she said.

