If the means mattered more than the ends, we would still be operating under the Articles of Confederation. The Founders understood that this instrument, the very structure of government, exists to serve the ends of freedom and prosperity. When the means no longer served those ends, they experimented with yet another design for their government, one they expected to be the last.
The age of AI demands that we ask whether our instruments still serve their ends, and in particular whether they continue to promote individual freedom and the prosperity of the collective. Both of these goals were front of mind for early Americans. They called for a Bill of Rights to protect the former and identified the latter, the general welfare, as an animating purpose of government. Both goals are now challenged by constitutional doctrines that do not align with, or even undermine, the development of AI. A full review of these doctrines could (and likely will) fill a book. For now, I will raise two.
The first is the principle of extraterritoriality. You may never have heard of it, but it is a central part of our federal system: one state cannot govern another. A state's legal authority ends at its border. States across the ideological spectrum are weighing laws that would significantly alter the behavior and capabilities of frontier models. Whether intentionally or not, many of these laws threaten to project one state's law (and values) onto others. The Supreme Court's muddled case law on this topic leaves us uncertain about exactly how extraterritoriality principles will map onto the rush to regulate AI.
Unclear law hampers innovation, a driving force of the general welfare. As things stand, the lack of a bright line marking where a state's power to regulate AI begins and ends has invited state legislators across the country to compete to devise the most comprehensive bill. If these bills find their way into law, you can bet your bottom dollar that litigation will follow.
The courts are unlikely to identify those lines anytime soon. In the short term, they will probably develop unclear and contradictory tests, as demonstrated by conflicting judicial decisions on the fair use doctrine and AI training data. The long term offers little more hope. Regulatory uncertainty arising from multiple laws with extraterritorial effects can keep innovators from going all in on new ideas. Those small decisions add up. The sum is lost innovation and, ultimately, lost prosperity.
Furthermore, certain states effectively imposing their views on others raises individual freedom concerns. Extraterritoriality is part of the Constitution's commitment to horizontal federalism, which demands equality among the states and, in most cases, prohibits discrimination against non-residents. Eroding this important structural safeguard diminishes one of the main ways the Founders tried to protect Americans from once again living under a faraway power.
The second is the right to privacy. You will not find such a right in the Constitution's text; instead, courts have located it in the "penumbras" of other clauses. This general and ambiguous right has spawned a broad range of privacy laws and norms that largely equate privacy with constraints on data sharing. At a high level, this approach to privacy produces siloed datasets that may hold data in different formats and at different levels of detail. In many contexts, this promotes individual freedom by reducing the likelihood of bad actors gaining access to sensitive information. But today, it is the aggregation of vast, high-quality data that makes it possible to develop incredibly sophisticated AI tools. Without such data, some of the most promising uses of AI, such as in medicine and education, may never materialize. General welfare concerns therefore place a major strain on approaches to privacy that reduce data access.
It is past time to reconsider and clarify these doctrines. And that is just a small portion of the work that needs to be done to ensure that individual freedom and the general welfare are pursued and realized during this chaotic period.
Kevin Frazier is the AI Innovation and Law Fellow at Texas Law and the author of the Appleseed AI Substack.