US President Donald Trump signs a document in the Oval Office on Monday, February 10, 2025, joined by Secretary of Commerce Howard Lutnick. (Official White House photo by Abe McNutt)
The recent decision by U.S. Secretary of Commerce Howard Lutnick to rebrand the U.S. AI Safety Institute (AISI) as the Center for AI Standards and Innovation (CAISI) may look like another act of bureaucratic housekeeping. But this shift in name is no accident. It marks a deeper change in national priorities for AI development.
When it comes to AI governance, language is by no means neutral. How you describe an institution reflects how you understand its purpose. And in this case, the renaming of AISI signals a pivot between two competing visions of AI governance: one emphasizes long-term risk mitigation and public accountability, while the other prioritizes innovation, speed, and global competitiveness.
The original AISI, housed within the National Institute of Standards and Technology (NIST), embodied the first vision. It was founded on two core premises: that “beneficial AI depends on the safety of AI,” and that “AI safety depends on science.” At its creation, AISI outlined a mission to develop standardized metrics for frontier AI, coordinate with global partners on risk mitigation strategies, and advance the science of safety testing and verification.
CAISI’s revised mission reflects a subtle but deliberate shift toward the second vision: accelerationism. As Secretary Lutnick said:
“For too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards.”
If AISI reflected the values of safety advocates, CAISI appears to align with actors like OpenAI and Andreessen Horowitz.
In March, OpenAI submitted a response to a White House request for comment in support of the AI Action Plan. It suggested that AISI be “reimagined” as a single, efficient “front door” to the government. The idea is to streamline engagement between federal agencies and commercial actors, shielding them from a patchwork of state laws. In other words, speed over scrutiny. This laissez-faire approach is also evident in the proposal to suspend state AI laws, which was stripped from the budget reconciliation bill before it advanced in the Senate.
This accelerationist vision has gained traction. But it raises important questions: Who defines CAISI’s “standards”? Whose values shape them? And what becomes of the safety protocols AISI was designed to advance?
From a governance perspective, this shift should concern us. An approach focused on the security and operational aspects of technology is well documented and measurable, but potentially narrow. “Safety,” by contrast, implies a broader systemic commitment: minimizing harm, accounting for long-term risks, and ensuring that new models do not lead to catastrophic threats.
What is even more concerning is that this transition ignores the voices of civil society. We analyzed 10,068 public comments submitted in preparation for the AI Action Plan. While 41% of submissions from large technology companies supported accelerationism, the public overwhelmingly prioritized fairness, accountability, and safety. Nearly 94% of civil society respondents focused on the public interest, responsible AI advocacy, and safety, calling for redress mechanisms and democratic oversight alongside innovation.
If CAISI is to fulfill its mission of serving this country, it must look beyond a single perspective. It must be a platform for pluralism, where national security, public safety, and innovation are equal partners in governance. This means prioritizing transparency in how standards are set, sustaining long-term safety research, and building mechanisms for meaningful participation from academia and the broader public.
Today’s claims about light-touch regulation mask a deregulatory agenda recast as a defense against so-called premature government intrusion. But the real challenge is not too few or too many regulations; it is designing adaptive models of oversight. Alternatives such as AI sandboxes, dynamic governance models, and multi-stakeholder regulatory bodies are already on the table. CAISI, if properly positioned, could serve as a key first node, laying the foundation for a responsive AI governance framework.
Words matter. So do institutions. CAISI’s rebranding is not just optics; it codifies governance intentions. What the administration wants is clear: speed, streamlined approvals, and limited regulatory resistance. But pivoting from safety to security standards will not only accelerate innovation; it will accelerate past accountability.
It falls to the rest of us to demand balance.