President Trump’s AI adviser, David Sacks, has accused Anthropic of weaponizing regulation to the detriment of startups.
White House “AI czar” and venture capitalist David Sacks accused AI company Anthropic of running a “sophisticated regulatory capture strategy based on fear-mongering.” In a post on X, Sacks claimed that the company is “primarily responsible for the state regulatory frenzy that is damaging the startup ecosystem.”
The dispute centers on Anthropic’s stance on AI regulation in the United States. According to Bloomberg, Anthropic co-founder Jack Clark called Sacks’ attack “disconcerting.” Clark said the company is “very aligned” with the administration in “many areas” but has “slightly different views” on some, and that Anthropic has expressed those views “in a substantive and fact-based manner.” He found it “very strange” that other countries had not acted similarly, adding that this “says more than anything about where we are in this country’s history.”
Anthropic Supports California Transparency Law SB 53
The controversy appears to stem from Anthropic’s support for California Senate Bill 53, a landmark law that imposes transparency requirements and whistleblower protections on AI developers. The bill was signed by Gov. Gavin Newsom at the end of September and is scheduled to go into effect in 2026. Bloomberg notes that Anthropic is the only major AI company to publicly support the bill; OpenAI said only after the bill was passed that it could “coexist” with it.
Clark said Anthropic supported SB 53 only because federal lawmakers were unable to make progress at the national level. He said uniform federal standards would be preferable, but argued that “the federal government does not have a track record of moving large policy packages particularly quickly.” Anthropic has already proposed its own transparency framework as a potential model for federal law. On X, Clark explained that simple rules with thresholds that protect startups can benefit “the entire ecosystem.”
“It’s the same as having a label on the side of the AI product you use. Everything else has a label, from food to medicine to aircraft. Why not AI?” he added. Clark said the goal was to encourage responsible innovation while avoiding the kind of “reactive and restrictive regulatory approach” that “unfortunately has occurred in the nuclear industry.”