Google released a preliminary document this week for its latest Gemini 2.5 Pro reasoning model, but the move comes weeks after the model became widely available and has drawn sharp criticism from AI governance specialists. The document, known as a "model card," appeared online around April 16th, but experts argued that it lacked important safety details, suggesting Google may have fallen short of transparency promises made to governments and international organizations.
The controversy stems from the timeline: Gemini 2.5 Pro began preview rollouts for subscribers on March 25th, with the specific experimental version, gemini-2.5-pro-exp-03-25, appearing in Google Cloud documentation on March 28th.
However, the model card detailing safety evaluations and limitations only surfaced more than two weeks after this widespread public access began.
Kevin Bankston, a senior advisor at the Center for Democracy & Technology, described the six-page document in a post on the social platform X as "meager," saying it tells "a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market."
Missing details and unfulfilled pledges
The main concern Bankston raised is the absence of detailed results from important safety assessments, such as "red teaming" exercises designed to discover whether the AI can be prompted into generating harmful content, such as instructions for creating bioweapons.
He suggested the timing and omissions could mean that Google either did not complete its safety testing before releasing its most powerful model, still has not completed that testing, or has adopted a new policy of withholding comprehensive results until a model is deemed generally available.
Other experts, including Peter Wildeford and Thomas Woodside, have highlighted the lack of specific results or detailed references for the assessments conducted under Google's Frontier Safety Framework (FSF), even though the card references the FSF process.
This approach appears to contradict several public commitments Google has made regarding AI safety and transparency. These include pledges made at a White House meeting in July 2023 to publish detailed reports for powerful new models, the G7 AI code of conduct agreed in October 2023, and commitments made at the Seoul AI Safety Summit in May 2024.
Thomas Woodside of the Secure AI Project also noted that Google's last dedicated publication on dangerous capability testing dates back to June 2024, questioning the company's commitment to regular updates. Google also did not confirm whether Gemini 2.5 Pro was submitted to the US or UK AI Safety Institutes for external evaluation before its preview release.
Google's position and the model card's contents
The full technical report is still pending, but the released model card offers some insight. Google outlines its policy in the card: "A detailed technical report will be published once per model family release, and the next technical report will come after the 2.5 series is made generally available."
It adds that a separate report on "dangerous capability evaluations" will follow "at a regular cadence." Google previously said the latest Gemini underwent "pre-release testing, including internal development and assurance evaluations," conducted before the model was released.
The published card confirms that Gemini 2.5 Pro is built on a sparse mixture-of-experts (MoE) transformer architecture, a design that aims for efficiency by selectively activating only parts of the model for each input. The card also describes training on diverse multimodal data, the model's 1-million-token input context window and 64K-token output limit, and safety filtering aligned with Google's AI Principles.
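The card does not disclose how Gemini's routing actually works, but a minimal sketch of the general sparse-MoE idea, written here in PyTorch with made-up expert counts and dimensions, illustrates how a router can activate only a few expert sub-networks per token:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy mixture-of-experts layer: a learned router picks the top-k
    experts per token, so only a fraction of parameters run per input.
    Sizes here are illustrative, not Gemini's."""

    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Route each token to its top-k experts.
        gates = F.softmax(self.router(x), dim=-1)       # (tokens, n_experts)
        top_w, top_idx = gates.topk(self.k, dim=-1)     # keep k best experts
        top_w = top_w / top_w.sum(dim=-1, keepdim=True) # renormalize gate weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Only k of n_experts expert MLPs run per token: the efficiency idea
# behind sparse MoE transformers.
layer = MoELayer(d_model=64)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Production MoE systems batch tokens per expert rather than looping, but the routing principle is the same.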
The card includes performance benchmarks (run with the gemini-2.5-pro-exp-03-25 version) showing competitive results as of March 2025. It acknowledges limitations such as hallucination, sets a knowledge cutoff of January 2025, and outlines safety measures, including internal Responsibility and Safety Council (RSC) reviews.
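For developers, the experimental build cited in the benchmarks is addressed by its model ID. A minimal sketch, assuming the google-generativeai Python client and that this model ID is enabled for your API key, shows how a per-request output cap relates to the 64K-token output ceiling mentioned in the card:

```python
# Minimal sketch: querying the experimental build named in the model card.
# Assumes the google-generativeai client (pip install google-generativeai)
# and that gemini-2.5-pro-exp-03-25 is available to your API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")
response = model.generate_content(
    "List three limitations commonly disclosed in model cards.",
    # The card cites a 64K-token output ceiling; a request can set a lower cap.
    generation_config=genai.GenerationConfig(max_output_tokens=1024),
)
print(response.text)
```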
An industry that puts competition first?
The situation reflects wider industry tensions. Sandra Wachter, a professor at the Oxford Internet Institute, previously told Fortune: "If this was a car or a plane, we wouldn't say: let's just bring this to market as quickly as possible and we will look into the safety aspects later."
Google is not alone in facing such criticism: Meta's Llama 4 report has been faulted for a similar lack of detail, and OpenAI recently revised its safety framework to allow adjustments based on competitors' actions. Bankston warned that if companies fail to meet even their basic voluntary safety commitments, "it is incumbent on lawmakers to develop and enforce clear transparency requirements." Meanwhile, Google continued its rollout pace, launching a preview of Gemini 2.5 Flash on April 17th.