Google’s Rapid AI Expansion with Gemini Models Raises Transparency Concerns

In the ongoing AI arms race, Google is accelerating at an extraordinary pace. With Gemini 2.5 Pro, Google is pushing its AI capabilities to new heights, redefining industry standards for coding and reasoning. Yet amid the furious pace of these releases, one key element has fallen behind: transparency.

Two years after OpenAI shook up the industry with the launch of ChatGPT, Google has ramped up its AI development efforts. In March, it announced Gemini 2.5 Pro, an advanced AI reasoning model that leads the industry on coding and mathematical benchmarks. The release came only three months after Gemini 2.0 Flash, which was itself the best model available at the time.

Tulsee Doshi, Director and Head of Product for Gemini at Google, acknowledged the company's rapid iteration cycle and said the effort is meant to keep pace with the fast-moving AI landscape. She said,

“We’re still trying to figure out what the right way to put these models out is—what the right way is to get feedback.”

However, the increasing pace has undoubtedly come at a cost: transparency. Google has yet to publish safety reports, often called model or system cards, for the recently released Gemini 2.5 Pro and Gemini 2.0 Flash models. These reports, standard among AI labs such as OpenAI, Anthropic, and Meta, inform the public about model performance, safety testing, and the risks associated with using these models.

Lack of Transparency

Ironically, Google was the first company to champion model cards for responsible AI development. A 2019 research paper from Google proposed them as a framework for responsible machine learning. Yet despite that early commitment, it has been over a year since Google last published a model card, the most recent being for Gemini 1.5 Pro.

Doshi defended Google’s decision not to release a model card for Gemini 2.5 Pro, saying the model is still considered “experimental.” She emphasized that the company has conducted internal safety testing and adversarial red teaming, and added that a complete model card will be published once the model is generally available to the public.

In a separate statement, a Google spokesperson said safety remains a “top priority” for the company and that more documentation will be made available soon. However, Gemini 2.0 Flash, which is already generally available, still lacks a model card.

Significance of AI Safety Reports

System and model cards give the public an important, sometimes unflattering look at an AI model. OpenAI’s report on its o1 reasoning model, for example, revealed that the system showed a capacity to “scheme” against human intentions, a troubling finding. By withholding such reports, Google risks eroding trust in its AI systems and fostering an environment in which independent researchers can no longer properly assess the models’ abilities and limitations.

Moreover, Google previously told the U.S. government and other regulators that it would issue a safety report for any AI model deemed significant. That commitment to transparency was part of a larger strategy for promoting responsible AI governance. Judging from its current practice, Google now appears to be opting for speed over safety disclosures.

Regulatory Challenges and Industry Standards

Safety reporting requirements have been a challenge for the broader AI industry, especially in the U.S., where regulation has faced resistance. California’s SB 1047, a bill that aimed to establish safety reporting standards for AI models, was vetoed after fierce opposition from tech companies. The U.S. AI Safety Institute, another effort that would create national-level guidelines for AI safety and related disclosures, does not appear to be receiving continued funding under the Trump administration.

Despite these challenges, industry experts argue that failing to report safety assessments within a reasonable time frame sets a bad precedent. As AI models grow more powerful, their limitations and risks must be openly analyzed and understood by researchers and the public alike. A failure of transparency will only fuel growing distrust of companies whose AI systems can contribute to real harms, whether misinformation, security breaches, or bias.

The lack of timely safety reports is a serious flaw in Google’s race to push AI boundaries. If the company is genuinely committed to responsible AI development, it must ensure that transparency keeps pace with its releases. The industry has come to expect public disclosures that clearly assess AI capabilities and risks. Once a champion of this practice, Google must uphold the very standards it helped set, or risk losing more than just the race.


Munazza Shaheen
Munazza Shaheen is an AI and technology researcher with a deep interest in machine learning, automation, and emerging tech trends. Her work focuses on exploring the impact of artificial intelligence on industries, ethical AI development, and future innovations. She actively follows advancements in deep learning, robotics, and AI-driven solutions, contributing insights into how technology is shaping the world.

