Google I/O At Google I/O this week, the Chocolate Factory made its case for AI, touting benchmark-topping machine learning models, developer tools, and some promising products.
How it will recoup the billions it's spending to build out its AI infrastructure was left to the imagination.
There were some bright spots. Google Meet's real-time speech translation is genuinely impressive. It's the Star Trek vision of the future many of us want, in contrast to the misery peddled by social media rage bait and surveillance capitalism.
Google Beam, a 3D conferencing system, at least has novelty appeal. Google pitches it as a "new platform designed to create more meaningful connections," which sounds a lot like Facebook before that brand picked up so much baggage it decamped to Meta. Suffice to say, personal connections tend to be more meaningful when they're not mediated by technology.
However, much of the show focused on promoting the use of Google's Gemini model family. It's no coincidence that the biggest boosters of AI developer services (AWS, Google, Microsoft) happen to run cloud platforms.
It could be a tough sell for enthusiasts and entrepreneurs trying to keep their businesses in the black. Estimating the cost of running an AI app depends on the project's specifications and requirements, but it's not trivial, especially compared to running an open source model locally.
Even Google's AI subscription fees run up to $250/month for all the latest and greatest features, plus the recommended Premium Developer Program membership ($299/year, introduced last November).
Companies can and do pay up. According to a CloudZero report on the state of AI costs, shared with The Register in 2025, businesses are increasing their AI spending. Based on a survey of around 500 software professionals, average monthly AI spend among respondents' companies rose 36 percent, from $62,964 in 2024 to $85,521 in 2025.
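Those survey figures are internally consistent; a quick back-of-the-envelope check in Python (using only the numbers reported above):

```python
# Percent change in average monthly AI spend, 2024 -> 2025,
# per the CloudZero survey figures quoted above.
spend_2024 = 62_964
spend_2025 = 85_521

pct_change = (spend_2025 - spend_2024) / spend_2024 * 100
print(f"{pct_change:.1f}%")  # ~35.8%, which rounds to the reported 36%
```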
At the same time, only about half (51 percent) of those organizations said they could assess the return on investment of their AI commitments. That's roughly in line with what Gartner found when it looked at the issue last year: 49 percent of survey participants struggled to demonstrate the value of their AI projects.
Maybe agents will save the day
Google, like its peers, is betting that the "agentic era" will help its AI investments pay off. It's a modest business at the moment, but the industry line is that within a few years most internet traffic will consist of software agents interacting with one another. There are reasons for skepticism.
For years, it has been possible to automate online interactions through traditional imperative programming. That's brought us to our current situation: bot traffic surpassed human traffic on the internet for the first time in April 2024, according to Imperva.
However, bot traffic isn't necessarily desirable. Why, for example, should news publishers pay the bandwidth costs of AI crawlers that scrape their sites without generating ad views or subscriber revenue? Adding insult to injury, Google can use that data to create AI-summarized search results, lowering click-through rates and effectively stealing visitors, and the attendant revenue, from publishers.
So in an environment where AI model makers harvest training data adversarially, expecting bots rebranded as AI agents to interoperate without barriers or friction seems optimistic. The fantasy of harmonious, vendor-independent interoperability requires new infrastructure, such as agent name services, to distinguish legitimate agents from malicious ones.
For companies in regulated industries, security concerns have severely restricted how AI gets deployed. The Register recently spoke with executives at AI security companies and with CIOs at banks and healthcare firms about AI pilots that cannot proceed because no one is sure the projects meet compliance requirements.
Those concerns only grow when AI agents are granted the ability to communicate with other agents to negotiate outcomes.
There's definitely a place for automation. But non-deterministic automation is tricky. Does anyone really want an AI model taking unexpected actions to solve a problem when the results could be undesirable? Maybe that's tolerable when the agent is handling software unit tests, or drafting a pull request that can be reviewed; probably not when the agent is acting in the real world. And if the AI only ever performs expected actions, why use AI at all, as opposed to a simple program's decision tree?
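For contrast, here is a minimal sketch of the deterministic alternative the paragraph above alludes to: a plain decision tree routing support tickets. The categories and rules are hypothetical, the point is simply that every input maps to exactly one predictable action.

```python
# A hypothetical ticket router: pure rules, no model, no surprises.
# Every input produces exactly one predictable, auditable action.
def route_ticket(ticket: dict) -> str:
    if ticket.get("priority") == "critical":
        return "page-on-call"
    if "refund" in ticket.get("subject", "").lower():
        return "billing-queue"
    if ticket.get("customer_tier") == "enterprise":
        return "dedicated-support"
    return "general-queue"

print(route_ticket({"priority": "critical"}))                 # page-on-call
print(route_ticket({"subject": "Refund request for order 7"}))  # billing-queue
print(route_ticket({"subject": "Login help"}))                  # general-queue
```

The trade-off is the usual one: this code will never surprise you, but it also can't handle anything its author didn't anticipate, which is precisely the gap agentic AI promises, and threatens, to fill.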
Software agents may not prove as useful as Google and its peers hope. Concerns about competition, security, and cost can't simply be wished away.
Regardless, Google is going to try. The company that pioneered personal data hoarding to fuel its ad business wants its Gemini models to read the data stored in your Google services to personalize AI output.
Here's how Google CEO Sundar Pichai explained it in his blog post: if a friend emails you asking for advice about a road trip you've taken in the past, Gemini can do the legwork of searching your past emails and files in Google Drive, such as an itinerary you created in Google Docs, and draft a reply that sounds like you.
Seriously, if your idea of friendship is sending people AI-drafted replies seasoned with favorite words culled from your Google Drive and Gmail documents, ask Gemini how to become a better person. ®