Digital Economy

Google Cloud Next 25: from inference chips to updated AI models

Kicking off the annual event at which Google Cloud announces the most important innovations it will bring to market in the coming months.

by Giancarlo Calzetta

5 min read


LAS VEGAS - Google Cloud Next, the annual event at which Google Cloud announces the most important innovations it will put on the market in the coming months, is about to begin, and 20 minutes before CEO Thomas Kurian's speech, musical entertainment created with AI tools gets underway: the music and video were produced with gen-AI tools, including the new Veo 2 video-creation system. The editing is still human, as is the direction given to tools and frames, but the individual pieces that are fused together are mostly the result of inference. It is a perfect expression of AI that creates super-powered professionals, capable of doing days of work in a matter of hours. After all, the thread running through the entire event is precisely that of AI that 'really serves': a myriad of tools capable of changing, for the better, the way people work.

So we start with some numbers. Kurian tells us that Google has 2 million miles of cables running around the world, over 4 million developers using Gemini to code, a steady acceleration in the use of Vertex AI, the gen-AI suite for professionals, which has grown twenty-fold in a year, and over 500 success stories of Google Cloud technology applied to businesses. It is an introduction that anticipates the announcements made by Sundar Pichai (a curious twist of fate, that surname ending in 'AI'), CEO of Google and Alphabet.


And the first is an announcement you don't expect: all those miles of cable seem to Pichai almost wasted running 'only' Google's infrastructure, which is so powerful that it evidently has enough capacity to be made available to external customers. Hence the launch of the Cloud Wide Area Network (Cloud WAN), which makes Google's global private network available to companies. This infrastructure offers up to 40 per cent higher performance than the public Internet, providing fast and reliable connectivity for business operations on a global scale. Access to such a large network lets businesses improve operational efficiency and offer more responsive services to their customers. Because in the end, that is what it is all about: AI opens up markets and improves efficiency, but it needs infrastructure, and Google says it can provide it at a 40 per cent saving over traditional infrastructure.

And speaking of infrastructure, we come to the big news: Ironwood, Google's seventh-generation artificial intelligence processor, designed to accelerate AI applications. The chip is optimised for real-time processing, such as that required by chatbots and virtual assistants, offering twice the performance per unit of energy of its predecessor, Trillium. Ironwood represents a step forward in both energy efficiency and computing power. We don't yet know when it will come to market, but now that Pichai has prepared the ground on the infrastructure side, we come to the announcements more closely related to AI itself.

Gemini 2.5: advanced models for different needs

As part of its family of AI models, Google has announced Gemini 2.5 Flash, which completes the offering following the introduction of Gemini 2.5 Pro earlier this year. Gemini 2.5 Pro is designed to tackle complex tasks requiring in-depth analysis, such as interpreting legal or medical documents, while Gemini 2.5 Flash is optimised for applications requiring low latency and high efficiency, such as responsive virtual assistants and real-time synthesis tools. The choice gives companies the flexibility to adapt to their specific operational needs, as the Flash version costs significantly less per use than the Pro version.
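The Pro/Flash split described above amounts to a routing decision: heavy analysis goes to the more capable tier, interactive traffic to the cheaper, faster one. A minimal sketch of that logic, where the model identifiers and the `Task` fields are illustrative assumptions rather than anything from Google's SDK:

```python
# Hypothetical router that picks a Gemini 2.5 tier by task profile.
# Model names are assumed identifiers for illustration; check the
# Vertex AI model catalogue for the actual ones.
from dataclasses import dataclass


@dataclass
class Task:
    needs_deep_reasoning: bool  # e.g. legal or medical document analysis
    latency_sensitive: bool     # e.g. a responsive virtual assistant


def pick_model(task: Task) -> str:
    """Route in-depth analysis to Pro, interactive traffic to the cheaper Flash."""
    if task.needs_deep_reasoning and not task.latency_sensitive:
        return "gemini-2.5-pro"
    return "gemini-2.5-flash"


print(pick_model(Task(needs_deep_reasoning=True, latency_sensitive=False)))
print(pick_model(Task(needs_deep_reasoning=False, latency_sensitive=True)))
```

In practice a router like this would sit in front of the model-calling code, so the cost/latency tradeoff is made once, in one place, rather than scattered across the application.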

Upgrades in Vertex AI: enhanced tools for developers


Vertex AI, Google's platform for creating and managing AI applications, received significant updates. It now offers access to more than 200 ready-to-use AI models (not only from Google, but a broad selection of those available on the market) and introduces new features that simplify the integration of geographic data, performance monitoring and model optimisation. Furthermore, thanks to the Live API, it will be possible to stream audio and video in real time, opening the door to innovative and immersive applications.

Worthy of note is the fact that all the multimedia-generation functions have made great strides, and some are new and already impressive. Let us start with Lyria, a tool that creates music from a textual description. This is the tool used to compose the music played before the event, and the process of creating backing tracks, clips and jingles is truly remarkable in both its simplicity and the quality of the result. Alongside it is Veo 2, the new version of the video maker. It too showed off its capabilities in the pre-event moments, but it needs more testing to see how it handles the traditional problems of consistency between different videos, even though it boasts functions such as 'start and end position', which automatically animate a subject inserted 'externally'. Chirp 3, finally, is a system that produces customised speech using 10-second clips as a basis, and it can also do the reverse job, transcribing a conversation and recognising the different speakers engaged in complex dialogues.

Agentic AI is still riding the crest of the wave


Obviously, many of these innovations point to a future of multi-agent systems, and Google Cloud is promoting an open and collaborative ecosystem with the new Agent Development Kit (ADK), which simplifies the process of creating multi-agent AI systems. The new Agent2Agent protocol will allow AI agents to communicate with each other regardless of the technology used, while Agent Garden provides tools and examples to quickly integrate agents with existing enterprise data. Agent2Agent, in particular, is an important piece because it represents an attempt to establish a communication standard between agents created by different companies. If it catches on, it will greatly speed up the spread of AI.
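The core idea behind a protocol like Agent2Agent is that agents built on different stacks only need to agree on a common message envelope. The sketch below illustrates that idea with a plain JSON envelope; the field names here are assumptions made up for illustration, not the actual Agent2Agent wire format:

```python
# Illustrative sketch of vendor-neutral agent-to-agent messaging.
# The envelope fields ("type", "from", "to", "payload") are assumed
# for this example and do not reproduce the real A2A specification.
import json


def make_task_message(sender: str, recipient: str, task: str) -> str:
    """Serialise a task request as a JSON envelope any agent can parse."""
    return json.dumps({
        "type": "task_request",
        "from": sender,
        "to": recipient,
        "payload": {"task": task},
    })


def handle_message(raw: str) -> str:
    """A receiving agent built on a different stack only needs to speak JSON."""
    msg = json.loads(raw)
    if msg["type"] == "task_request":
        return json.dumps({
            "type": "task_result",
            "from": msg["to"],
            "to": msg["from"],
            "payload": {"status": "done", "task": msg["payload"]["task"]},
        })
    return json.dumps({"type": "error", "payload": {"reason": "unknown type"}})


reply = handle_message(make_task_message("agent-a", "agent-b", "summarise Q1 report"))
print(json.loads(reply)["payload"]["status"])
```

The point of standardising the envelope rather than the agents themselves is exactly what the article describes: two agents from different vendors can cooperate without either knowing how the other is implemented.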

Google Agentspace and Workspace: AI for every employee


Corporate productivity is one of the fields where the application of AI finds fertile ground, and so Google is preparing a very complete ecosystem to maximise the ease of use and effectiveness of the new tools. Agentspace, for instance, is designed to bring artificial intelligence to the heart of employees' daily activities, facilitating access to corporate information and enabling the creation of customised agents without writing code. Among the new features introduced are search integrated directly into Chrome and specific tools for idea generation and in-depth research, which simplify decision-making and creative work.

The Google Workspace suite also receives important new functions, such as 'Help me Analyze' for Google Sheets and 'Audio Overview' for Docs, which make it possible to automate complex tasks such as data analysis and the creation of audio content.

Copyright reserved ©
