

Google Cloud Next 2024: The Conference That Wasn't About The Cloud


At the recently concluded Cloud Next 2024 conference, Google made a slew of announcements emphasizing its commitment to generative AI. Though the conference is usually associated with cloud infrastructure, platform and related services, the theme of generative AI overshadowed everything else. The top executives representing various divisions and organizations within Google Cloud highlighted how generative AI is infused into their products and services.

A standout revelation from Google Cloud Next 2024 is the pivotal role that generative AI now plays within Alphabet Inc., with Gemini taking center stage. This shift underscores Google Cloud's critical position in spearheading the next wave of innovation for the tech giant.

As Gemini integrates deeply into Google's ecosystem, it not only enhances the strategic importance of Google Cloud but also positions it as a driving force behind Alphabet's future growth and technological advancements.

Though advancements were announced for core infrastructure services such as Compute Engine, Kubernetes Engine, and Cloud Storage, the keynotes were used to showcase how Google's generative AI technologies outpace those of competitors. Compared with recent keynotes from other hyperscalers, Gemini-led generative AI stole the show.

Here are five key takeaways for the enterprise:

Consolidating and Streamlining the Gemini Brand

After experimenting with multiple brands and messaging, Google finally brought all its generative AI offerings under one roof: Gemini. This rebranding helps Google simplify its offerings across consumer, developer, and enterprise audiences. Gemini 1.5 Pro, the latest multimodal model, underpins the products and services available through Google Cloud and Google Workspace.

Duet AI, the brand that was launched at the last Cloud Next event, was swiftly replaced by Gemini. Duet AI for developers becomes Gemini Code Assist and Duet AI for Google Cloud becomes Gemini Cloud Assist. Gemini for Google Workspace replaces Duet AI for Google Workspace. So, it’s obvious that Google wants to consolidate and streamline all its generative AI efforts under Gemini.

Google referred to PaLM 2 as the forerunner of Gemini; it powered the original Duet AI experience. With the new branding, the language model, which is the brain, and the user-facing tools share the same name. By consolidating the brand, Google has given its AI a catchy, consumer-friendly name in Gemini, a strategic move to capture the mindshare and recall of diverse target segments, ranging from consumers to enterprises.

AI Agents Are Everywhere

Google is extending its investments in generative AI through customized, vertically integrated AI agents powered by Gemini. Partners and customers will build these agents by combining the intelligence of generative AI models with their private data stored in cloud and on-premises environments. These agents will be able to mimic the functionality and automation offered by Google's first-party agents, such as Gemini Cloud Assist for DevOps or Gemini in Security for DevSecOps.

Google has renamed the Vertex AI Search and Conversation service Vertex AI Agent Builder, which enables customers to build intelligent automation firmly grounded in their own data. It brings the best of internet search, Dialogflow, and Gemini into a single, unified, no-code platform for designing and deploying custom agents.

Agents are Google's answer to Microsoft's Copilots, which deliver similar functionality. Amazon has also built the ability to create custom agents into its generative AI platform, Bedrock.

While Google continues to enhance and offer new agents under the Gemini brand, it’s going to let partners and customers build custom agents that sit next to Gemini-based automation tools.

Overall, the agent strategy is a good move that creates a new ecosystem and eventually a marketplace of intelligent tools that will become available to Google Cloud and Workspace customers.

A Growing Customer Base for Gemini

Google left no stone unturned in making the case that its AI is superior to that of its competitors. This was clearly evident from the number of customer case studies showcased during Thomas Kurian's keynote.

While generative AI is still in its infancy, Google has an impressive lineup of enterprise customers, including Mercedes-Benz, Orange, IHG Hotels & Resorts, Best Buy, and Bayer.

Bayer will develop a radiology platform to help radiologists and other companies develop and deploy AI-first healthcare apps, improving efficiency and diagnosis turnaround time. Mercedes-Benz will work with Google Cloud to improve customer-facing use cases in e-commerce, customer service, and marketing using generative AI. Best Buy is using Gemini to create new and more convenient mechanisms for customers to fix product issues, reschedule deliveries, and more.

The enterprise case studies and testimonials highlighted during the keynote certainly put Google ahead of the game.

Investments in Custom Silicon and AI Hypercomputer

Google announced its AI hypercomputing strategy using accelerated computing hardware based on NVIDIA GPUs and homegrown TPUs. The AI Hypercomputer is a performance-optimized infrastructure stack powered by Google Cloud TPUs, Google Cloud GPUs, Google Cloud Storage and the underlying Jupiter network that provides accelerated training for large-scale, state-of-the-art models.

Google is also investing in Axion, its first custom Arm-based CPU designed for data centers. Axion will be available to customers later this year, promising industry-leading performance and energy efficiency.

Axion processors combine Google's silicon expertise with Arm's highest-performing CPU cores to deliver instances with up to 30% better performance than the fastest general-purpose Arm-based instances available in the cloud today, and up to 50% better performance and 60% better energy efficiency than comparable current-generation x86-based instances. Google has already begun deploying services such as Bigtable, Spanner, BigQuery, Blobstore, Pub/Sub, Google Earth Engine, and the YouTube Ads platform on current-generation Arm-based servers.

Google has also invested in Titanium, its own data processing unit, which offloads routine network, storage, and security operations to a dedicated processor. The combination of Axion and Titanium promises strong scale, performance, and cost efficiency for customers.

Axion and Titanium chips are similar to what Amazon has done with the AWS Nitro System and the Graviton CPU. Microsoft is following a similar strategy with Azure Boost and Azure Cobalt processors.

Advancing the Core Infrastructure with Storage and Compute

Though the keynote gave little airtime to the advancements in core compute, network, and storage infrastructure that are critical to cloud computing, Google has significantly improved its cloud infrastructure.

Google continues to emphasize workload-optimized infrastructure through enhancements to Compute Engine virtual machines. Google is the first public cloud to offer VMs powered by 5th Generation Intel Xeon processors, which underpin the C4 and N4 families of general-purpose virtual machines. Enabled by Titanium, these instances are designed to support general-purpose workloads with a balance of high performance, flexibility, and cost.

Introduced last year, Hyperdisk decouples storage from compute, delivering higher throughput. Hyperdisk Storage Pools with Advanced Capacity were announced at Next. Typically, customers manage block storage capacity and utilization in the cloud on a disk-by-disk basis, which is complex, labor-intensive, error-prone and frequently results in underutilized resources. With Hyperdisk Storage Pools, customers can purchase and manage block storage capacity in a pool that's shared across workloads. These pools thinly provision individual volumes, which consume capacity only when data is actually written to disk, and take advantage of data reduction techniques such as compression and deduplication.
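Thin provisioning is what makes the pooled model pay off: volumes draw physical capacity only as data is written, so the sum of provisioned volume sizes can safely exceed the pool's physical capacity. A minimal sketch of that accounting follows; the class, method names, and numbers are hypothetical illustrations, not a Google Cloud API.

```python
# Illustrative sketch of thin-provisioned pool accounting (hypothetical, not
# a Google Cloud API). Provisioning a volume reserves no physical capacity;
# only writes consume space from the shared pool.

class StoragePool:
    def __init__(self, physical_capacity_gib):
        self.physical_capacity_gib = physical_capacity_gib
        self.volumes = {}   # volume name -> provisioned size (GiB)
        self.written = {}   # volume name -> GiB actually written

    def provision(self, name, size_gib):
        # Thin provisioning: creation consumes no physical capacity.
        self.volumes[name] = size_gib
        self.written[name] = 0

    def write(self, name, gib):
        # Physical capacity is consumed only when data lands on disk.
        if self.used() + gib > self.physical_capacity_gib:
            raise RuntimeError("pool exhausted; grow the pool")
        self.written[name] += gib

    def used(self):
        return sum(self.written.values())

pool = StoragePool(physical_capacity_gib=10_000)
pool.provision("db-volume", 8_000)
pool.provision("logs-volume", 8_000)  # 16 TiB provisioned against a 10 TiB pool
pool.write("db-volume", 2_000)
print(pool.used())  # only 2000 GiB physically consumed
```

The over-commit in the example (16 TiB of volumes on a 10 TiB pool) is the point: capacity planning moves from per-disk guesses to one shared pool watermark.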

Google Kubernetes Engine, the managed Kubernetes offering, has been optimized for hosting foundation models and LLMs. Customers can add a secondary disk to the nodes that’s preloaded with models, which will significantly speed up the serving process.
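In practice this is configured at node-pool creation time via GKE's secondary boot disk feature. A rough sketch is below; the cluster name, zone, machine type, disk image path, and exact flag syntax are all placeholders or assumptions, so check the current gcloud reference before use.

```shell
# Sketch only: create a GKE node pool whose nodes attach a secondary boot
# disk built from a disk image preloaded with model weights. All names and
# flag values here are placeholders; verify syntax against the gcloud docs.
gcloud container node-pools create inference-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --machine-type=g2-standard-8 \
  --secondary-boot-disk=disk-image=projects/my-project/global/images/llm-weights-image
```

Because the weights are already on a local disk image when the node boots, serving pods skip the slow step of pulling multi-gigabyte model files over the network at startup.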

Summary

Cloud Next 24 revealed a Google that is fiercely pursuing AI dominance, backed up by significant investments across a broad technology stack and a sharply focused, unified branding strategy based on Gemini. Google's commitment to generative AI was unmistakable, positioning it as the cornerstone of its competitive edge in the evolving technology landscape.

Follow me on Twitter or LinkedIn. Check out my website.