UPDATED 14:22 EDT / FEBRUARY 27 2020

Google: Our data centers are now twice as energy-efficient as a typical enterprise facility

Google LLC has revealed that, thanks to innovations such as its Tensor Processing Unit artificial intelligence chips, its data centers are twice as energy-efficient as the typical enterprise data center.

Urs Hölzle, Google’s senior vice president of technical infrastructure, shared the milestone in a blog post today.

The announcement was timed to coincide with a new paper published in the academic journal Science about the power consumption of the world’s information technology infrastructure. The paper, co-authored by Stanford professor Jonathan Koomey, examines data centers’ electricity usage from 2010 to 2018.

The key finding is that data centers’ power consumption rose only 6% over the eight-year period even as the amount of computing done in those facilities soared 550%. As a result, data centers account for about 1% of global electricity consumption, roughly the same share as in 2010.
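As a back-of-envelope check on what those figures imply, the sketch below divides the growth in computing output by the growth in energy use. Reading the 550% rise as a factor of roughly 6.5 is an assumption about how the paper’s percentage is expressed; the point either way is an improvement of roughly sixfold.

```python
# Rough check of the efficiency implied by the Science paper's figures (2010-2018).
# Assumption: a "550% increase" is read as a factor of 6.5; a 6% increase as 1.06.
compute_growth = 6.5   # computing done in data centers, relative to 2010
energy_growth = 1.06   # data center electricity use, relative to 2010

efficiency_gain = compute_growth / energy_growth  # compute delivered per unit of energy
print(f"Compute per unit of energy grew roughly {efficiency_gain:.1f}x over the period")
# Prints roughly 6.1x
```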

The efficiency improvement is partly due to many workloads shifting from enterprise data centers to more sophisticated hyperscale facilities operated by cloud providers such as Google. A January report from analyst firm TECHnalysis Research LLC estimated that 30% of all enterprise workloads now run on one of the major public clouds.

“Research has consistently shown that hyperscale (meaning very large) data centers are far more energy efficient than smaller, local servers,” Hölzle wrote in the blog post, referring to research released by the U.S. Department of Energy’s Lawrence Berkeley National Laboratory in 2016.

The executive went on to share new data about just how much more power efficient Google’s data centers are on average. 

“A Google data center is twice as energy efficient as a typical enterprise data center,” Hölzle wrote. “And compared with five years ago, we now deliver around seven times as much computing power with the same amount of electrical power. The average annual power usage effectiveness for our global fleet of data centers in 2019 hit a new record low of 1.10, compared with the industry average of 1.67.”
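Power usage effectiveness, or PUE, is the ratio of a facility’s total power draw to the power that actually reaches the IT equipment, so a perfect score is 1.0. The sketch below, using a hypothetical 1-megawatt server load, shows what the quoted 1.10 and 1.67 figures mean for total consumption:

```python
# PUE = total facility power / IT equipment power.
# The 1.10 and 1.67 figures are the ones quoted in the blog post; the 1 MW IT load is hypothetical.
GOOGLE_PUE = 1.10
INDUSTRY_AVG_PUE = 1.67

it_load_kw = 1_000  # hypothetical 1 MW of servers

google_total_kw = it_load_kw * GOOGLE_PUE        # servers plus cooling, power delivery, etc.
industry_total_kw = it_load_kw * INDUSTRY_AVG_PUE

savings = 1 - google_total_kw / industry_total_kw
print(f"{google_total_kw:.0f} kW vs. {industry_total_kw:.0f} kW for the same servers "
      f"({savings:.0%} less total power)")
# Prints 1100 kW vs. 1670 kW, about 34% less
```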

This energy efficiency is the result of several factors. One is the AI-powered cooling management system Google first detailed in 2018, which according to Hölzle cuts cooling energy use in its facilities by around 30%. The executive also pointed to the internally designed TPUs on which Google runs its AI workloads, which in a 2017 paper were described as up to 80 times more power-efficient than contemporary commercial chips.
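To illustrate how a cooling saving of that scale can feed into the PUE figure cited above, here is a rough sketch; the split of overhead power between cooling and other losses is a hypothetical assumption for illustration, not a figure disclosed by Google.

```python
# Hypothetical illustration of how a ~30% cut in cooling energy lowers PUE.
# The overhead split below (120 kW cooling, 50 kW other losses per 1 MW of IT load) is assumed.
it_load_kw = 1_000
cooling_kw = 120          # assumed cooling overhead before the AI controller
other_overhead_kw = 50    # assumed power distribution, lighting and other losses

pue_before = (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw
cooling_after_kw = cooling_kw * (1 - 0.30)   # the ~30% cooling savings cited by Hölzle
pue_after = (it_load_kw + cooling_after_kw + other_overhead_kw) / it_load_kw

print(f"PUE before: {pue_before:.2f}, after: {pue_after:.2f}")
# Prints 1.17 before and about 1.13 after, under these assumed overheads
```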

Google Cloud operates a total of 22 cloud regions globally. The newest opened just this morning in Salt Lake City, Utah, and several more are set to launch in the coming quarters as part of the $10-billion-plus expansion plan Alphabet Chief Executive Officer Sundar Pichai outlined this week.

Image: Google
