Nvidia releases a recipe to make GPU computing ubiquitous in data centers

Nvidia's HGX platform. Image Credit: Nvidia

Nvidia wants its graphics processing units (GPUs) to handle the artificial intelligence computing and other heavy-duty parallel processing tasks in giant data centers. To that end, the company is launching a program that makes it easy for its big manufacturing partners to package its GPUs in AI supercomputers.

Nvidia HGX is a kind of starter recipe for original design manufacturers (ODMs) — Foxconn, Inventec, Quanta, and Wistron — to package GPUs in data center computers, said Ian Buck, general manager of accelerated computing at Nvidia, in an interview with VentureBeat. Nvidia CEO Jen-Hsun Huang is announcing HGX at the Computex tech trade show in Taiwan today.

HGX has already been used as the basis for the Microsoft Project Olympus initiative, Facebook’s Big Basin systems, and Nvidia’s own DGX-1 AI supercomputers. Using the recipe, ODMs can quickly design GPU-based systems for hyperscale data centers, and Nvidia engineers will work closely with them to minimize time to deployment.

As the overall demand for AI computing resources has risen sharply over the past year, so have the market adoption and performance of Nvidia’s GPU computing platform. Today, 10 of the world’s top 10 hyperscale businesses are using Nvidia GPU accelerators in their data centers.

And soon, Nvidia will ship its Volta-based AI GPUs, which deliver three times the performance of the predecessor chips.

“The growth of AI is really happening, and that is driving interest in GPUs in the cloud,” said Buck. “Every major cloud computing provider is adopting GPUs, including Google, Amazon Web Services, Tencent, Alibaba, and Microsoft Azure. We’re going to work closely with the Taiwanese ODMs to build servers that power the data centers for the cloud.”

Nvidia built the HGX platform for workloads that must scale massively. HGX can combine GPUs and central processing units (CPUs) in a variety of configurations for high-performance computing, deep learning training, and deep learning inference. All of those are important in modern AI processing.
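To make that division of labor concrete, here is a minimal sketch in generic CUDA of the pattern these systems run at scale: the CPU stages data and launches work, and the GPU executes the parallel math. This is ordinary CUDA runtime code, not anything HGX-specific, and the simple SAXPY kernel is a stand-in for the much larger deep learning kernels a real workload would run.

```cuda
// Minimal sketch: CPU orchestrates, GPU computes. Generic CUDA runtime
// code, not HGX-specific; SAXPY stands in for real deep learning kernels.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU kernel: y = a * x + y, one thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // CPU side: prepare inputs in host memory.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // GPU side: allocate device memory and copy the inputs over.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 5.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```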

“Working more closely with Nvidia will help us infuse a new level of innovation into data center infrastructure worldwide,” said Evan Chien, head of IEC China operations at Inventec, in a statement. “Through our close collaboration, we will be able to more effectively address the compute-intensive AI needs of companies managing hyperscale cloud environments.”

The standard HGX design includes eight Nvidia Tesla GPUs, connected in a mesh using Nvidia’s NVLink high-speed interconnect system. Both Nvidia Tesla P100 and V100 (Volta-based) GPU accelerators are compatible with HGX, which allows for immediate upgrades of all HGX-based products once V100 GPUs become available later this year. A typical server with the Nvidia technology will soon be able to deliver 960 teraflops in deep learning applications (consistent with eight V100s at roughly 120 tensor teraflops apiece), compared to two teraflops for typical CPU servers today, Buck said.
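For readers curious what that eight-GPU mesh looks like from software, the sketch below uses the standard CUDA runtime API to enumerate the GPUs in a node and ask which pairs can read each other's memory directly, the capability that NVLink (or PCIe peer-to-peer) provides. It is generic CUDA, not part of the HGX design itself.

```cuda
// Minimal sketch: enumerate GPUs and check pairwise peer access,
// the capability an NVLink mesh exposes. Generic CUDA, not HGX code.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        fprintf(stderr, "No CUDA devices found.\n");
        return 1;
    }
    printf("%d GPU(s) detected\n", n);

    // On an HGX-style node this loop would cover all eight GPUs.
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int ok = 0;
            // ok becomes 1 if GPU i can access GPU j's memory directly.
            cudaDeviceCanAccessPeer(&ok, i, j);
            printf("GPU %d -> GPU %d: peer access %s\n",
                   i, j, ok ? "yes" : "no");
        }
    }
    return 0;
}
```

On a machine with NVLink-connected GPUs, enabling that access with cudaDeviceEnablePeerAccess lets kernels and copies move data between GPUs without a round trip through host memory.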

“We are defining the server architecture for AI in the cloud that can standardize across everybody,” Buck said. “Taiwan builds the world’s servers, and this is a standard server platform. It’s going to enable a massive performance difference.”
