UPDATED 23:00 EDT / DECEMBER 12 2018


Nvidia ships miniaturized Jetson AGX Xavier machine learning chip for robots

Nvidia Corp. today started shipping Jetson AGX Xavier, a miniaturized machine learning chip geared toward industrial robots and other autonomous machines.

The company first released the module (pictured) on a limited basis last year as part of a development kit for early adopters. That early-access program enabled Nvidia to line up an impressive roster of initial customers ahead of today’s general release.

Chinese e-commerce giants JD Inc. and Meituan Dianping are using Jetson AGX Xavier to build a fleet of delivery robots, while a British startup called Oxford Nanopore Technologies Ltd. is developing a handheld DNA sequencer. Other uses include industrial robots doing optical inspection, preventing vehicles at construction sites from accidentally hitting people and tracking pesticide use on farms.

“There’s going to be millions of autonomous machines,” Deepu Talla, an Nvidia vice president and general manager of autonomous machines, said in a press briefing Wednesday evening.

Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, told SiliconANGLE at the briefing that the chip is a “milestone” toward the longstanding vision of autonomous vehicles and devices beyond the well-known self-driving car.

“What you’re looking at is the second wave of autonomous vehicles after cars, enabled by Xavier,” he said.

Jetson AGX Xavier can perform up to 32 trillion computational operations per second. According to Nvidia, that’s comparable to the performance of some graphics processing units used in professional workstations.

The difference is Jetson AGX Xavier’s footprint. The chip is small enough to fit in a person’s palm and can run on as little as 10 watts, a tenth of the power required by one of Nvidia’s enterprise-grade GPUs. The power budget can be raised to 15 or 30 watts if a machine requires extra processing capacity.
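The efficiency gap can be sketched with quick arithmetic using the figures above: 32 trillion operations per second at a 10-watt floor, versus a workstation-class GPU that (by assumption here, extrapolating from the “10 times less power” claim) delivers comparable throughput at roughly ten times the power draw:

```python
# Performance-per-watt sketch based on the figures in the article.
# The workstation GPU's wattage is an assumption for illustration,
# derived from the stated ~10x power difference.

xavier_tops = 32.0      # trillion operations per second (TOPS)
xavier_watts = 10.0     # minimum configurable power budget

gpu_tops = 32.0         # "comparable" throughput, per Nvidia
gpu_watts = 100.0       # assumed: roughly 10x Xavier's floor

xavier_efficiency = xavier_tops / xavier_watts   # 3.2 TOPS per watt
gpu_efficiency = gpu_tops / gpu_watts            # 0.32 TOPS per watt

print(f"Xavier: {xavier_efficiency:.1f} TOPS/W")
print(f"Workstation GPU: {gpu_efficiency:.2f} TOPS/W")
print(f"Efficiency ratio: {xavier_efficiency / gpu_efficiency:.0f}x")
```

On these assumed numbers, the module delivers about ten times the throughput per watt of its workstation counterpart, which is the whole point of a chip meant to ride on a battery-powered delivery robot.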

Under the hood, Jetson AGX Xavier packs more than 9 billion transistors distributed among various chips. The workhorse at the heart of the module is a Volta GPU with a maximum clock frequency of 1.37 gigahertz and 576 cores. According to Nvidia, 64 of those cores are Tensor Cores, specialized circuits designed to speed up machine learning algorithms.

The GPU is coupled with an eight-core central processing unit, two accelerators optimized for computer vision tasks and 32 gigabytes of onboard flash storage. Jetson AGX Xavier also provides a wealth of connectivity options that let developers hook up multiple sensors to a system.

Jetson AGX Xavier joins the company’s existing Jetson TX1 and Jetson TX2 modules, which also target autonomous machines deployed at the network edge. Komatsu Ltd., one of the world’s leading makers of heavy equipment, last year joined with Nvidia to use the TX2 in a project aimed at improving worker safety on construction sites.

Nvidia is offering the chip for $1,099 in volume orders of 1,000 units or more.

The company isn’t the only chipmaker targeting autonomous vehicles. Intel Corp. in particular has targeted drones, buying chipmaker Movidius Ltd. in 2016 to gain expertise in the computer vision they rely on. But Moorhead said that in Xavier, Nvidia can leverage the chips and software it developed for heavy-duty computation in data centers.

Although this chip is aimed at low-power applications, speed remains paramount to Nvidia across its product line. In another announcement earlier today, the company said it has set six records in AI performance. It claimed the records using its DGX systems that employ its top-of-the-line Tensor Core GPUs running a half-dozen benchmarks in the new MLPerf suite backed by Nvidia, Google LLC, Intel Corp., Baidu Inc. and dozens of other companies.

The upshot, Ian Buck, vice president and general manager of Nvidia’s accelerated computing business, said in a press briefing, is that “we are the most cost-efficient AI platform.”

But it’s an ongoing race. Today Google Cloud also submitted MLPerf numbers for compute services using its Tensor Processing Unit machine learning chips. It claimed the service offers the “most accessible scale” for training machine learning models.

Nvidia’s news comes against the backdrop of a steep fall in its stock price. It’s down 48 percent since Oct. 1 on investor concerns about U.S. trade relations with China and a potential slowdown in the chip industry. And today, Bloomberg reported that sources say SoftBank Group Corp. is looking at selling its $6 billion stake in Nvidia early next year.

With reporting from Robert Hof

Photo: Nvidia
