Researchers ‘Drop the Zeroes’ to Speed Deep Learning

Oct 15th, 2021 3:00am by
Featured image: Kelly Lacy via Pexels; KAUST

Researchers at the King Abdullah University of Science and Technology (KAUST) are now proposing a method of accelerating distributed deep learning by dropping data blocks with zero values, which are frequently produced during distributed machine learning processes that use large datasets.

The growing amount of data needed to train increasingly complex AI models is prompting experts to look for more efficient ways to train deep neural networks. One approach is to implement what is known as distributed deep learning, which scales out the training of models over a wider base of computational resources.

While this form of distributed machine learning is more efficient, the size of newer and larger deep neural networks for computationally intensive natural language processing (NLP) models like BERT and GPT-3 will soon outstrip the computational capacity of current state-of-the-art supercomputers.

Distributed deep learning is often achieved with data parallelization, a form of parallel computing that distributes the data across different parallel processor nodes, thus boosting efficiency by splitting the computational load across a broader range of resources.
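
To make the idea concrete, here is a minimal toy sketch of data parallelism in Python, assuming a simple linear-regression model and synthetic data (this is an illustration of the principle, not the researchers' code): each simulated worker computes a gradient on its own shard of the batch, and the gradients are then averaged, which is the step that real systems perform over the network.

```python
# Illustrative only: a toy simulation of data parallelism with NumPy.
# Each "worker" computes gradients on its own shard of the batch for a
# simple linear model; the gradients are then averaged, which is the step
# that real systems implement with collective communication.
import numpy as np

rng = np.random.default_rng(0)
num_workers = 4
features, batch_size = 8, 64

weights = rng.normal(size=features)             # shared model parameters
data = rng.normal(size=(batch_size, features))  # full training batch
targets = data @ rng.normal(size=features)      # synthetic regression targets

# Split the batch across workers (data parallelism).
shards = np.array_split(np.arange(batch_size), num_workers)

def local_gradient(idx):
    """Mean-squared-error gradient computed on one worker's shard."""
    x, y = data[idx], targets[idx]
    error = x @ weights - y
    return 2.0 * x.T @ error / len(idx)

# Each worker produces a gradient; with equal-sized shards, averaging them
# equals the gradient over the whole batch and is what an all-reduce computes.
grads = [local_gradient(idx) for idx in shards]
avg_grad = np.mean(grads, axis=0)
weights -= 0.01 * avg_grad
print("updated weights:", weights)
```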

The researchers’ method focuses on what is known as collective communication routines, which are a core component of parallel computing applications, and used to combine data among multiple processes that are operating simultaneously. Collective communication routines have to perform smoothly in order to efficiently scale the workload.
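
In practice, that combining step is usually expressed as a collective call such as all-reduce. The following sketch assumes PyTorch's generic torch.distributed API with the Gloo backend, rather than anything specific to the paper, and shows two processes summing their tensors with a single all-reduce:

```python
# Illustrative sketch of a collective communication routine (all-reduce)
# using PyTorch's torch.distributed API. This is a generic example of the
# kind of call such systems rely on, not code from the paper.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Each process holds its own gradient-like tensor...
    grad = torch.full((4,), float(rank))

    # ...and all-reduce sums the tensors element-wise across all processes,
    # leaving every process with the same combined result.
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    print(f"rank {rank} after all-reduce: {grad.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```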

“To enable better scaling, we aim to decrease communication overheads by optimizing collective communication,” wrote the team in their paper, which was presented as part of the 2021 ACM SIGCOMM Conference. “These overheads are substantial in many DNN workloads, especially for large models where there exists a significant gap between the measured performance and ideal linear scaling.”

Dropping Zeroes Speeds up Distributed Deep Learning

During model training, learning tasks are allocated to various computing nodes, which compare their results over the communication network before performing the next task. According to the team, communication between these nodes is a major bottleneck in distributed deep learning.

“Efficient collective communication is crucial to parallel-computing applications such as distributed training of large-scale recommendation systems and natural language processing models,” said the team.

The researchers also observed that as model size grows, the proportion of zero values in the data blocks increases, a phenomenon known as sparsity. While there are already some existing tools for collective communication routines, the team noted that such collective communication libraries don’t support sparse data, which led them to develop their idea.
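
As a rough illustration of block-level sparsity (the sparsity pattern below is synthetic, loosely modeled on an embedding table where only a few rows are updated per step), one can count how many fixed-size blocks of a gradient contain any non-zero value at all:

```python
# Illustrative only: measuring block-level sparsity in a gradient-like
# tensor. The pattern here is synthetic: whole blocks are zero, as happens
# when only a few embedding rows are touched in a training step.
import numpy as np

rng = np.random.default_rng(1)
num_blocks, block_size = 4096, 256

grad = np.zeros((num_blocks, block_size))
touched = rng.choice(num_blocks, size=num_blocks // 10, replace=False)
grad[touched] = rng.normal(size=(touched.size, block_size))

nonzero_blocks = np.any(grad != 0.0, axis=1)
print(f"non-zero blocks: {nonzero_blocks.sum()} of {num_blocks} "
      f"({nonzero_blocks.mean():.1%}); the rest need not be transmitted")
```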

“We propose OmniReduce, an efficient streaming aggregation system that exploits sparsity to maximize effective bandwidth use by sending only non-zero data blocks. Most existing collective libraries — including DDL-specialized ones like NCCL and Gloo — have no native support for sparse data. These libraries assume dense input data and make inefficient use of precious network bandwidth to transmit large volumes of zeroes.”
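
Conceptually, the worker-side half of that idea can be sketched in a few lines, assuming fixed-size blocks and ignoring all networking and coordination details (an illustration of the principle, not OmniReduce's implementation): split the gradient into blocks and transmit only the blocks that contain non-zero values, each tagged with its position.

```python
# Illustrative sketch (not the OmniReduce implementation): a worker splits
# its gradient into fixed-size blocks and sends only the non-zero blocks,
# each tagged with its position so the receiver can place it correctly.
import numpy as np

def nonzero_blocks(grad: np.ndarray, block_size: int):
    """Yield (position, block) pairs for blocks containing any non-zero value."""
    blocks = grad.reshape(-1, block_size)
    for pos, block in enumerate(blocks):
        if np.any(block != 0.0):
            yield pos, block

# Toy gradient: 8 blocks of 4 values, only two blocks are non-zero.
grad = np.zeros(32)
grad[4:8] = 1.0
grad[20:24] = 2.0

payload = list(nonzero_blocks(grad, block_size=4))
print("blocks sent:", [pos for pos, _ in payload])   # -> [1, 5]
```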

OmniReduce builds on an earlier development from KAUST called SwitchML, which uses an aggregation code to optimize the network switches that govern internodal communications, thus increasing the efficiency of data transfers. OmniReduce further streamlines this process by dropping any results with zeroes, without interrupting the synchronization of the parallel computations between nodes. As the team notes, it is challenging to exploit sparsity in this manner, as all nodes have to process data blocks in the same location in a time slot, so coordination is of paramount importance.

“Coordination is key to sending only the non-zero data,” explained the team. “The aggregator globally determines the positions of non-zero values among [nodes] in a look-ahead fashion based on the next position metadata efficiently available at the [nodes] (which communicate it to the aggregator). This component differentiates OmniReduce from any related work.”
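
A simplified model of that look-ahead coordination, again an illustration rather than the actual OmniReduce protocol, might look like this: each worker exposes the position of its next non-zero block, and the aggregator advances to the smallest of those positions, so a position is skipped only when it is zero at every worker.

```python
# Illustrative sketch of the coordination idea (a simplified model, not the
# OmniReduce protocol): the aggregator looks ahead to each worker's next
# non-zero block position and processes the smallest one, so a position is
# skipped only when it is zero at every worker.
def aggregate_sparse(workers, num_blocks):
    """workers: list of dicts mapping block position -> block (non-zero only)."""
    def next_pos(worker, pos):
        candidates = [p for p in worker if p >= pos]
        return min(candidates) if candidates else num_blocks

    result = {}
    pos = min(next_pos(w, 0) for w in workers)      # look-ahead from the start
    while pos < num_blocks:
        # Sum the block at this position over the workers that have it.
        result[pos] = sum(w[pos] for w in workers if pos in w)
        pos = min(next_pos(w, pos + 1) for w in workers)
    return result

workers = [{1: 10, 5: 1}, {1: 2, 3: 4}]             # position -> block value
print(aggregate_sparse(workers, num_blocks=8))      # -> {1: 12, 3: 4, 5: 1}
```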

In testing OmniReduce against existing collective libraries like NCCL and Gloo while running six popular deep neural network models, including BERT and ResNet152, the researchers found that OmniReduce performed well, speeding up training by as much as 8.2 times. They also found that OmniReduce was effective for large-DNN distributed training jobs with multi-GPU servers.

In addition, the team ran tests pitting OmniReduce against other state-of-the-art sparse collective communication solutions, such as AllReduce, SparCML, and Parallax, running on TCP/IP and RDMA networks, and discovered that OmniReduce outperformed these competitors by 3.5 to 16 times.

“The performance benefit of OmniReduce is two-fold,” said the team. “First, OmniReduce is much more scalable, and both speedup factors grow with the number of [nodes] because OmniReduce’s time does not depend on the number of [nodes]. This speedup is fundamental and exists even with a dense input. Second, in contrast to ring AllReduce, OmniReduce only sends non-zero elements, which reduces the time proportionally.”
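
As a back-of-the-envelope illustration of that second point (the link speed, gradient size, and density below are hypothetical, not figures from the paper), if transfer time scales with the bytes sent, then a block density of one eighth cuts communication time by roughly a factor of eight:

```python
# Back-of-the-envelope model (assumed numbers for illustration, not from the
# paper): if communication time is roughly proportional to the bytes sent,
# transmitting only the non-zero blocks cuts it in proportion to density.
model_size_gb = 1.3          # hypothetical gradient volume per iteration
bandwidth_gbps = 100 / 8     # a 100 Gbps link expressed in GB/s
density = 0.125              # hypothetical fraction of non-zero blocks

dense_time = model_size_gb / bandwidth_gbps
sparse_time = dense_time * density
print(f"dense: {dense_time*1e3:.1f} ms, sparse: {sparse_time*1e3:.1f} ms, "
      f"speedup: {dense_time / sparse_time:.1f}x")
```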

The team is now working to adapt OmniReduce to run on programmable switches utilizing in-network computation to further boost performance. So far, OmniReduce has been adopted for training large-scale workloads at Meituan, a huge shopping and on-demand delivery platform based in China.
