SuperNeurons: FFT-based Gradient Sparsification in the Distributed Training of Deep Neural Networks

Linnan Wang, Wei Wu, Yiyang Zhao, Junyu Zhang, Hang Liu, George Bosilca, Jack Dongarra, Maurice Herlihy, Rodrigo Fonseca
Brown University
arXiv:1811.08596 [cs.DC] (21 Nov 2018)

@article{wang2018superneurons,
   title={SuperNeurons: FFT-based Gradient Sparsification in the Distributed Training of Deep Neural Networks},
   author={Wang, Linnan and Wu, Wei and Zhao, Yiyang and Zhang, Junyu and Liu, Hang and Bosilca, George and Dongarra, Jack and Herlihy, Maurice and Fonseca, Rodrigo},
   year={2018},
   month={nov},
   eprint={1811.08596},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

The performance and efficiency of distributed training of Deep Neural Networks highly depend on the performance of gradient averaging among all participating nodes, which is bounded by the communication between nodes. There are two major strategies to reduce communication overhead: one is to hide communication by overlapping it with computation, and the other is to reduce message sizes. The first solution works well for linear neural architectures, but recent networks such as ResNet and Inception offer limited opportunity for such overlapping. Therefore, researchers have paid more attention to minimizing communication. In this paper, we present a novel gradient compression framework derived from insights into real gradient distributions, one that strikes a balance between compression ratio, accuracy, and computational overhead. Our framework has two major novel components: sparsification of gradients in the frequency domain, and a range-based floating-point representation to quantize and further compress the gradient frequencies. Both components are dynamic, with tunable parameters that achieve different compression ratios depending on the accuracy requirements and the target platform, and both achieve very high throughput on GPUs. We prove that our techniques guarantee convergence under a diminishing compression ratio. Our experiments show that the proposed compression framework effectively improves the scalability of the most popular neural networks on a 32-GPU cluster relative to the no-compression baseline, without compromising accuracy or convergence speed.
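As a rough illustration of the frequency-domain sparsification idea described in the abstract, the NumPy sketch below transforms a flattened gradient with a real FFT, keeps only the top-k largest-magnitude frequency coefficients, and reconstructs an approximate gradient from them. The function names, the keep_ratio parameter, and the top-k selection rule are illustrative assumptions rather than the paper's actual algorithm, and the range-based floating-point quantization of the kept coefficients is not shown.

import numpy as np

def compress_gradient(grad, keep_ratio=0.05):
    # Illustrative sketch: sparsify a gradient in the frequency domain by
    # keeping only the largest-magnitude FFT coefficients (hypothetical
    # selection rule; not the paper's exact method).
    flat = grad.ravel()
    freq = np.fft.rfft(flat)                       # real FFT of the flattened gradient
    k = max(1, int(keep_ratio * freq.size))        # number of coefficients to keep
    idx = np.argpartition(np.abs(freq), -k)[-k:]   # indices of the top-k magnitudes
    return idx, freq[idx], flat.size               # sparse frequency-domain message

def decompress_gradient(idx, vals, n, shape):
    # Rebuild an approximate gradient from the kept coefficients.
    freq = np.zeros(n // 2 + 1, dtype=complex)
    freq[idx] = vals
    return np.fft.irfft(freq, n=n).reshape(shape)

# Example usage
g = np.random.randn(4, 256)
idx, vals, n = compress_gradient(g, keep_ratio=0.05)
g_approx = decompress_gradient(idx, vals, n, g.shape)

In such a scheme, only the kept indices and coefficient values would need to be communicated during gradient averaging, which is where the reduction in message size comes from; the accuracy-versus-compression trade-off is governed by how many coefficients are retained.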