A Hybrid Parallelization Approach for Distributed and Scalable Deep Learning

Samson B. Akintoye, Liangxiu Han, Xin Zhang, Haoming Chen, Daoqiang Zhang
Department of Computing and Mathematics, Manchester Metropolitan University, UK
arXiv:2104.05035 [cs.DC] (11 Apr 2021)

@misc{akintoye2021hybrid,
   title={A Hybrid Parallelization Approach for Distributed and Scalable Deep Learning},
   author={Samson B. Akintoye and Liangxiu Han and Xin Zhang and Haoming Chen and Daoqiang Zhang},
   year={2021},
   eprint={2104.05035},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Recently, Deep Neural Networks (DNNs) have achieved great success in handling medical and other complex classification tasks. However, as the size of a DNN model and the available dataset grow, training becomes more complex and computationally intensive and usually takes longer to complete. In this work, we propose a generic, full end-to-end hybrid parallelization approach that combines model and data parallelism for efficient distributed and scalable training of DNN models. We also propose a Genetic Algorithm-based heuristic resource allocation mechanism (GABRA) that optimally distributes model partitions across the available GPUs to optimize computing performance. We apply the proposed approach to a real use case, a 3D Residual Attention Deep Neural Network (3D-ResAttNet) for efficient Alzheimer's Disease (AD) diagnosis, on multiple GPUs. The experimental evaluation shows that the proposed approach is efficient and scalable, achieving an almost linear speedup with little or no difference in accuracy compared with existing non-parallel DNN models.
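
To illustrate the hybrid scheme described in the abstract, below is a minimal, hypothetical PyTorch sketch (PyTorch and the two-GPU layer split are our assumptions; the paper's actual partitioning strategy and the GABRA allocator are not reproduced here). The model's layers are split across two GPUs (model parallelism), and the resulting multi-device module can then be replicated across workers with DistributedDataParallel (data parallelism):

import torch
import torch.nn as nn

class TwoGPUPartitionedNet(nn.Module):
    """Hypothetical DNN whose layers are split across two GPUs."""
    def __init__(self):
        super().__init__()
        # Partition 1 on GPU 0, partition 2 on GPU 1 (model parallelism).
        self.part0 = nn.Sequential(nn.Linear(1024, 512), nn.ReLU()).to("cuda:0")
        self.part1 = nn.Linear(512, 10).to("cuda:1")

    def forward(self, x):
        h = self.part0(x.to("cuda:0"))
        # Activations are transferred between partitions across devices.
        return self.part1(h.to("cuda:1"))

if __name__ == "__main__":
    # Requires a machine with at least two CUDA devices.
    model = TwoGPUPartitionedNet()
    out = model(torch.randn(8, 1024))   # forward pass spans both GPUs
    print(out.shape)                    # torch.Size([8, 10])
    # Data parallelism on top: after torch.distributed initialization,
    # the multi-device module can be replicated once per worker
    # (device_ids must be None for multi-device modules):
    # ddp = nn.parallel.DistributedDataParallel(model)

In the paper's full approach, the assignment of partitions to GPUs would be chosen by the GABRA genetic search rather than hard-coded as above.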
