UniFL: Accelerating Federated Learning Using Heterogeneous Hardware Under a Unified Framework

Biyao Che, Zixiao Wang, Ying Chen, Liang Guo, Yuan Liu, Yuan Tian, Jizhuang Zhao
China Telecom Research Institute, Beijing 102209, China
IEEE Access, Volume 12, 2023

@article{che2023unifl,
  title     = {UniFL: Accelerating federated learning using heterogeneous hardware under a unified framework},
  author    = {Che, Biyao and Wang, Zixiao and Chen, Ying and Guo, Liang and Liu, Yuan and Tian, Yuan and Zhao, Jizhuang},
  journal   = {IEEE Access},
  year      = {2023},
  publisher = {IEEE}
}


Federated learning (FL) is now considered a critical method for breaking down data silos. However, data encryption can significantly increase computing time, limiting FL's large-scale deployment. While hardware acceleration can be an effective solution, existing research has largely focused on a single hardware type, which hinders the acceleration of FL across the heterogeneous hardware of different participants. In light of this challenge, this paper proposes a novel FL acceleration framework that supports diverse types of hardware. First, we analyze the key elements of FL to clarify our accelerator design goals. Second, we propose a unified acceleration framework that divides FL into four layers, providing a basis for implementing and integrating heterogeneous hardware accelerators. We then detail, based on the physical properties of three mainstream acceleration devices, i.e., GPU, ASIC, and FPGA, the architecture of the corresponding accelerators under the framework. Finally, we validate the effectiveness of the proposed heterogeneous hardware acceleration framework through experiments. For specific algorithms, our implementation achieves state-of-the-art acceleration compared with previous work. For end-to-end acceleration, we gain 12×, 7.7×, and 2.2× improvements on GPU, ASIC, and FPGA respectively, compared to CPU, in large-scale vertical linear regression training tasks.
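The encryption bottleneck the abstract refers to typically stems from additively homomorphic cryptosystems such as Paillier, which let a coordinator aggregate encrypted client updates without seeing plaintexts, at the price of large-modulus modular exponentiations. The toy sketch below (not the paper's implementation; it uses deliberately small, insecure primes purely for illustration) shows the additive-aggregation pattern that FL accelerators target:

```python
import math, random

# Toy Paillier cryptosystem. Small fixed primes for illustration only;
# real FL deployments use 1024/2048-bit keys, and the resulting big-integer
# modular exponentiations are the workload offloaded to GPU/ASIC/FPGA.
p, q = 1000003, 1000033          # insecure demo primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # Carmichael function lambda(n) for n = p*q
mu = pow(lam, -1, n)             # modular inverse of lambda mod n

def encrypt(m):
    """Encrypt integer m < n using the g = n + 1 Paillier variant."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = (1 + m*n) * r^n mod n^2  (since (1+n)^m = 1 + m*n mod n^2)
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c):
    """Recover m via L(c^lambda mod n^2) * mu mod n, with L(x) = (x-1)//n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Additive homomorphism: the server aggregates encrypted client updates
# by multiplying ciphertexts, never seeing the individual plaintexts.
c1, c2 = encrypt(17), encrypt(25)
agg = c1 * c2 % n2
print(decrypt(agg))  # 42 == 17 + 25
```

Each `encrypt` call costs one modular exponentiation with exponent n; with realistic key sizes this dominates training time, which is why the paper's four-layer framework pushes exactly these operator-level primitives down to heterogeneous hardware.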
