A Unified FPGA Virtualization Framework for General-Purpose Deep Neural Networks in the Cloud

Shulin Zeng, Guohao Dai, Hanbo Sun, Jun Liu, Shiyao Li, Guangjun Ge, Kai Zhong, Kaiyuan Guo, Yu Wang, Huazhong Yang
Tsinghua University
ACM Transactions on Reconfigurable Technology and Systems, Volume 15, Issue 3, 2022

@article{zeng2021unified,
   title={A Unified FPGA Virtualization Framework for General-Purpose Deep Neural Networks in the Cloud},
   author={Zeng, Shulin and Dai, Guohao and Sun, Hanbo and Liu, Jun and Li, Shiyao and Ge, Guangjun and Zhong, Kai and Guo, Kaiyuan and Wang, Yu and Yang, Huazhong},
   journal={ACM Transactions on Reconfigurable Technology and Systems (TRETS)},
   volume={15},
   number={3},
   pages={1--31},
   year={2021},
   publisher={ACM New York, NY}
}

INFerence-as-a-Service (INFaaS) has become a primary workload in the cloud. However, existing FPGA-based Deep Neural Network (DNN) accelerators are mainly optimized for the fastest speed of a single task, while the multi-tenancy of INFaaS has not yet been explored. As the demand for INFaaS keeps growing, simply increasing the number of FPGA-based DNN accelerators is not cost-effective, while merely sharing these single-task-optimized DNN accelerators in a time-division-multiplexing way leads to poor isolation and high performance loss for INFaaS. On the other hand, current cloud-based DNN accelerators incur excessive compilation overhead, especially when scaling out to multi-FPGA systems for multi-tenant sharing, leading to unacceptable compilation costs for both offline deployment and online reconfiguration. They are therefore far from providing efficient and flexible FPGA virtualization for public and private cloud scenarios. To solve these problems, we propose a unified virtualization framework for general-purpose deep neural networks in the cloud, enabling multi-tenant sharing for both Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) accelerators on a single FPGA. Isolation is enabled by introducing a two-level instruction dispatch module and a multi-core-based hardware resource pool. These designs provide isolated and runtime-programmable hardware resources, which in turn yield performance isolation for multi-tenant sharing. To overcome the heavy re-compilation overheads, a tiling-based instruction frame package design and a two-stage static-dynamic compilation are proposed. Only the lightweight runtime information is re-compiled, with ~1 ms overhead, thus guaranteeing the private cloud's performance.
Finally, extensive experimental results show that the proposed virtualized solutions achieve up to 3.12x and 6.18x higher throughput in the private cloud compared with the static CNN and RNN baseline designs, respectively.
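To make the two-stage compilation idea concrete, here is a minimal, hypothetical sketch of how a static offline stage might generate tiling-based instruction frame packages once, while a dynamic online stage patches only the lightweight runtime fields (core assignment, base addresses). All names and structures below are illustrative assumptions, not the authors' actual framework or API.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InstrFramePackage:
    """A tile-level instruction frame: static body plus runtime-patchable header."""
    layer: str
    tile_id: int
    core_id: int = -1      # filled in by the dynamic stage
    base_addr: int = 0     # filled in by the dynamic stage

def static_compile(layers: List[str], tiles_per_layer: int) -> List[InstrFramePackage]:
    """Offline stage: the heavy, layer-level instruction generation runs once."""
    return [InstrFramePackage(layer=l, tile_id=t)
            for l in layers for t in range(tiles_per_layer)]

def dynamic_compile(frames: List[InstrFramePackage],
                    allocated_cores: List[int],
                    base_addrs: Dict[int, int]) -> List[InstrFramePackage]:
    """Online stage: only lightweight runtime info (core IDs, addresses) is
    re-compiled, so re-allocating cores to a tenant avoids full re-compilation."""
    for i, f in enumerate(frames):
        f.core_id = allocated_cores[i % len(allocated_cores)]
        f.base_addr = base_addrs[f.core_id]
    return frames

# Example: deploy two layers' frames onto two cores of the resource pool.
frames = static_compile(["conv1", "conv2"], tiles_per_layer=4)
deployed = dynamic_compile(frames, allocated_cores=[0, 1],
                           base_addrs={0: 0x1000, 1: 0x2000})
```

The design point this sketch illustrates is that the static output is reusable across deployments: changing the number of allocated cores only re-runs `dynamic_compile`, which is what lets re-compilation stay in the ~1 ms range described in the abstract.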

* * *

HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors

Contact us: