Accelerate Scientific Deep Learning Models on Heterogeneous Computing Platform with FPGA

Chao Jiang, David Ojika, Sofia Vallecorsa, Thorsten Kurth, Prabhat, Bhavesh Patel, Herman Lam
SHREC: NSF Center for Space, High-Performance, and Resilient Computing, University of Florida
EPJ Web of Conferences 245, 09014, 2020

@inproceedings{jiang2020accelerate,
   title={Accelerate Scientific Deep Learning Models on Heterogeneous Computing Platform with FPGA},
   author={Jiang, Chao and Ojika, David and Vallecorsa, Sofia and Kurth, Thorsten and Patel, Bhavesh and Lam, Herman and others},
   booktitle={EPJ Web of Conferences},
   volume={245},
   pages={09014},
   year={2020},
   organization={EDP Sciences}
}


AI and deep learning are experiencing explosive growth in almost every domain involving analysis of big data. Deep learning using Deep Neural Networks (DNNs) has shown great promise for such scientific data analysis applications. However, traditional CPU-based sequential computing without special instructions can no longer meet the requirements of mission-critical applications, which are compute-intensive and require low latency and high throughput. Heterogeneous computing (HGC), with CPUs integrated with GPUs, FPGAs, and other science-targeted accelerators, offers unique capabilities to accelerate DNNs. Collaborating researchers at SHREC at the University of Florida, CERN Openlab, NERSC at Lawrence Berkeley National Lab, Dell EMC, and Intel are studying the application of HGC to scientific problems using DNN models. This paper focuses on the use of FPGAs to accelerate the inferencing stage of the HGC workflow. We present case studies and results from inferencing state-of-the-art DNN models for scientific data analysis, using the Intel Distribution of OpenVINO toolkit running on an Intel Programmable Acceleration Card (PAC) equipped with an Arria 10 GX FPGA. Using the Intel Deep Learning Acceleration (DLA) development suite to optimize existing FPGA primitives and develop new ones, we were able to accelerate the scientific DNN models under study with speedups ranging from 2.46x to 9.59x for a single Arria 10 FPGA against a single core (single thread) of a server-class Skylake CPU.
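The inference path summarized in the abstract (an OpenVINO IR model dispatched to the Arria 10 PAC through the FPGA plugin, with CPU fallback for unsupported layers) can be illustrated with a minimal sketch using the legacy OpenVINO Inference Engine Python API. The model paths, input shape, and device string below are placeholder assumptions for illustration, not taken from the paper, and the paper's DLA primitive optimizations are not shown.

# Minimal sketch (assumptions, not the paper's DLA-optimized pipeline):
# run an OpenVINO IR model on an Intel PAC with Arria 10 GX FPGA, falling
# back to the CPU for any layers the FPGA plugin cannot execute.
import numpy as np
from openvino.inference_engine import IECore  # legacy (pre-2022) OpenVINO API

MODEL_XML = "model.xml"  # hypothetical IR files produced by the Model Optimizer
MODEL_BIN = "model.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)

# The board must already be programmed with a matching DLA bitstream
# (e.g., via the aocl utility) before the FPGA plugin can be used.
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

input_name = next(iter(net.input_info))   # "net.inputs" in older releases
output_name = next(iter(net.outputs))
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape

result = exec_net.infer(inputs={input_name: batch})
print(result[output_name].shape)

The HETERO device string is what provides the CPU-fallback behavior: layers not supported by the FPGA primitives are scheduled on the CPU plugin instead of failing the whole network.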