
Exploring Thread Coarsening on FPGA

Mostafa Eghbali Zarch, Reece Neff, Michela Becchi
North Carolina State University, Raleigh, NC, USA
arXiv:2208.11890 [cs.DC] (25 Aug 2022)

@inproceedings{Zarch_2021,
   doi       = {10.1109/hipc53243.2021.00062},
   url       = {https://doi.org/10.1109%2Fhipc53243.2021.00062},
   year      = {2021},
   month     = {dec},
   publisher = {IEEE},
   author    = {Mostafa Eghbali Zarch and Reece Neff and Michela Becchi},
   title     = {Exploring Thread Coarsening on {FPGA}},
   booktitle = {2021 {IEEE} 28th International Conference on High Performance Computing, Data, and Analytics ({HiPC})}
}

Over the past few years, there has been an increased interest in including FPGAs in data centers and high-performance computing clusters along with GPUs and other accelerators. As a result, it has become increasingly important to have a unified, high-level programming interface for CPUs, GPUs and FPGAs. This has led to the development of compiler toolchains to deploy OpenCL code on FPGA. However, the fundamental architectural differences between GPUs and FPGAs have led to performance portability issues: it has been shown that OpenCL code optimized for GPU does not necessarily map well to FPGA, often requiring manual optimizations to improve performance. In this paper, we explore the use of thread coarsening – a compiler technique that consolidates the work of multiple threads into a single thread – on OpenCL code running on FPGA. While this optimization has been explored on CPU and GPU, the architectural features of FPGAs and the nature of the parallelism they offer lead to different performance considerations, making an analysis of thread coarsening on FPGA worthwhile. Our evaluation, performed on our microbenchmarks and on a set of applications from open-source benchmark suites, shows that thread coarsening can yield performance benefits (up to 3-4x speedups) to OpenCL code running on FPGA at a limited resource utilization cost.
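To illustrate the technique discussed in the abstract, below is a minimal OpenCL sketch of thread coarsening applied to a simple vector-add kernel. The kernel names, the coarsening factor of 4, and the use of #pragma unroll are illustrative assumptions for exposition and are not taken from the paper's microbenchmarks or evaluation.

/* Baseline kernel: one work-item processes one element. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c,
                      const int n)
{
    int i = get_global_id(0);
    if (i < n)
        c[i] = a[i] + b[i];
}

/* Coarsened variant (hypothetical factor of 4): each work-item now
 * processes four consecutive elements, so the host enqueues roughly
 * n/4 work-items. On FPGA, the unrolled body can be compiled into a
 * wider datapath, trading extra logic/DSP resources for fewer
 * work-items and reduced scheduling overhead. */
#define CF 4
__kernel void vec_add_coarse(__global const float *a,
                             __global const float *b,
                             __global float *c,
                             const int n)
{
    int base = get_global_id(0) * CF;
    #pragma unroll
    for (int k = 0; k < CF; ++k) {
        int i = base + k;
        if (i < n)
            c[i] = a[i] + b[i];
    }
}

On the host side, the coarsened kernel would be launched with a global size of about n/4 (rounded up to a multiple of the work-group size), which is where the reduction in thread count is traded against the added resource utilization noted in the abstract.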
