Toward a Generic Hybrid CPU-GPU Parallelization of Divide-and-Conquer Algorithms

Alejandro Lopez-Ortiz, Alejandro Salinger, Robert Suderman
David R. Cheriton School of Computer Science, University of Waterloo, 200 University Ave West, Waterloo, ON, N2L 3G1, Canada
International Journal of Networking and Computing, Vol 4, No 1, pages 131-150, 2014


@article{lopezortiz2014hybrid,
   title={Toward a Generic Hybrid CPU-GPU Parallelization of Divide-and-Conquer Algorithms},
   author={L{\'o}pez-Ortiz, Alejandro and Salinger, Alejandro and Suderman, Robert},
   journal={International Journal of Networking and Computing},
   volume={4},
   number={1},
   pages={131--150},
   year={2014}
}









In the last few years, the development of programming languages for general-purpose computing on Graphics Processing Units (GPUs) has led to the design and implementation of fast parallel algorithms on this architecture for a broad spectrum of applications. Given the stream-processing characteristics of GPUs, most practical applications consist of tasks that admit highly data-parallel algorithms. Many problems, however, allow for task-parallel solutions or a combination of task- and data-parallel algorithms. For these, a hybrid CPU-GPU parallel algorithm that combines the highly parallel stream-processing power of GPUs with the higher scalar power of multi-cores is likely to be superior. In this paper we describe a generic translation of any recursive sequential implementation of a divide-and-conquer algorithm into an implementation that benefits from running in parallel on both multi-cores and GPUs. This translation is generic in the sense that it requires little knowledge of the particular algorithm. We then present a scheduling and work-division scheme that adapts to the characteristics of each algorithm and the underlying architecture, efficiently balancing the workload between GPU and CPU. Our experiments show a 4.5x speedup over a single-core recursive implementation, while demonstrating the accuracy and practicality of the approach.
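The general shape of such a hybrid divide-and-conquer scheme can be illustrated with a small sketch. This is not the authors' implementation: the function names, the fixed depth and size cutoffs, and the use of Python threads standing in for CPU cores are all assumptions for illustration, and the GPU offload point is simulated by a plain sequential base case where a real system would launch a data-parallel kernel.

```python
# Illustrative sketch of hybrid CPU-GPU divide-and-conquer (mergesort as the
# running example). Upper levels of the recursion tree run task-parallel on
# CPU threads; once subproblems shrink below a cutoff, they would be handed
# to the GPU as data-parallel work. Names and cutoffs are hypothetical.
from concurrent.futures import ThreadPoolExecutor

GPU_THRESHOLD = 8   # hypothetical size below which work is offloaded
CPU_FORK_DEPTH = 2  # hypothetical depth down to which CPU tasks are spawned

def gpu_base_case(data):
    # Stand-in for a data-parallel GPU kernel; here just a sequential sort.
    return sorted(data)

def merge(left, right):
    # Standard two-way merge (the "conquer" step).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def hybrid_sort(data, executor, depth=0):
    # Small subproblem: offload (simulated GPU base case).
    if len(data) <= GPU_THRESHOLD:
        return gpu_base_case(data)
    mid = len(data) // 2
    if depth < CPU_FORK_DEPTH:
        # Task-parallel near the root: one half runs as a separate CPU task.
        left_future = executor.submit(hybrid_sort, data[:mid], executor, depth + 1)
        right = hybrid_sort(data[mid:], executor, depth + 1)
        left = left_future.result()
    else:
        # Deeper in the tree, recurse sequentially to avoid task overhead.
        left = hybrid_sort(data[:mid], executor, depth + 1)
        right = hybrid_sort(data[mid:], executor, depth + 1)
    return merge(left, right)

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as ex:
        print(hybrid_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0], ex))
```

The cutoffs here are fixed constants; the paper's scheduling and work-division scheme instead tunes the CPU/GPU split to the algorithm and the underlying hardware.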

