
Locality Aware Work-Stealing Based Scheduling in Hybrid CPU-GPU

Tarun Beri, Sorav Bansal, Subodh Kumar
Indian Institute of Technology Delhi, New Delhi, India
The 2015 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA’15), 2015

@inproceedings{beri2015locality,
   title={Locality Aware Work-Stealing Based Scheduling in Hybrid CPU-GPU},
   author={Beri, Tarun and Bansal, Sorav and Kumar, Subodh},
   booktitle={The 2015 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA’15)},
   year={2015}
}

We study work-stealing based scheduling on a cluster of nodes with CPUs and GPUs. In particular, we evaluate locality-aware scheduling in the context of distributed shared memory style programming, where the user is oblivious to data placement. Our runtime maintains a distributed map of the data resident on each node and uses it to estimate the affinity of work to different nodes to guide scheduling. We propose heuristics for incorporating locality in the stealing decision and compare their performance with a locality-oblivious scheduler. Specifically, we explore two heuristics that focus on minimizing the cost of fetching non-local data: one minimizes the number of remote data transfer events, the other the number of remote virtual memory pages fetched. Finally, we also study the impact of different placements of the initial input, such as block-cyclic, random, and centralized, on the scheduler. We implement and evaluate these schedulers within Unicorn, a heterogeneous framework that decomposes bulk synchronous computations over a cluster of nodes. Compared to a locality-oblivious scheduler, the average observed overhead of our techniques is less than 8%. We show that even with this overhead, the average performance gain is between 10.35% and 10.6% in LU decomposition of a one-billion-element matrix and between 12.74% and 14.55% in multiplication of two square matrices of one billion elements each on a 10-node cluster with 120 CPUs and 20 GPUs.
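Both heuristics amount to choosing, from a victim's pending subtasks, the one whose non-local data would be cheapest to fetch according to the distributed residency map. The following C++ sketch is illustrative only: the identifiers (SubtaskMeta, Heuristic, choose_victim_subtask, remote_pages, remote_transfer_events) are hypothetical and not taken from the Unicorn implementation described in the paper.

// Illustrative sketch, not Unicorn code: costs are assumed to have been
// precomputed from the distributed map of data residency.
#include <cstdint>
#include <limits>
#include <vector>

struct SubtaskMeta {
    uint64_t remote_pages;            // virtual memory pages that would be fetched from other nodes
    uint64_t remote_transfer_events;  // distinct remote fetch operations that would be issued
};

// Heuristic 1: minimize remote data transfer events.
// Heuristic 2: minimize remote virtual memory pages fetched.
enum class Heuristic { MinTransferEvents, MinRemotePages };

// Pick which of the victim's pending subtasks to steal.
std::size_t choose_victim_subtask(const std::vector<SubtaskMeta>& queue, Heuristic h) {
    std::size_t best = 0;
    uint64_t best_cost = std::numeric_limits<uint64_t>::max();
    for (std::size_t i = 0; i < queue.size(); ++i) {
        const uint64_t cost = (h == Heuristic::MinTransferEvents)
                                  ? queue[i].remote_transfer_events
                                  : queue[i].remote_pages;
        if (cost < best_cost) {
            best_cost = cost;
            best = i;
        }
    }
    return best;  // a locality-oblivious scheduler would instead pick a subtask without consulting these costs
}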
