
Work-Efficient Parallel GPU Methods for Single-Source Shortest Paths

Andrew Davidson, Sean Baxter, Michael Garland, John D. Owens
University of California, Davis
International Parallel and Distributed Processing Symposium, 2014

@inproceedings{davidson2014work,
  title     = {Work-Efficient Parallel GPU Methods for Single-Source Shortest Paths},
  author    = {Davidson, Andrew Alan and Baxter, Sean and Garland, Michael and Owens, John D},
  booktitle = {International Parallel and Distributed Processing Symposium},
  volume    = {28},
  year      = {2014}
}


Finding the shortest paths from a single source to all other vertices is a fundamental method used in a variety of higher-level graph algorithms. We present three parallel-friendly and work-efficient methods to solve this Single-Source Shortest Paths (SSSP) problem: Workfront Sweep, Near-Far and Bucketing. These methods choose different approaches to balance the tradeoff between saving work and organizational overhead. In practice, all of these methods do much less work than traditional Bellman-Ford methods, while adding only a modest amount of extra work over serial methods. These methods are designed to have a sufficient parallel workload to fill modern massively-parallel machines, and select reorganizational schemes that map well to these architectures. We show that in general our Near-Far method has the highest performance on modern GPUs, outperforming other parallel methods. We also explore a variety of parallel load-balanced graph traversal strategies and apply them towards our SSSP solver. Our work-saving methods always outperform a traditional GPU Bellman-Ford implementation, achieving rates up to 14x higher on low-degree graphs and 340x higher on scale-free graphs. We also see significant speedups (20-60x) when compared against a serial implementation on graphs with adequately high degree.
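To make the Near-Far idea concrete, below is a minimal serial C++ sketch of the near/far worklist splitting described in the abstract: vertices whose tentative distances fall below a moving threshold are relaxed in the "near" pile, everything else is deferred to a "far" pile, and when the near pile drains the threshold is advanced by a step size delta and the far pile is re-split. This is only an illustration of the idea under assumed names (Graph, nearFarSSSP, delta), not the authors' GPU implementation, which performs these steps with massively parallel, load-balanced traversal.

```cpp
// Serial sketch of a Near-Far style worklist for SSSP.
// All identifiers here (Graph, nearFarSSSP, delta) are illustrative only.
#include <cstdio>
#include <limits>
#include <vector>

struct Edge { int to; float w; };
using Graph = std::vector<std::vector<Edge>>;   // adjacency list

std::vector<float> nearFarSSSP(const Graph& g, int source, float delta) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> dist(g.size(), INF);
    dist[source] = 0.0f;

    std::vector<int> nearPile{source}, farPile;
    float threshold = delta;                    // split point between near and far

    while (!nearPile.empty() || !farPile.empty()) {
        // Relax the near pile; updated vertices go back to near if their new
        // distance stays under the threshold, otherwise to the far pile.
        while (!nearPile.empty()) {
            std::vector<int> next;
            for (int u : nearPile) {
                for (const Edge& e : g[u]) {
                    float cand = dist[u] + e.w;
                    if (cand < dist[e.to]) {
                        dist[e.to] = cand;
                        (cand < threshold ? next : farPile).push_back(e.to);
                    }
                }
            }
            nearPile.swap(next);
        }
        // Near pile drained: raise the threshold and move far-pile entries
        // that now fall below it into the near pile for the next pass.
        threshold += delta;
        std::vector<int> stillFar;
        for (int v : farPile)
            (dist[v] < threshold ? nearPile : stillFar).push_back(v);
        farPile.swap(stillFar);
    }
    return dist;
}

int main() {
    // Tiny example: 4 vertices, weighted directed edges.
    Graph g(4);
    g[0] = {{1, 1.0f}, {2, 4.0f}};
    g[1] = {{2, 2.0f}, {3, 6.0f}};
    g[2] = {{3, 1.0f}};
    std::vector<float> d = nearFarSSSP(g, 0, /*delta=*/2.0f);
    for (size_t v = 0; v < d.size(); ++v)
        std::printf("dist[%zu] = %g\n", v, d[v]);   // expected: 0 1 3 4
    return 0;
}
```

In this sketch the far pile may hold duplicate or stale entries; they are simply re-relaxed and cause no harm, which mirrors how such schemes trade a little redundant work for much cheaper queue management.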
