GPU-ABiSort: Optimal Parallel Sorting on Stream Architectures

A. Greß, G. Zachmann
Institute of Computer Science II, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
Proceedings of the 20th IEEE International Parallel and Distributed Processing Symposium, pp. 1-10, 2006

@conference{gres2006gpu,
   title={GPU-ABiSort: Optimal parallel sorting on stream architectures},
   author={Gre{\ss}, A. and Zachmann, G.},
   booktitle={Proceedings of the 20th IEEE International Parallel and Distributed Processing Symposium},
   pages={45},
   year={2006},
   organization={IEEE}
}


In this paper, we present a novel approach for parallel sorting on stream processing architectures. It is based on adaptive bitonic sorting. For sorting n values utilizing p stream processor units, this approach achieves the optimal time complexity O((n log n)/p). This makes our approach competitive with common sequential sorting algorithms not only from a theoretical viewpoint; it is also very fast in practice. This is achieved by using efficient linear stream memory accesses and by combining the optimal-time approach with algorithms optimized for small input sequences. We present an implementation on modern programmable graphics hardware (GPUs). On GPUs, our optimal parallel sorting approach has proven to be remarkably faster than sequential sorting on the CPU, and it is also faster than previous non-optimal sorting approaches on the GPU for sufficiently large input sequences. Because our algorithm scales excellently with the number of stream processor units p (up to n/log² n or even n/log n units, depending on the stream architecture), our approach profits heavily from the increasing number of fragment processor units on GPUs, so further speed improvements can be expected with upcoming GPU generations.
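
The paper's adaptive bitonic sort and its stream-memory layout are not reproduced on this page. Purely as an illustration of the bitonic compare-and-exchange pattern that the approach builds on, below is a minimal CUDA sketch of a classic (non-adaptive) bitonic sorting network; all names (bitonicStep, bitonicSort) and launch parameters are assumptions for illustration, not the authors' implementation.

    // Minimal sketch of a classic (non-adaptive) bitonic sorting network in CUDA.
    // This is NOT the GPU-ABiSort algorithm from the paper; it only illustrates
    // the compare-and-exchange pattern that adaptive bitonic sorting builds on.
    #include <cstdio>
    #include <cstdlib>
    #include <vector>
    #include <algorithm>
    #include <cuda_runtime.h>

    // One compare-and-exchange pass: k is the size of the bitonic subsequences
    // being merged, j is the current compare distance. n must be a power of two.
    __global__ void bitonicStep(float* data, int j, int k)
    {
        unsigned int i   = blockIdx.x * blockDim.x + threadIdx.x;
        unsigned int ixj = i ^ j;                  // partner index for the exchange
        if (ixj > i) {
            bool ascending = ((i & k) == 0);       // sort direction of this subsequence
            if (( ascending && data[i] > data[ixj]) ||
                (!ascending && data[i] < data[ixj])) {
                float tmp = data[i];
                data[i]   = data[ixj];
                data[ixj] = tmp;
            }
        }
    }

    // Host-side driver: O(log^2 n) kernel passes over device memory.
    void bitonicSort(float* d_data, int n)
    {
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        for (int k = 2; k <= n; k <<= 1)
            for (int j = k >> 1; j > 0; j >>= 1)
                bitonicStep<<<blocks, threads>>>(d_data, j, k);
        cudaDeviceSynchronize();
    }

    int main()
    {
        const int n = 1 << 10;                     // power-of-two input size
        std::vector<float> h(n);
        for (int i = 0; i < n; ++i) h[i] = float(rand() % 1000);

        float* d = nullptr;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        bitonicSort(d, n);
        cudaMemcpy(h.data(), d, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d);

        printf("sorted: %s\n", std::is_sorted(h.begin(), h.end()) ? "yes" : "no");
        return 0;
    }

Unlike this fixed sorting network, the adaptive bitonic approach described in the paper reduces the work to O(n log n) overall, which is what yields the optimal O((n log n)/p) parallel time.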
