Performance Evaluation of Parallel Count Sort using GPU Computing with CUDA

Neetu Faujdar, SatyaPrakash Ghrera
Jaypee University of Information Technology, Department of CSE, Waknaghat, Himachal Pradesh, India
Indian Journal of Science and Technology, Volume 9, Issue 15, 2016


@article{faujdar2016performance,
   title={Performance Evaluation of Parallel Count Sort using GPU Computing with CUDA},
   author={Faujdar, Neetu and Ghrera, SatyaPrakash},
   journal={Indian Journal of Science and Technology},
   volume={9},
   number={15},
   year={2016}
}




OBJECTIVE: Sorting is an important operation in many areas of computer science. Parallelization of sorting algorithms using GPU computing on CUDA hardware is growing rapidly; the objective of using GPU computing is to achieve greater speedup of the algorithms.

METHODS: In this paper, we focus on count sort, a very efficient sort with time complexity O(n). The drawback of count sort is that it is not recommended for larger data sets, because its cost depends on the range of the key elements. This drawback is taken as the research concern of this paper, and we parallelize count sort using GPU computing with CUDA.

FINDINGS: We measured the speedup achieved by parallel count sort over sequential count sort. A sorting benchmark was used to test and measure the performance of both versions of count sort (parallel and sequential). The benchmark comprises six types of test cases: uniform, bucket, Gaussian, sorted, staggered, and zero. We tested the parallel and sequential count sort on data sets ranging from N = 1,000 to N = 10,000,000.

IMPROVEMENT: After testing, the parallel count sort achieved a 66-fold improvement in execution time on the Gaussian test case. We found that the parallel count sort outperforms the sequential version in all test cases.

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
