
High Performance GPU Implementation of KNN Algorithm: A Review

Pooja Bidye, Pradnya Borkar, Nitin Rakesh
Symbiosis International (Deemed University), Pune 412115, India
MethodsX, 103633, 2025

@article{bidye2025high,
   title={High Performance GPU Implementation of KNN Algorithm: A Review},
   author={Bidye, Pooja and Borkar, Pradnya and Rakesh, Nitin},
   journal={MethodsX},
   pages={103633},
   year={2025},
   publisher={Elsevier}
}


With the large volumes of complex data generated by modern applications, Machine Learning (ML) algorithms alone may not yield significant performance benefits on a single- or multi-core CPU. Applying optimization techniques to these ML algorithms in a High-Performance Computing (HPC) environment can deliver considerable speedups for high-dimensional datasets. One of the most widely used classification algorithms, with applications across many domains, is the K-Nearest Neighbor (KNN) algorithm. Despite its simplicity, KNN poses several challenges when handling high-dimensional data; however, its inherently data-parallel structure makes it a natural candidate for parallelization. This paper reviews the optimization techniques employed by several researchers to accelerate the KNN algorithm on GPU platforms. The study reveals that techniques such as coalesced memory access, tiling with shared memory, chunking, data segmentation, and pivot-based partitioning contribute significantly to speeding up KNN by exploiting GPU capabilities. The algorithms reviewed perform exceptionally well on high-dimensional data, with speedups of up to 750x on a dual-GPU platform and up to 1840x on a multi-GPU platform. This study serves as a valuable resource for researchers examining KNN acceleration in high-performance computing environments and its applications in various fields.
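
For illustration only, the sketch below shows how tiling with shared memory and coalesced global-memory access are typically combined in the distance step of a brute-force GPU KNN, the kind of optimization surveyed in this review. It is a minimal sketch, not code from any of the reviewed papers: the kernel name distanceTiled, the tile width TILE, and the array names ref, query, and dist are assumptions made for this example, and a per-query selection of the k smallest distances (e.g., a partial sort) would follow as a separate step.

// Minimal sketch, assuming row-major float arrays already resident on the GPU.
// Computes the squared Euclidean distance matrix between query and reference
// points -- the dominant cost of brute-force KNN -- using coalesced loads and
// shared-memory tiles. All names (distanceTiled, TILE, ref, query, dist) are
// illustrative, not taken from the reviewed papers.
#include <cuda_runtime.h>

#define TILE 16  // tile width; one thread per (query, reference) pair

// ref:   nRef   x dim   reference points
// query: nQuery x dim   query points
// dist:  nQuery x nRef  output squared distances
__global__ void distanceTiled(const float *ref, int nRef,
                              const float *query, int nQuery,
                              int dim, float *dist)
{
    // +1 padding on the reference tile avoids shared-memory bank conflicts
    __shared__ float sRef[TILE][TILE + 1];
    __shared__ float sQuery[TILE][TILE];

    int refIdx   = blockIdx.x * TILE + threadIdx.x;  // reference point for this thread
    int queryIdx = blockIdx.y * TILE + threadIdx.y;  // query point for this thread
    float acc = 0.0f;

    // March along the feature dimension in TILE-wide chunks.
    for (int d0 = 0; d0 < dim; d0 += TILE) {
        // Cooperative loads: threadIdx.x walks consecutive coordinates,
        // so each warp reads contiguous global memory (coalesced access).
        int d        = d0 + threadIdx.x;
        int refRow   = blockIdx.x * TILE + threadIdx.y;
        int queryRow = blockIdx.y * TILE + threadIdx.y;

        sRef[threadIdx.y][threadIdx.x] =
            (refRow < nRef && d < dim) ? ref[refRow * dim + d] : 0.0f;
        sQuery[threadIdx.y][threadIdx.x] =
            (queryRow < nQuery && d < dim) ? query[queryRow * dim + d] : 0.0f;
        __syncthreads();

        // Each coordinate loaded above is reused TILE times from shared memory.
        int chunk = min(TILE, dim - d0);
        for (int k = 0; k < chunk; ++k) {
            float diff = sQuery[threadIdx.y][k] - sRef[threadIdx.x][k];
            acc += diff * diff;
        }
        __syncthreads();
    }

    if (refIdx < nRef && queryIdx < nQuery)
        dist[queryIdx * nRef + refIdx] = acc;
}

// Example launch configuration:
//   dim3 block(TILE, TILE);
//   dim3 grid((nRef + TILE - 1) / TILE, (nQuery + TILE - 1) / TILE);
//   distanceTiled<<<grid, block>>>(d_ref, nRef, d_query, nQuery, dim, d_dist);

The tiling means each coordinate fetched from global memory is reused TILE times from fast shared memory, which is the intuition behind the shared-memory and coalesced-access speedups reported in the reviewed work.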
