Scalable and highly parallel implementation of Smith-Waterman on graphics processing unit using CUDA

Ali Akoglu, Gregory Striemer
Department of Electrical and Computer Engineering, University of Arizona, 1230 E. Speedway Blvd., Tucson, Arizona 85721, USA
Cluster Computing, Volume 12, Number 3, 341-352

@article{akoglu2009scalable,
   title={Scalable and highly parallel implementation of Smith-Waterman on graphics processing unit using CUDA},
   author={Akoglu, A. and Striemer, G.M.},
   journal={Cluster Computing},
   volume={12},
   number={3},
   pages={341--352},
   issn={1386-7857},
   year={2009},
   publisher={Springer}
}

Program development environments have enabled graphics processing units (GPUs) to become an attractive high-performance computing platform for the scientific community. A commonly posed problem in computational biology is searching a protein database for functional similarities. The most accurate algorithm for sequence alignment is Smith-Waterman (SW). However, due to its computational complexity and rapidly increasing database sizes, the process becomes more and more time-consuming, making cluster-based systems more desirable. Therefore, scalable and highly parallel methods are necessary to make SW a viable solution for life science researchers. In this paper we evaluate how SW fits onto the target GPU architecture by exploring ways to map the program structure onto the processor architecture. We develop new techniques to reduce the memory footprint of the application while exploiting the memory hierarchy of the GPU. With this implementation, GSW, we overcome the on-chip memory size constraint, achieving a 23x speedup compared to a serial implementation. Results show that as the query length increases our speedup remains nearly stable, indicating the solid scalability of our approach. Additionally, this is a first-of-its-kind implementation that runs purely on the GPU instead of in a CPU-GPU integrated environment, making our design suitable for porting onto a cluster of GPUs.
