
The optimization of parallel Smith-Waterman sequence alignment using on-chip memory of GPGPU

Qian Zhang, Hong An, Gu Liu, Wenting Han, Ping Yao, Mu Xu, Xiaoqiang Li
Dept. of Comput. Sci. & Technol., Univ. of Sci. & Technol. of China, Hefei, China
IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA), 2010

@conference{zhang2010optimization,

   title={The optimization of parallel Smith-Waterman sequence alignment using on-chip memory of GPGPU},

   author={Zhang, Q. and An, H. and Liu, G. and Han, W. and Yao, P. and Xu, M. and Li, X.},

   booktitle={Bio-Inspired Computing: Theories and Applications (BIC-TA), 2010 IEEE Fifth International Conference on},

   pages={844--850},

   year={2010},

   organization={IEEE}

}


Memory optimization is an important strategy for achieving high performance in sequence alignment implemented with CUDA on GPGPUs. The Smith-Waterman (SW) algorithm is the most sensitive algorithm widely used for local sequence alignment, but it is very time consuming. Although several parallel methods have been studied and have shown good performance, the advantages of the GPGPU memory hierarchy are still not fully exploited. This paper presents a new parallel method on GPGPU that uses on-chip memory more efficiently to optimize the parallel Smith-Waterman sequence alignment presented by Gregory M. Striemer. To minimize the cost of data transfers, on-chip shared memory is used to store intermediate results. Constant memory is also used effectively in our implementation of the parallel Smith-Waterman algorithm. Using these two kinds of on-chip memory reduces long-latency memory accesses and lowers the demand for global memory when aligning longer sequences. The experimental results show a 1.66x to 3.16x speedup over Striemer's parallel SW on GPGPU in terms of execution time, and a 19.70x average and 22.43x peak speedup over serial SW in terms of clock cycles on our platform.
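The abstract names two concrete techniques: keeping the query in constant memory (read-only and cached on chip, so broadcast reads are cheap) and keeping the intermediate dynamic-programming values in shared memory so they never round-trip through global memory. The sketch below illustrates both ideas in a minimal CUDA kernel that sweeps the Smith-Waterman matrix one anti-diagonal at a time, holding only the two previous diagonals on chip. It is not the paper's code: the kernel name, buffer layout, linear (rather than affine) gap penalty, and scoring constants are all simplifying assumptions made for illustration.

// sw_shared.cu -- minimal sketch of the two on-chip memory ideas from the
// abstract; all names and parameters are illustrative, not the paper's.
// Build with: nvcc -o sw_shared sw_shared.cu
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

#define MAX_QUERY 1024   // assumed cap so the query fits in constant memory
#define MATCH      3
#define MISMATCH  (-1)
#define GAP        2     // linear gap penalty (simplification of affine gaps)

__constant__ char d_query[MAX_QUERY];   // query served from the constant cache

// One block scores one database sequence. Threads sweep the anti-diagonals
// of the DP matrix; only the two previous diagonals are kept, in shared
// memory, so no intermediate H value ever touches global memory.
__global__ void swKernel(const char *db, int dbLen, int qLen, int *blockBest)
{
    extern __shared__ int smem[];            // 3 buffers of (qLen + 1) cells
    int *prev2 = smem;                       // anti-diagonal d - 2
    int *prev1 = smem + (qLen + 1);          // anti-diagonal d - 1
    int *curr  = smem + 2 * (qLen + 1);      // anti-diagonal d
    __shared__ int sBest;

    // Local-alignment boundary: H[i][0] = H[0][j] = 0.
    for (int i = threadIdx.x; i <= qLen; i += blockDim.x)
        prev2[i] = prev1[i] = curr[i] = 0;
    if (threadIdx.x == 0) sBest = 0;
    __syncthreads();

    int best = 0;
    for (int d = 2; d <= qLen + dbLen; ++d) {  // anti-diagonal index d = i + j
        for (int i = threadIdx.x; i <= qLen; i += blockDim.x) {
            int j = d - i;                     // database index for this cell
            int h = 0;
            if (i >= 1 && j >= 1 && j <= dbLen) {
                int s = (d_query[i - 1] == db[j - 1]) ? MATCH : MISMATCH;
                h = max(0, prev2[i - 1] + s);   // diagonal move (match/mismatch)
                h = max(h, prev1[i - 1] - GAP); // gap in the database sequence
                h = max(h, prev1[i]     - GAP); // gap in the query
            }
            curr[i] = h;
            best = max(best, h);
        }
        __syncthreads();                        // diagonal complete
        int *t = prev2; prev2 = prev1; prev1 = curr; curr = t;  // rotate buffers
    }

    atomicMax(&sBest, best);                    // block-wide maximum
    __syncthreads();
    if (threadIdx.x == 0) blockBest[blockIdx.x] = sBest;
}

int main()
{
    const char *query = "GGTTGACTA";   // toy sequences for illustration
    const char *dbseq = "TGTTACGG";
    int qLen = (int)strlen(query), dbLen = (int)strlen(dbseq);

    cudaMemcpyToSymbol(d_query, query, qLen);

    char *dDb; int *dBest;
    cudaMalloc(&dDb, dbLen);
    cudaMalloc(&dBest, sizeof(int));
    cudaMemcpy(dDb, dbseq, dbLen, cudaMemcpyHostToDevice);

    size_t shmem = 3 * (qLen + 1) * sizeof(int);
    swKernel<<<1, 128, shmem>>>(dDb, dbLen, qLen, dBest);

    int best;
    cudaMemcpy(&best, dBest, sizeof(int), cudaMemcpyDeviceToHost);
    printf("best local alignment score: %d\n", best);

    cudaFree(dDb); cudaFree(dBest);
    return 0;
}

Under these assumptions only one integer per sequence (the best score) leaves the chip; every intermediate H value lives and dies in shared memory, which is the kind of data-transfer saving the abstract describes.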

