Performance Analysis of IBM Cell Broadband Engine on Sequence Alignment

Yang Song, Gregory M. Striemer, Ali Akoglu
Department of Electrical and Computer Engineering, The University of Arizona, Tucson, Arizona USA 85721
Adaptive Hardware and Systems, NASA/ESA Conference on, Vol. 0 (2009), pp. 439-446

@conference{song2009performance,
   title={Performance Analysis of IBM Cell Broadband Engine on Sequence Alignment},
   author={Song, Y. and Striemer, G.M. and Akoglu, A.},
   booktitle={Adaptive Hardware and Systems, 2009. AHS 2009. NASA/ESA Conference on},
   pages={439--446},
   year={2009},
   organization={IEEE}
}

The Smith-Waterman (SW) algorithm is the most accurate sequence alignment approach used by computational biologists for DNA matching. However, its computational complexity makes SW impractical to use in a clinical environment compared to much faster but less accurate sequence alignment techniques such as BLAST. The high performance computing community is examining alternative multi-core architectures, such as the IBM Cell Broadband Engine (BE) and Graphics Processing Units (GPUs), that address the limitations of modern cache-based designs. In this paper we investigate the performance of the IBM Cell BE architecture in the context of SW. We present an analysis of the architectural features of the Cell BE and study the architecture's fitness for accelerating sequence alignment based on its parallel processing power, interconnect structure, and communication protocols among the processing cores. We then evaluate the performance of the Cell BE against the state-of-the-art implementation of SW on NVIDIA's Tesla GPU. Results show that, given the memory requirements of the SW algorithm, the Cell BE performs much better than the Tesla GPU in terms of both cycle count and execution time. Compared to a purely serial implementation, in terms of cycle count, the state-of-the-art GPU implementation delivers a 15x speedup, while our solution achieves a 64x speedup.
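For readers unfamiliar with the algorithm the abstract refers to, the core of Smith-Waterman is a dynamic-programming recurrence over a scoring matrix. The sketch below shows only the serial scoring step; the scoring parameters (match=2, mismatch=-1, gap=-1) are illustrative assumptions and are not taken from the paper, which concerns parallelizing this computation on the Cell BE.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # H[i][j] holds the best local alignment score ending at a[i-1], b[j-1]
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = H[i - 1][j] + gap      # gap in sequence b
            left = H[i][j - 1] + gap    # gap in sequence a
            H[i][j] = max(0, diag, up, left)  # local alignment: never below zero
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("ACGT", "ACGT"))  # → 8 (four matches at +2 each)
```

The quadratic fill of H, and the anti-diagonal dependency pattern of the recurrence, are what make SW both expensive and amenable to the SIMD-style parallelization the paper evaluates on the Cell BE and Tesla GPU.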

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors