SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences

Enzo Rucci, Carlos Garcia, Guillermo Botella, Armando De Giusti, Marcelo Naiouf, Manuel Prieto-Matias
III-LIDI, CONICET, Facultad de Informática, Universidad Nacional de La Plata, 1900 La Plata (Buenos Aires), Argentina
BMC Systems Biology, 12, 2018


@article{rucci2018swifold,
   title={SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences},
   author={Rucci, Enzo and Garcia, Carlos and Botella, Guillermo and De Giusti, Armando and Naiouf, Marcelo and Prieto-Matias, Manuel},
   journal={BMC Systems Biology},
   volume={12},
   year={2018},
   publisher={BioMed Central}
}

BACKGROUND: The Smith-Waterman (SW) algorithm is the best choice for searching similar regions between two DNA or protein sequences. However, it may become impracticable in some contexts due to its high computational demands. Consequently, the computer science community has focused on the use of modern parallel architectures such as Graphics Processing Units (GPUs), Xeon Phi accelerators and Field Programmable Gate Arrays (FPGAs) to speed up large-scale workloads.

RESULTS: This paper presents and evaluates SWIFOLD: a Smith-Waterman parallel Implementation on FPGA with OpenCL for Long DNA sequences. First, we evaluate its performance and resource usage for different kernel configurations. Next, we carry out a performance comparison between our tool and other state-of-the-art implementations considering three different datasets. SWIFOLD offers the best average performance for small and medium test sets, achieving a performance that is independent of input size and sequence similarity. In addition, SWIFOLD provides competitive performance rates in comparison with GPU-based implementations on the latest GPU generation for the large dataset.

CONCLUSIONS: The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
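For readers unfamiliar with the algorithm being accelerated, the SW scoring recurrence is compact. Below is a minimal Python sketch of the textbook scoring phase with a linear gap penalty; the match/mismatch/gap values are illustrative assumptions, and this is not the paper's FPGA/OpenCL kernel, which parallelizes the same recurrence in hardware:

```python
# Textbook Smith-Waterman local alignment score (linear gap penalty).
# Scoring parameters here are illustrative, not SWIFOLD's.

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP matrix, first row/col = 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = H[i - 1][j] + gap
            left = H[i][j - 1] + gap
            H[i][j] = max(0, diag, up, left)  # 0 restarts a local alignment
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("GGTTGACTA", "TGTTACGG"))
```

Every cell update above is one "cell update" in the GCUPS (giga cell updates per second) metric the abstract reports; the quadratic number of such updates is what makes long-sequence alignment so costly and worth offloading to accelerators.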
