A Study of CUDA Acceleration and Impact of Data Transfer Overhead in Heterogeneous Environment
The University of Texas at San Antonio, USA
The 7th International Workshop on Unique Chips and Systems (UCAS-7), 2012
@inproceedings{ahmed2012study,
  title={A Study of CUDA Acceleration and Impact of Data Transfer Overhead in Heterogeneous Environment},
  author={Ahmed, F. and Quirem, S. and Shin, B.J. and Son, D.J. and Woo, Y.C. and Lee, B.K. and Choi, W.},
  booktitle={Workshop on Unique Chips and Systems (UCAS-7)},
  pages={16},
  year={2012}
}
Along with the introduction of many-core GPUs, there is widespread interest in using GPUs to accelerate non-graphics applications in areas such as energy, bioinformatics, and finance. Because the CPU outperforms the GPU over a wide range of data sizes, it is important that CUDA-enabled programs properly decide when to offload work to the GPU and when to keep it on the CPU. Dynamic-programming algorithms such as the P7Viterbi algorithm of HMMER 3.0, a bioinformatics application, exhibit a high degree of parallelism in their code. Based on performance hotspot analysis, this parallelism was exploited using CUDA on a GPGPU. Running the CUDA implementation of the algorithm on a Tesla C1060 yields a 10-15X speedup, depending on the number of queries. In this paper, we focus on accelerating HMMER 3.0 with GPUs as co-processors. We also investigate the potential performance bottleneck in a CPU-GPU environment using Blowfish, a security application. Based on workload characterization and bottleneck analysis, we provide optimization methodologies to remove the bottleneck.
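To illustrate the kind of trade-off the abstract describes, the following minimal CUDA sketch (not taken from the paper) times the host-to-device copy and a placeholder kernel separately with CUDA events; a host program could compare the combined GPU cost against a measured CPU baseline for a given data size and fall back to the CPU when transfer overhead dominates. The kernel, problem size, and pinned-memory choice are illustrative assumptions, not the paper's implementation.

// Minimal sketch (assumptions, not the paper's code): measure H2D transfer
// and kernel time separately to judge when GPU offload pays off.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *d, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= a;   // stand-in for the real per-query work
}

int main() {
    const int n = 1 << 20;                 // assumed problem size
    size_t bytes = n * sizeof(float);

    float *h = nullptr, *d = nullptr;
    cudaMallocHost(&h, bytes);             // pinned host memory: faster PCIe copies
    cudaMalloc(&d, bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // transfer cost
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);        // compute cost
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float copy_ms = 0.0f, kernel_ms = 0.0f;
    cudaEventElapsedTime(&copy_ms, t0, t1);
    cudaEventElapsedTime(&kernel_ms, t1, t2);
    printf("H2D copy: %.3f ms, kernel: %.3f ms\n", copy_ms, kernel_ms);
    // A host application could compare (copy_ms + kernel_ms) against a
    // measured CPU baseline for this data size and keep the work on the CPU
    // whenever the transfer overhead dominates.

    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}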
March 9, 2012 by hgpu