A Program Behavior Study of Block Cryptography Algorithms on GPGPU

Gu Liu, Hong An, Wenting Han, Guang Xu, Ping Yao, Mu Xu, Xiurui Hao, Yaobin Wang
Dept. of Comput. Sci. & Technol., Univ. of Sci. & Technol. of China, Hefei, China
Fourth International Conference on Frontier of Computer Science and Technology, 2009. FCST ’09


@inproceedings{liu2009program,
   title={A Program Behavior Study of Block Cryptography Algorithms on GPGPU},
   author={Liu, G. and An, H. and Han, W. and Xu, G. and Yao, P. and Xu, M. and Hao, X. and Wang, Y.},
   booktitle={Frontier of Computer Science and Technology, 2009. FCST '09. Fourth International Conference on},
   year={2009}
}








Recently, many studies have mapped cryptography algorithms onto graphics processors (GPUs) and achieved great performance. This paper does not focus on the performance of a specific program tuned with all kinds of algorithmic optimization methods, but on the intrinsic reasons, rooted in GPU architectural features, for this performance improvement. We therefore present a study of several block encryption algorithms (AES, TRI-DES, RC5, TWOFISH, and the chained block ciphers formed by their combinations) running on a GPU using CUDA. We introduce our CUDA implementations and investigate the programs' behavioral characteristics and their impact on performance in four aspects. We find that the number of threads used by a CUDA program fundamentally affects overall performance. Many block encryption algorithms can benefit from shared memory if its capacity is large enough to hold the lookup tables. Data stored in device memory should be organized deliberately to avoid performance degradation. In addition, communication between the host and the device may turn out to be the bottleneck of a program. Through these analyses we hope to find an effective way to optimize a CUDA program, as well as to reveal architectural features that would better support block encryption applications.
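The shared-memory observation above can be illustrated with a minimal CUDA sketch. This is not the paper's implementation: the table name, table size, and kernel signature are illustrative assumptions; a real block cipher would perform many table-driven rounds per block rather than the single placeholder lookup shown here.

```cuda
#include <cuda_runtime.h>

#define TABLE_WORDS 256  // assumed size of one cipher lookup table (e.g. an AES T-table)

__constant__ unsigned int d_table[TABLE_WORDS];  // master copy in device constant memory

__global__ void encrypt_blocks(const unsigned int *in, unsigned int *out, int n)
{
    // Each thread block stages the lookup table into fast on-chip shared
    // memory once, cooperatively, before any thread uses it.
    __shared__ unsigned int s_table[TABLE_WORDS];
    for (int i = threadIdx.x; i < TABLE_WORDS; i += blockDim.x)
        s_table[i] = d_table[i];
    __syncthreads();  // make all table entries visible to every thread

    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n) {
        // Placeholder "round": one table lookup per word. A real cipher
        // kernel would iterate many such lookups over the key schedule.
        out[tid] = s_table[in[tid] & 0xFFu] ^ in[tid];
    }
}
```

Because every thread in a block reads the same small table repeatedly, serving those reads from shared memory instead of device memory is exactly the kind of win the abstract describes, provided the table fits in the shared-memory capacity.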
