Record Setting Software Implementation of DES Using CUDA

G. Agosta, A. Barenghi, F. De Santis, G. Pelosi
Dipartimento di Elettronica e Informazione, Politecnico di Milano, 20133 Milano – Italy
Seventh International Conference on Information Technology: New Generations (ITNG), 2010

@conference{agosta2010record,
   title={Record Setting Software Implementation of DES Using CUDA},
   author={Agosta, G. and Barenghi, A. and De Santis, F. and Pelosi, G.},
   booktitle={2010 Seventh International Conference on Information Technology: New Generations (ITNG)},
   pages={748--755},
   year={2010},
   organization={IEEE}
}


The increase in computational power of off-the-shelf hardware offers more and more advantageous tradeoffs among efficiency, cost and availability, thus enhancing the feasibility of cryptanalytic attacks aiming to lower the security of widely used cryptosystems. In this paper we illustrate a GPU-based software implementation of the most efficient variant of the Data Encryption Standard (DES), showing the performance of a software breaker which effectively exploits the multi-core Nvidia GT200 graphics architecture. The key point is to assess how well the structure of a symmetric key cipher can fit the GPU programming model and the single instruction multiple data architectural parallelism. The proposed breaker outperforms the fastest general purpose CPU-based implementations by an order of magnitude, and, due to the vast availability of GPUs on the market, the speedup translates into a sound improvement in the cost efficiency of the attack. As opposed to solutions based either on application specific or reconfigurable hardware, the proposed implementation does not require any specific technical knowledge from the attacker in order to be successfully built, once our implementation is available. This results in a better cost-availability tradeoff and minimizes the setup time required to mount such an attack.
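The abstract's core idea, mapping an exhaustive DES key search onto SIMD threads so that every thread runs the same instructions over a disjoint slice of the 56-bit keyspace, can be sketched as follows. This is an illustrative model, not the authors' code: the function names (`thread_key_range`, `search_chunk`) and the toy cipher in the usage note are assumptions for demonstration, and the partitioning is shown in plain Python rather than CUDA.

```python
# Illustrative sketch (not the paper's implementation): partitioning an
# exhaustive key search across GPU-style SIMD threads. Each logical thread
# derives its contiguous slice of the keyspace from its global index, so
# all threads execute the same loop body on different candidate keys.

def thread_key_range(thread_id: int, num_threads: int, key_bits: int = 56) -> range:
    """Return the contiguous slice of the key space assigned to one thread.

    DES has a 56-bit effective key, hence key_bits defaults to 56.
    """
    total = 1 << key_bits
    chunk = total // num_threads
    start = thread_id * chunk
    # The last thread also sweeps any remainder of the division.
    end = total if thread_id == num_threads - 1 else start + chunk
    return range(start, end)

def search_chunk(thread_id, num_threads, encrypt, plaintext, target_ct, key_bits=56):
    """Brute-force one thread's chunk; `encrypt(key, pt)` stands in for a DES core."""
    for key in thread_key_range(thread_id, num_threads, key_bits):
        if encrypt(key, plaintext) == target_ct:
            return key
    return None
```

On a GPU, `thread_id` would come from the block and thread indices, and `encrypt` would be the bitsliced or table-based DES kernel; the point is only that the keyspace split is computed locally per thread, with no inter-thread communication until a match is found.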

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
