CuPP – A framework for easy CUDA integration
Research Group Programming Languages / Methodologies, Universität Kassel, Kassel, Germany
In IPDPS ’09: Proceedings of the 2009 IEEE International Symposium on Parallel & Distributed Processing (2009), pp. 1-8.
@conference{breitbart2009cupp,
title={CuPP -- A framework for easy CUDA integration},
author={Breitbart, J.},
booktitle={Parallel \& Distributed Processing, 2009. IPDPS 2009. IEEE International Symposium on},
pages={1--8},
issn={1530-2075},
year={2009},
organization={IEEE}
}
This paper reports on CuPP, our newly developed C++ framework designed to ease the integration of NVIDIA's GPGPU system CUDA into existing C++ applications. CuPP provides interfaces to recurring tasks that are easier to use than the standard CUDA interfaces. In this paper we concentrate on memory management and related data structures. CuPP offers both a low-level interface – mostly consisting of smart pointers and memory allocation functions for GPU memory – and a high-level interface offering a C++ STL vector wrapper and the so-called type transformations. The wrapper can be used by both device and host to automatically keep data in sync. The type transformations allow developers to write their own data structures offering the same functionality as the CuPP vector, in case a vector does not meet the needs of the application. Furthermore, the type transformations offer a way to have two different representations of the same data on host and device, respectively. We demonstrate the benefits of using CuPP by integrating it into an example application, the open-source steering library OpenSteer. In particular, for this application we develop a uniform grid data structure that uses the type transformations to solve the k-nearest neighbor problem. The paper finishes with a brief outline of another CUDA application, the Einstein@Home client, which also requires data structure redesign and thus may benefit from the type transformations and future work on CuPP.
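The vector wrapper described in the abstract keeps host and device copies of the data in sync automatically. The following rough sketch illustrates the general idea in CUDA C++; it is not the actual CuPP API, and the class name device_mirrored_vector and its members are invented purely for illustration.

// Hypothetical sketch of a host/device-mirrored vector in the spirit of the
// CuPP vector wrapper described above. Names and design are assumptions,
// not the real CuPP interface.
#include <cstddef>
#include <vector>
#include <cuda_runtime.h>

template <typename T>
class device_mirrored_vector {
public:
    explicit device_mirrored_vector(std::size_t n)
        : host_(n), device_ptr_(nullptr), dirty_(true) {
        cudaMalloc(&device_ptr_, n * sizeof(T));
    }
    ~device_mirrored_vector() { cudaFree(device_ptr_); }

    // Host-side write access marks the device copy as stale.
    T& operator[](std::size_t i) { dirty_ = true; return host_[i]; }

    // Returns a device pointer, uploading the host data first if it changed.
    T* device_data() {
        if (dirty_) {
            cudaMemcpy(device_ptr_, host_.data(), host_.size() * sizeof(T),
                       cudaMemcpyHostToDevice);
            dirty_ = false;
        }
        return device_ptr_;
    }

    // Pulls results computed on the device back into the host copy.
    void download() {
        cudaMemcpy(host_.data(), device_ptr_, host_.size() * sizeof(T),
                   cudaMemcpyDeviceToHost);
    }

    std::size_t size() const { return host_.size(); }

private:
    std::vector<T> host_;
    T* device_ptr_;
    bool dirty_;
};

In such a design, a kernel launch would request device_data() just before the call and download() after it, so the caller never issues explicit cudaMemcpy operations; this lazy-upload pattern is one plausible way to realize the "automatically keep data in sync" behavior the abstract describes.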
November 4, 2010 by hgpu