Parallel Computing Experiences with CUDA
NVIDIA
IEEE Micro, Vol. 28, No. 4 (19 September 2008), pp. 13-27.
@article{garland2008parallel,
title={Parallel computing experiences with CUDA},
author={Garland, M. and Le Grand, S. and Nickolls, J. and Anderson, J. and Hardwick, J. and Morton, S. and Phillips, E. and Zhang, Y. and Volkov, V.},
journal={IEEE Micro},
volume={28},
number={4},
pages={13--27},
issn={0272-1732},
year={2008},
publisher={IEEE}
}
The CUDA programming model provides a straightforward means of describing inherently parallel computations, and NVIDIA’s Tesla GPU architecture delivers high computational throughput on massively parallel problems. This article surveys experiences gained in applying CUDA to a diverse set of problems and reports the parallel speedups, attained by executing key computations on the GPU, over sequential codes running on traditional CPU architectures.
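As a minimal illustration of the programming model the abstract refers to (this sketch is not taken from the article itself), a CUDA kernel expresses an inherently parallel computation by assigning one element of the output to each thread; the host launches a grid of thread blocks sized to cover the data:

// vector_add.cu -- illustrative sketch of the CUDA programming model:
// one thread per output element, launched as a grid of thread blocks.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against overrun
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host data
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device data
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover n elements
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                   // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The grid/block decomposition shown here is the basic pattern the article's case studies build on: the per-element guard keeps the kernel correct for problem sizes that are not a multiple of the block size.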
November 4, 2010 by hgpu