
Parallel Computing Experiences with CUDA

M. Garland, S. Le Grand, J. Nickolls, J. Anderson, J. Hardwick, S. Morton, E. Phillips, Yao Zhang, V. Volkov
NVIDIA
IEEE Micro, Vol. 28, No. 4 (19 September 2008), pp. 13-27.

@article{garland2008parallel,
   title={Parallel computing experiences with CUDA},
   author={Garland, M. and Le Grand, S. and Nickolls, J. and Anderson, J. and Hardwick, J. and Morton, S. and Phillips, E. and Zhang, Y. and Volkov, V.},
   journal={IEEE Micro},
   volume={28},
   number={4},
   pages={13--27},
   issn={0272-1732},
   year={2008},
   publisher={IEEE}
}



The CUDA programming model provides a straightforward means of describing inherently parallel computations, and NVIDIA’s Tesla GPU architecture delivers high computational throughput on massively parallel problems. This article surveys experiences gained in applying CUDA to a diverse set of problems and the parallel speedups over sequential codes running on traditional CPU architectures attained by executing key computations on the GPU.
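As an illustrative sketch of the programming model the abstract refers to (not code from the article itself), the following CUDA program adds two vectors, with each thread computing one output element; the kernel name, array names, and launch parameters here are arbitrary choices for this example.

#include <cuda_runtime.h>
#include <stdio.h>

// Each thread computes one element of the output vector.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device data.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);   // expect 3.000000

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The <<<blocks, threads>>> launch configuration expresses the decomposition into thread blocks, which the GPU schedules across its multiprocessors; this grid/block hierarchy is the mechanism by which CUDA describes the inherently parallel computations mentioned in the abstract.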
