GPU Computing: Programming a Massively Parallel Processor
NVIDIA, GPU-Compute Software Manager
In International Symposium on Code Generation and Optimization (CGO’07) (March 2007), p. 17
@inproceedings{buck2007gpu,
  title={{GPU} computing: Programming a massively parallel processor},
  author={Buck, I.},
  booktitle={Proceedings of the International Symposium on Code Generation and Optimization (CGO'07)},
  pages={17},
  isbn={0769527647},
  year={2007},
  organization={IEEE Computer Society}
}
Many researchers have observed that general-purpose computing on programmable graphics hardware (GPUs) shows promise for solving many of the world’s compute-intensive problems, often many orders of magnitude faster than conventional CPUs. The challenge has been working within the constraints of a graphics programming environment, with limited language support, to leverage this huge performance potential. GPU computing with CUDA is a new approach to computing in which hundreds of on-chip processor cores simultaneously communicate and cooperate to solve complex computing problems, transforming the GPU into a massively parallel processor. The NVIDIA C compiler for the GPU provides a complete development environment that gives developers the tools they need to solve new problems in computation-intensive applications such as product design, data analysis, technical computing, and game physics. In this talk, I will describe how CUDA can solve compute-intensive problems and highlight the challenges of compiling parallel programs for GPUs, including the differences between graphics shaders and CUDA applications.
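The abstract itself contains no code; as a hedged illustration of the CUDA model it describes, where each of hundreds of cores runs one thread of a data-parallel kernel rather than a graphics shader, a minimal SAXPY example (y = a*x + y) using the classic CUDA runtime API might look like the following sketch. The kernel name, problem size, and launch configuration are illustrative choices, not taken from the talk.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// SAXPY kernel: each thread computes one element of y = a*x + y.
// The global thread index replaces the per-fragment identity a
// graphics shader would get implicitly from the rasterizer.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard: grid may overshoot n
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side input data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Explicit device allocation and copies, as in the early CUDA API.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);   // 2.0f * 1.0f + 2.0f = 4.0f

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The block/thread decomposition shown here is the core difference from shader programming: the developer chooses the grid shape and indexes data directly, instead of mapping computation onto pixels and textures.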
December 4, 2010 by hgpu