
GPU Computing: Programming a Massively Parallel Processor

Ian Buck
NVIDIA, GPU-Compute Software Manager
In International Symposium on Code Generation and Optimization (CGO'07), March 2007, p. 17

@conference{buck2007gpu,
   title={GPU computing: Programming a massively parallel processor},
   author={Buck, I.},
   booktitle={Proceedings of the International Symposium on Code Generation and Optimization},
   pages={17},
   isbn={0769527647},
   year={2007},
   organization={IEEE Computer Society}
}

Many researchers have observed that general-purpose computing with programmable graphics hardware (GPUs) shows promise for solving many of the world's compute-intensive problems, often orders of magnitude faster than conventional CPUs. The challenge has been working within the constraints of a graphics programming environment and limited language support to leverage this huge performance potential. GPU computing with CUDA is a new approach to computing in which hundreds of on-chip processor cores simultaneously communicate and cooperate to solve complex computing problems, transforming the GPU into a massively parallel processor. The NVIDIA C compiler for the GPU provides a complete development environment that gives developers the tools they need to solve new problems in compute-intensive applications such as product design, data analysis, technical computing, and game physics. In this talk, I will describe how CUDA can solve compute-intensive problems and highlight the challenges of compiling parallel programs for GPUs, including the differences between graphics shaders and CUDA applications.
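Not part of the original abstract: the sketch below is a minimal CUDA C example of the programming model the abstract describes, where many lightweight threads each handle one element of a data-parallel problem. All names (vecAdd, the array variables, block size) are illustrative placeholders, not taken from the talk.

#include <cuda_runtime.h>
#include <stdio.h>

/* Each thread computes one output element; the grid of blocks covers the array. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    /* Host-side input and output buffers. */
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    /* Allocate device memory and copy the inputs to the GPU. */
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);   /* expect 3.000000 */

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

Unlike a graphics shader, which is invoked implicitly per pixel or vertex through the graphics pipeline, the CUDA kernel above is launched explicitly with a chosen grid and block configuration, which is the distinction the talk highlights.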
