Graphics hardware & GPU computing: past, present, and future

David Luebke
NVIDIA Corporation
In Proceedings of Graphics Interface 2009 (2009)

@conference{luebke2009graphics,
  title={Graphics hardware \& GPU computing: past, present, and future},
  author={Luebke, D.},
  booktitle={Proceedings of Graphics Interface 2009},
  pages={6},
  year={2009},
  organization={Canadian Information Processing Society}
}

Modern GPUs have emerged as the world’s most successful parallel architecture. GPUs provide a level of massively parallel computation that was once the preserve of supercomputers like the MasPar and Connection Machine. For example, NVIDIA’s GeForce GTX 280 is a fully programmable, massively multithreaded chip with up to 240 cores and 30,720 concurrent threads, capable of performing up to a trillion operations per second. The raw computational horsepower of these chips has expanded their reach well beyond graphics. Today’s GPUs not only render video game frames, they also accelerate physics computations, video transcoding, image processing, astrophysics, protein folding, seismic exploration, computational finance, radio astronomy – the list goes on and on. Enabled by platforms like the CUDA architecture, which provides a scalable programming model, researchers across science and engineering are accelerating applications in their disciplines by up to two orders of magnitude. These success stories, and the tremendous scientific and market opportunities they open up, imply a new and diverse set of workloads that in turn carry implications for the evolution of future GPU architectures. In this talk I will discuss the evolution of GPUs from fixed-function graphics accelerators to general-purpose massively parallel processors. I will briefly motivate GPU computing and explore the transition it represents in massively parallel computing: from the domain of supercomputers to that of commodity “manycore” hardware available to all. I will discuss the goals, implications, and key abstractions of the CUDA architecture. Finally I will close with a discussion of future workloads in games, high-performance computing, and consumer applications, and their implications for future GPU architectures.
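The abstract's headline figures can be sanity-checked, and the "scalable programming model" it attributes to CUDA boils down to one key abstraction: a kernel launched as a grid of independent thread blocks, with each thread deriving its work item from its block and thread indices. Below is a minimal sketch of both, simulated sequentially in Python. The 1.296 GHz shader clock and 3 flops/cycle/core (dual-issue multiply-add plus multiply) are assumptions about the GTX 280, not figures stated in the talk.

```python
# Back-of-envelope check of the "trillion operations per second" claim.
cores = 240                # streaming processors on a GeForce GTX 280
shader_clock_hz = 1.296e9  # assumed shader clock for the GTX 280
flops_per_cycle = 3        # assumed: multiply-add (2 flops) plus a co-issued multiply

peak_flops = cores * shader_clock_hz * flops_per_cycle
print(f"peak single-precision throughput: {peak_flops / 1e9:.0f} GFLOPS")
# -> roughly 933 GFLOPS, i.e. close to a trillion operations per second

# CUDA's key abstraction, sketched on the CPU: a kernel runs as a grid of
# thread blocks, and each thread maps (block index, thread index) to one data
# element. Because blocks are independent, the same program scales across any
# number of cores. Shown here for SAXPY (y = a*x + y).
def saxpy_grid(a, x, y, threads_per_block=256):
    n = len(x)
    num_blocks = (n + threads_per_block - 1) // threads_per_block  # ceil(n / tpb)
    out = list(y)
    for block_idx in range(num_blocks):              # blocks run independently
        for thread_idx in range(threads_per_block):  # threads within a block
            i = block_idx * threads_per_block + thread_idx
            if i < n:                                # guard the ragged last block
                out[i] = a * x[i] + out[i]
    return out

print(saxpy_grid(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))
# -> [12.0, 14.0, 16.0]
```

In CUDA C the nested loops disappear: the hardware launches every (block, thread) pair, and each thread computes only its own `i = blockIdx.x * blockDim.x + threadIdx.x`.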
