PEPSC: A Power-Efficient Processor for Scientific Computing

Ganesh Dasika, Ankit Sethia, Trevor Mudge, Scott Mahlke
ARM R&D, Austin, TX
20th Intl. Conference on Parallel Architectures and Compilation Techniques (PACT), 2011

@inproceedings{dasika2011pepsc,
   title={PEPSC: A Power-Efficient Processor for Scientific Computing},
   author={Dasika, Ganesh and Sethia, Ankit and Mudge, Trevor and Mahlke, Scott},
   booktitle={20th International Conference on Parallel Architectures and Compilation Techniques (PACT)},
   year={2011}
}

The rapid advancements in the computational capabilities of the graphics processing unit (GPU), as well as the deployment of general programming models for these devices, have made the vision of a desktop supercomputer a reality. It is now possible to assemble a system that provides several TFLOPs of performance on scientific applications for the cost of a high-end laptop computer. While these devices have clearly changed the landscape of computing, two central problems arise. First, GPUs are designed and optimized for graphics applications, so the performance they deliver on more general scientific and mathematical applications falls far below peak. Second, GPUs are power-hungry devices that often consume 100-300 watts, which restricts the scalability of the solution and requires expensive cooling. To combat these challenges, this paper presents the PEPSC architecture, an architecture customized for the domain of data-parallel scientific applications where power-efficiency is the central focus. PEPSC utilizes a combination of a two-dimensional single-instruction multiple-data (SIMD) datapath, an intelligent dynamic prefetching mechanism, and a configurable SIMD control approach to increase execution efficiency over conventional GPUs. A single PEPSC core has a peak performance of 120 GFLOPs while consuming 2W of power when executing modern scientific applications, which represents an increase in computation efficiency of more than 10X over existing GPUs.
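
As a quick sanity check on the headline numbers, the Python sketch below works out the implied GFLOPs-per-watt efficiency. The PEPSC figures are taken directly from the abstract; the GPU comparison point is an illustrative assumption chosen within the 100-300 W range mentioned above, not a figure reported in the paper.

# Back-of-the-envelope efficiency check using the figures quoted in the abstract.
# The GPU comparison numbers below are illustrative assumptions, not from the paper.

PEPSC_GFLOPS = 120.0   # peak performance of a single PEPSC core (from the abstract)
PEPSC_WATTS = 2.0      # power while executing scientific applications (from the abstract)

# Hypothetical contemporary (circa 2011) GPU operating point for comparison:
# assume ~1000 GFLOPs delivered at ~200 W (within the 100-300 W range cited above).
GPU_GFLOPS = 1000.0
GPU_WATTS = 200.0

pepsc_eff = PEPSC_GFLOPS / PEPSC_WATTS   # 60 GFLOPs/W
gpu_eff = GPU_GFLOPS / GPU_WATTS         # 5 GFLOPs/W under these assumptions

print(f"PEPSC efficiency: {pepsc_eff:.1f} GFLOPs/W")
print(f"GPU efficiency:   {gpu_eff:.1f} GFLOPs/W")
print(f"Ratio:            {pepsc_eff / gpu_eff:.1f}x")  # ~12x, consistent with the paper's ">10X" claim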