Power-Efficient Work Distribution Method for CPU-GPU Heterogeneous System

Guibin Wang, Xiaoguang Ren
Nat. Lab. for Parallel & Distrib. Process., Nat. Univ. of Defense Technol., Changsha, China
International Symposium on Parallel and Distributed Processing with Applications (ISPA), 2010

@inproceedings{wang2010power,
   title={Power-Efficient Work Distribution Method for CPU-GPU Heterogeneous System},
   author={Wang, G. and Ren, X.},
   booktitle={International Symposium on Parallel and Distributed Processing with Applications},
   pages={122--129},
   year={2010},
   organization={IEEE}
}

As systems continue to scale up, power consumption has become an increasingly severe problem for high-performance computing (HPC). A heterogeneous system, which integrates two or more kinds of processors, can better adapt to heterogeneity in applications and, in theory, provide much higher energy efficiency. Many studies have shown that heterogeneous systems are preferable to homogeneous systems in energy consumption for multi-programmed computing environments. However, how to exploit the energy efficiency (Flops/Watt) of a heterogeneous system for a single application, or even for a single phase of an application, has not been well studied. This paper proposes a power-efficient work distribution method for a single application on a CPU-GPU heterogeneous system. The proposed method coordinates inter-processor work distribution with per-processor frequency scaling to minimize energy consumption under a given scheduling length constraint. We conduct our experiments on a real system equipped with a multi-core CPU and a multi-threaded GPU. Experimental results show that, by reasonably distributing work between the CPU and GPU, the method achieves a 14% reduction in energy consumption compared with static mappings for several typical benchmarks. We also demonstrate that our method can adapt to changes in the scheduling length constraint and in hardware configurations.
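The core idea of the abstract, jointly choosing a CPU/GPU work split and per-processor frequency levels to minimize energy under a scheduling-length (deadline) constraint, can be illustrated with a small brute-force sketch. This is not the authors' implementation; the frequency levels, throughput model (rate scales linearly with frequency), and cubic power model (`P = C * f^3`) below are illustrative assumptions only.

```python
# Hypothetical sketch of joint work distribution + DVFS selection,
# minimizing energy subject to max(t_cpu, t_gpu) <= deadline.
# All constants are made-up illustrative numbers, not from the paper.
from itertools import product

CPU_FREQS = [1.0, 1.5, 2.0]       # available CPU frequency levels (GHz)
GPU_FREQS = [0.5, 0.75, 1.0]      # available GPU frequency levels (GHz)
CPU_RATE, GPU_RATE = 10.0, 40.0   # work units/s at the top frequency
CPU_CAP, GPU_CAP = 20.0, 60.0     # power coefficients (W / GHz^3)

def best_config(total_work, deadline, steps=100):
    """Search over the CPU work fraction alpha and frequency pairs,
    returning (energy, alpha, f_cpu, f_gpu) of the cheapest feasible point."""
    best = None
    for i in range(steps + 1):
        alpha = i / steps                      # fraction of work on the CPU
        for fc, fg in product(CPU_FREQS, GPU_FREQS):
            # execution time on each processor (rate ~ linear in frequency)
            tc = alpha * total_work / (CPU_RATE * fc / max(CPU_FREQS))
            tg = (1 - alpha) * total_work / (GPU_RATE * fg / max(GPU_FREQS))
            if max(tc, tg) > deadline:
                continue                       # violates the length constraint
            # energy = dynamic power (cubic in frequency) * busy time
            energy = CPU_CAP * fc**3 * tc + GPU_CAP * fg**3 * tg
            if best is None or energy < best[0]:
                best = (energy, alpha, fc, fg)
    return best
```

With a loose deadline the search tends to put all work on the GPU at its lowest frequency; as the deadline tightens, offloading a fraction of the work to the CPU (so the GPU can stay at a mid-level frequency) can yield lower total energy than running the GPU alone at full speed, which is the effect the paper's coordinated method exploits.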

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors