
Energy Auto-tuning using the Polyhedral Approach

Wei Wang, John Cavazos, Allan Porterfield
University of Delaware, 101 Smith Hall, Newark, DE 19702
4th International Workshop on Polyhedral Compilation Techniques (IMPACT 2014), 2014

@inproceedings{Wang2014impact,
   author    = {Wang, Wei and Cavazos, John and Porterfield, Allan},
   title     = {Energy Auto-tuning using the Polyhedral Approach},
   booktitle = {Proceedings of the 4th International Workshop on Polyhedral Compilation Techniques},
   editor    = {Rajopadhye, Sanjay and Verdoolaege, Sven},
   year      = {2014},
   month     = {Jan},
   address   = {Vienna, Austria}
}


As the HPC community moves into the exascale computing era, application energy has become a major concern. Tuning for energy will be essential to stay within the limited power envelope. How is tuning for lower energy related to tuning for faster execution? Understanding that relationship can guide both performance and energy tuning for exascale. In this paper, a strong correlation between the two is presented that allows tuning for execution time to be used as a proxy for energy tuning. We also show that polyhedral compilers can effectively tune a realistic application for both time and energy. For a large number of variants of the PolyBench programs and LULESH, energy consumption is strongly correlated with total execution time. Optimizations can change the power and energy required between variants, but the variant with the minimum execution time also has the lowest energy usage. The polyhedral framework was also used to optimize a 2D cardiac wave propagation simulation. Loop optimizations including fusion, tiling, vectorization, and auto-parallelization achieved a 20% speedup over the baseline OpenMP implementation, with an equivalent reduction in energy, on an Intel Sandy Bridge system. On an Intel Xeon Phi system, improvements of up to 21% in execution time and 19% in energy are obtained.
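The abstract does not include code, but the kind of source-level transformation it refers to can be illustrated with a small sketch. The hypothetical C example below shows a simple stencil-style loop nest and a manually tiled, OpenMP-parallelized version of the sort a polyhedral optimizer could generate; the array size N, tile size TILE, and the stencil itself are illustrative assumptions, not taken from the paper.

    #include <stdio.h>

    #define N    1024
    #define TILE 32          /* illustrative tile size, not from the paper */

    static double A[N][N], B[N][N];

    /* Baseline loop nest: element-wise update with poor cache reuse
     * when N is large. */
    static void update_naive(void)
    {
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                B[i][j] = 0.25 * (A[i-1][j] + A[i+1][j] +
                                  A[i][j-1] + A[i][j+1]);
    }

    /* Tiled loop nest: the same computation restructured into TILE x TILE
     * blocks and parallelized across tiles, mirroring the tiling and
     * auto-parallelization mentioned in the abstract. Better locality
     * shortens execution time and, per the paper's correlation, reduces
     * energy as well. */
    static void update_tiled(void)
    {
        #pragma omp parallel for
        for (int ii = 1; ii < N - 1; ii += TILE)
            for (int jj = 1; jj < N - 1; jj += TILE)
                for (int i = ii; i < ii + TILE && i < N - 1; i++)
                    for (int j = jj; j < jj + TILE && j < N - 1; j++)
                        B[i][j] = 0.25 * (A[i-1][j] + A[i+1][j] +
                                          A[i][j-1] + A[i][j+1]);
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                A[i][j] = (double)(i + j);

        update_naive();
        update_tiled();
        printf("B[1][1] = %f\n", B[1][1]);
        return 0;
    }

Compiled with, e.g., gcc -O3 -fopenmp, the tiled version keeps each block resident in cache while it is reused; in practice a polyhedral tool derives such schedules automatically rather than by hand.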
