MODESTO: Data-centric Analytic Optimization of Complex Stencil Programs on Heterogeneous Architectures
Department of Computer Science, ETH Zurich
Proceedings of the 29th ACM on International Conference on Supercomputing (ICS ’15), 2015
@inproceedings{Gysi:2015:MDA:2751205.2751223,
  author    = {Gysi, Tobias and Grosser, Tobias and Hoefler, Torsten},
  title     = {MODESTO: Data-centric Analytic Optimization of Complex Stencil Programs on Heterogeneous Architectures},
  booktitle = {Proceedings of the 29th ACM on International Conference on Supercomputing},
  series    = {ICS '15},
  year      = {2015},
  isbn      = {978-1-4503-3559-1},
  location  = {Newport Beach, California, USA},
  pages     = {177--186},
  numpages  = {10},
  url       = {http://doi.acm.org/10.1145/2751205.2751223},
  doi       = {10.1145/2751205.2751223},
  acmid     = {2751223},
  publisher = {ACM},
  address   = {New York, NY, USA},
  keywords  = {fusion, heterogeneous systems, performance model, stencil, tiling}
}
Code transformations such as loop tiling and loop fusion are of key importance for the efficient implementation of stencil computations. However, applying them directly to a large code base is costly and severely impacts program maintainability. While recently introduced domain-specific languages facilitate the application of such transformations, they typically still require manual tuning or auto-tuning techniques to select the transformations that yield optimal performance. In this paper, we introduce MODESTO, a model-driven stencil optimization framework that, for a given stencil program, suggests program transformations optimized for a given target architecture. We first review and categorize data-locality transformations for stencil programs and introduce a stencil algebra that allows the expression and enumeration of different implementation variants of a stencil program. Combining this algebra with a compile-time performance model, we show how to automatically tune stencil programs. We use our framework to model the STELLA library and to optimize kernels used by the COSMO atmospheric model on multi-core and hybrid CPU-GPU architectures. Compared to naive and expert-tuned variants, the automatically tuned kernels attain speedups of 2.0-3.1x and 1.0-1.8x, respectively.
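To make the fusion trade-off discussed above concrete, here is a minimal, hypothetical C++ sketch (not taken from the paper or from STELLA; the function names and the 1D setting are illustrative assumptions) of a two-stage Laplacian-of-Laplacian stencil: once with a materialized intermediate field, and once with the producer stage fused into the consumer at the cost of redundant computation.

#include <cstddef>
#include <vector>

// Unfused: materialize the full intermediate field; two sweeps over memory.
// Assumes out has the same size as in. (Illustrative 1D sketch only.)
void laplace_of_laplace_unfused(const std::vector<double>& in,
                                std::vector<double>& out) {
    const std::size_t n = in.size();
    std::vector<double> tmp(n, 0.0); // full-size temporary
    for (std::size_t i = 1; i + 1 < n; ++i)
        tmp[i] = in[i - 1] - 2.0 * in[i] + in[i + 1];
    for (std::size_t i = 2; i + 2 < n; ++i)
        out[i] = tmp[i - 1] - 2.0 * tmp[i] + tmp[i + 1];
}

// Fused: recompute the intermediate on the fly, trading redundant
// computation for data locality (no full-size temporary, one sweep).
void laplace_of_laplace_fused(const std::vector<double>& in,
                              std::vector<double>& out) {
    auto lap = [&in](std::size_t i) {
        return in[i - 1] - 2.0 * in[i] + in[i + 1];
    };
    for (std::size_t i = 2; i + 2 < in.size(); ++i)
        out[i] = lap(i - 1) - 2.0 * lap(i) + lap(i + 1);
}

int main() {
    std::vector<double> in(16, 1.0), out(16, 0.0);
    laplace_of_laplace_fused(in, out); // constant input -> zero Laplacian
}

The fused variant avoids the temporary field and a second pass over memory, but recomputes each intermediate value up to three times; deciding analytically when such a trade pays off on a given architecture is exactly the kind of question the paper's performance model addresses.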
June 3, 2016 by gysit