Starchart: Hardware and Software Optimization Using Recursive Partitioning Regression Trees
Princeton University
22nd International Conference on Parallel Architectures and Compilation Techniques (PACT 2013), 2013
@inproceedings{jia2013starchart,
  title={Starchart: Hardware and Software Optimization Using Recursive Partitioning Regression Trees},
  author={Jia, Wenhao and Shaw, Kelly A. and Martonosi, Margaret},
  booktitle={22nd International Conference on Parallel Architectures and Compilation Techniques (PACT 2013)},
  year={2013}
}
Graphics processing units (GPUs) are in increasingly wide use, but significant hurdles lie in selecting the appropriate algorithms, runtime parameter settings, and hardware configurations to achieve power and performance goals with them. Exploring hardware and software choices requires time-consuming simulations or extensive real-system measurements. While some auto-tuning support has been proposed, it is often narrow in scope and heuristic in operation. This paper proposes and evaluates a statistical analysis technique, Starchart, that partitions the GPU hardware/software tuning space by automatically discerning important inflection points in design parameter values. Unlike prior methods, Starchart can identify the best parameter choices within different regions of the space. Our tool is efficient, evaluating at most 0.3% of the tuning space (often much less), and is robust enough to analyze highly variable real-system measurements, not just simulation. In one case study, we use it to automatically find platform-specific parameter settings that are 6.3x faster (for AMD) and 1.3x faster (for NVIDIA) than a single general setting. We also show how power-optimized parameter settings can save 47W (26% of total GPU power) with little performance loss. Overall, Starchart can serve as a foundation for a range of GPU compiler optimizations, auto-tuners, and programmer tools. Furthermore, because Starchart does not rely on specific GPU features, we expect it to be useful for broader CPU/GPU studies as well.
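To make the core idea concrete, the following is a minimal sketch (not the authors' Starchart implementation) of recursive partitioning regression on tuning-space samples: each sample maps a parameter setting to a measured runtime, and the tree repeatedly splits on the parameter threshold that minimizes the children's squared error, surfacing inflection points like those the paper describes. The `block_size` parameter and the synthetic runtimes below are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch of recursive partitioning regression (CART-style),
# assuming samples of the form ({param: value, ...}, measured_runtime).

def sse(ys):
    """Sum of squared errors of ys around their mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(samples, params):
    """Return (cost, param, threshold) minimizing child SSE, or None."""
    best = None
    for p in params:
        values = sorted({s[0][p] for s in samples})
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2.0           # candidate threshold between values
            left = [y for cfg, y in samples if cfg[p] <= t]
            right = [y for cfg, y in samples if cfg[p] > t]
            cost = sse(left) + sse(right)
            if best is None or cost < best[0]:
                best = (cost, p, t)
    return best

def build_tree(samples, params, min_leaf=2, depth=0, max_depth=3):
    """Recursively partition samples; leaves predict the region's mean."""
    ys = [y for _, y in samples]
    leaf = {"mean": sum(ys) / len(ys)}
    if depth >= max_depth or len(samples) < 2 * min_leaf:
        return leaf
    split = best_split(samples, params)
    if split is None or split[0] >= sse(ys):
        return leaf                        # no split improves the fit
    _, p, t = split
    left = [s for s in samples if s[0][p] <= t]
    right = [s for s in samples if s[0][p] > t]
    if len(left) < min_leaf or len(right) < min_leaf:
        return leaf
    return {"param": p, "threshold": t,
            "left": build_tree(left, params, min_leaf, depth + 1, max_depth),
            "right": build_tree(right, params, min_leaf, depth + 1, max_depth)}

# Synthetic tuning data: runtime drops sharply once block_size reaches 64,
# mimicking the kind of inflection point the tree is meant to discover.
samples = [({"block_size": b}, 10.0 if b < 64 else 2.0)
           for b in (16, 32, 48, 64, 96, 128)]
tree = build_tree(samples, ["block_size"])
print(tree["param"], tree["threshold"])    # -> block_size 56.0
```

The tree's first split falls between the sampled values 48 and 64, cleanly separating the slow and fast regions; in a real tuner each region would then be explored (or recursed into) independently, which is why only a small fraction of the full space needs to be measured.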
July 17, 2013 by hgpu