Architecture-Adaptive Code Variant Tuning

Saurav Muralidharan, Amit Roy, Mary Hall, Michael Garland, Piyush Rai
University of Utah
21st International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’16), 2016


@inproceedings{muralidharan2016architecture,
   title={Architecture-Adaptive Code Variant Tuning},
   author={Muralidharan, Saurav and Roy, Amit and Hall, Mary and Garland, Michael and Rai, Piyush},
   booktitle={Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems},
   year={2016},
}

Code variants represent alternative implementations of a computation, and are common in high-performance libraries and applications to facilitate selecting the most appropriate implementation for a specific execution context (target architecture and input dataset). Automating code variant selection typically relies on machine learning to construct a model during an offline learning phase that can be quickly queried at runtime once the execution context is known. In this paper, we define a new approach called architecture-adaptive code variant tuning, where the variant selection model is learned on a set of source architectures, and then used to predict variants on a new target architecture without having to repeat the training process. We pose this as a multi-task learning problem, where each source architecture corresponds to a task; we use device features in the construction of the variant selection model. This work explores the effectiveness of multi-task learning and the impact of different strategies for device feature selection. We evaluate our approach on a set of benchmarks and a collection of six NVIDIA GPU architectures from three distinct generations. We achieve performance results that are mostly comparable to the previous approach of tuning for a single GPU architecture without having to repeat the learning phase.
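To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual multi-task model: it reduces cross-architecture variant selection to matching a new GPU's device features against source architectures profiled offline. All feature names, device values, and variant labels are hypothetical.

```python
# Illustrative sketch (not the paper's model): pick a code variant for an
# unseen GPU by matching its device features against source architectures
# that were profiled offline. All names and numbers below are hypothetical.

# Offline phase: per source architecture, the best-performing variant,
# keyed by device features (num_sms, clock_mhz, mem_bandwidth_gbs).
profiles = {
    (15, 706, 208):  "variant_shared_mem",  # e.g. a Kepler-class GPU
    (13, 1127, 336): "variant_shuffle",     # e.g. a Maxwell-class GPU
    (56, 1328, 732): "variant_warp_wide",   # e.g. a Pascal-class GPU
}

def predict_variant(target_features):
    """Return the variant of the source architecture whose normalized
    device features are closest to the target's (nearest neighbor)."""
    # Normalize each feature dimension by its maximum over source devices
    # so that large-magnitude features do not dominate the distance.
    dims = list(zip(*profiles))
    scales = [max(d) for d in dims]

    def dist(src):
        return sum(((s - t) / m) ** 2
                   for s, t, m in zip(src, target_features, scales))

    best = min(profiles, key=dist)
    return profiles[best]

# Online phase: a hypothetical new GPU with Maxwell-like device features.
print(predict_variant((24, 1189, 320)))  # -> variant_shuffle
```

The paper's multi-task formulation learns a shared model across all source architectures rather than copying one neighbor's choice, but this nearest-neighbor reduction shows how device features stand in for retraining on the target.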