Fast Automatic Heuristic Construction Using Active Learning

William F. Ogilvie, Pavlos Petoumenos, Zheng Wang, Hugh Leather
School of Informatics, University of Edinburgh, UK
The 27th International Workshop on Languages and Compilers for Parallel Computing (LCPC), 2014


@inproceedings{ogilvie2014fast,
   title={Fast automatic heuristic construction using active learning},
   author={Ogilvie, William and Petoumenos, Pavlos and Wang, Zheng and Leather, Hugh},
   booktitle={The 27th International Workshop on Languages and Compilers for Parallel Computing (LCPC)},
   year={2014}
}





Building effective optimization heuristics is a challenging task which often takes developers several months, if not years, to complete. Predictive modelling has recently emerged as a promising solution, automatically constructing heuristics from training data. However, obtaining this data can take months per platform. This is becoming an ever more critical problem, and if no solution is found we shall be left with out-of-date heuristics which cannot extract the best performance from modern machines. In this work, we present a low-cost predictive modelling approach for automatic heuristic construction which significantly reduces this training overhead. Typically in supervised learning the training instances to evaluate are selected at random, regardless of how much useful information they carry. This wastes effort on parts of the space that contribute little to the quality of the produced heuristic. Our approach, on the other hand, uses active learning to select and focus only on the most useful training examples. We demonstrate this technique by automatically constructing a model to determine on which device to execute four parallel programs at differing problem dimensions for a representative CPU-GPU based heterogeneous system. Our methodology is remarkably simple and yet effective, making it a strong candidate for wide adoption. At high levels of classification accuracy the average learning speed-up is 3x, as compared to the state-of-the-art.
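The core idea of selecting only the most useful training examples can be illustrated with a minimal active-learning loop. The sketch below (not the authors' code) uses query-by-committee with bootstrap-trained 1-nearest-neighbour models to pick the unlabelled candidate the committee disagrees on most; the task, labels (0 = run on CPU, 1 = run on GPU), and committee size are hypothetical simplifications for illustration.

```python
# Illustrative query-by-committee active learning: evaluate next the
# candidate whose predicted device the committee is least certain about.
import random

def train_committee(labelled, n_models=5, seed=0):
    """Build a committee of 1-NN 'models', each trained on a bootstrap
    sample of the labelled data. labelled is [(problem_size, device)]."""
    rng = random.Random(seed)
    return [[rng.choice(labelled) for _ in labelled] for _ in range(n_models)]

def predict(model, x):
    """1-nearest-neighbour prediction on problem size."""
    nearest = min(model, key=lambda point: abs(point[0] - x))
    return nearest[1]

def most_informative(committee, pool):
    """Return the unlabelled candidate with the highest committee
    disagreement (0 means the vote was unanimous)."""
    def disagreement(x):
        votes = [predict(model, x) for model in committee]
        return min(votes.count(0), votes.count(1))
    return max(pool, key=disagreement)
```

In a full loop, the selected candidate would be profiled on both devices, added to the labelled set, and the committee retrained, so measurement effort concentrates near the decision boundary instead of being spread uniformly over the space.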

