
End-to-end Mapping in Heterogeneous Systems Using Graph Representation Learning

Yao Xiao, Guixiang Ma, Nesreen K. Ahmed, Mihai Capota, Theodore Willke, Shahin Nazarian, Paul Bogdan
USC, Los Angeles, CA, 90089
arXiv:2204.11981 [cs.LG] (25 Apr 2022)

@misc{https://doi.org/10.48550/arxiv.2204.11981,
   doi={10.48550/ARXIV.2204.11981},
   url={https://arxiv.org/abs/2204.11981},
   author={Xiao, Yao and Ma, Guixiang and Ahmed, Nesreen K. and Capota, Mihai and Willke, Theodore and Nazarian, Shahin and Bogdan, Paul},
   keywords={Machine Learning (cs.LG), Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
   title={End-to-end Mapping in Heterogeneous Systems Using Graph Representation Learning},
   publisher={arXiv},
   year={2022},
   copyright={arXiv.org perpetual, non-exclusive license}
}


To enable heterogeneous computing systems with autonomous programming and optimization capabilities, we propose a unified, end-to-end, programmable graph representation learning (PGL) framework. PGL mines the complexity of high-level programs down to a universal intermediate representation, extracts specific computational patterns, and predicts which code segments would run best on a specific core of a heterogeneous hardware platform. The framework extracts multi-fractal topological features from code graphs, utilizes graph autoencoders to learn how to partition the graph into computational kernels, and exploits graph neural networks (GNNs) to predict the correct assignment to a processor type. In the evaluation, we validate the PGL framework and demonstrate a maximum speedup of 6.42x compared to thread-based execution, and 2.02x compared to the state-of-the-art technique.
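The final stage of the pipeline described above can be sketched as a node-classification problem: each node of a program's dataflow graph is scored for a CPU-vs-GPU assignment by a graph convolution layer. The following is a minimal illustrative sketch, not the paper's implementation; the graph, feature dimensions, and weights are all hypothetical placeholders (the actual PGL framework learns these end-to-end).

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, H, W):
    """One GCN step: aggregate neighbor features, project, then ReLU."""
    return np.maximum(A_norm @ H @ W, 0.0)

def device_scores(H, W_out):
    """Per-node softmax over two device classes, e.g. [CPU, GPU]."""
    logits = H @ W_out
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Toy 4-node kernel graph (undirected adjacency); features could encode
# instruction-mix statistics or the multi-fractal descriptors PGL uses.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.standard_normal((4, 8))     # node feature matrix (untrained)
W1 = rng.standard_normal((8, 16))   # layer weights (untrained)
W_out = rng.standard_normal((16, 2))

H1 = gcn_layer(normalized_adjacency(A), H, W1)
probs = device_scores(H1, W_out)    # one (CPU, GPU) distribution per node
print(probs.shape)
```

With untrained random weights the predicted distributions are arbitrary; the sketch only shows the shape of the computation, where training would fit `W1` and `W_out` against profiled per-device runtimes.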

* * *

HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors
