OpenMP Parallelization and Optimization of Graph-based Machine Learning Algorithms

Zhaoyi Meng, Alice Koniges, Yun (Helen) He, Samuel Williams, Thorsten Kurth, Brandon Cook, Jack Deslippe, Andrea L. Bertozzi
University of California, Los Angeles, US
UCLA Computational and Applied Mathematics Reports 16-35, 2016

@article{meng2016openmp,
   title={OpenMP Parallelization and Optimization of Graph-based Machine Learning Algorithms},
   author={Meng, Zhaoyi and Koniges, Alice and He, Yun Helen and Williams, Samuel and Kurth, Thorsten and Cook, Brandon and Deslippe, Jack and Bertozzi, Andrea L.},
   year={2016}
}

We investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and, even in serial mode, provide significant accuracy and performance advantages over traditional data classification algorithms. The methods leverage the Nyström extension to compute eigenvalues and eigenvectors of the graph Laplacian; this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to identify the hotspots and memory-access patterns of the serial codes, and we use OpenMP to parallelize the most time-consuming parts, relying on library routines where possible. We then optimize the OpenMP implementations, detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and predict behavior on emerging testbed systems based on Intel's Knights Corner and Knights Landing processors. We show both performance improvement and strong scaling behavior. A large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.
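The abstract does not include code, but as a rough illustration of the kind of OpenMP parallel-for applied to such hotspots, the sketch below parallelizes a dense matrix-vector product of the sort that dominates a Nyström-style eigenvector computation. It is a minimal example under assumed names and sizes, not code from the paper.

```c
#include <stdio.h>
#include <omp.h>

/* Sketch only: y = W * x for a dense n x m matrix W stored row-major.
 * Each row's dot product is independent, so rows are distributed across
 * threads; static scheduling keeps memory access regular. */
static void matvec(int n, int m, const double *W, const double *x, double *y)
{
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = 0; j < m; j++)
            sum += W[(size_t)i * m + j] * x[j];
        y[i] = sum;
    }
}

int main(void)
{
    /* Illustrative toy sizes and data, not from the paper. */
    const int n = 4, m = 3;
    const double W[] = {1, 2, 3,  4, 5, 6,  7, 8, 9,  10, 11, 12};
    const double x[] = {1, 1, 1};
    double y[4];

    matvec(n, m, W, x, y);
    for (int i = 0; i < n; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}
```

In practice such kernels are often delegated to tuned library routines (as the abstract notes, library routines are used where possible), with OpenMP reserved for the remaining application-specific loops.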
