Optimizing Xeon Phi for Interactive Data Analysis

Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, William Bergeron, Matthew Hubbell, Vijay Gadepally, Michael Houle, Michael Jones, Anne Klein, Lauren Milechin, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther
MIT Lincoln Laboratory Supercomputing Center
arXiv:1907.03195 [cs.PF], 6 Jul 2019


@misc{byun2019optimizing,
   title={Optimizing Xeon Phi for Interactive Data Analysis},
   author={Byun, Chansup and Kepner, Jeremy and Arcand, William and Bestor, David and Bergeron, William and Hubbell, Matthew and Gadepally, Vijay and Houle, Michael and Jones, Michael and Klein, Anne and Milechin, Lauren and Michaleas, Peter and Mullen, Julie and Prout, Andrew and Rosa, Antonio and Samsi, Siddharth and Yee, Charles and Reuther, Albert},
   year={2019},
   eprint={1907.03195},
   archivePrefix={arXiv},
   primaryClass={cs.PF}
}



The Intel Xeon Phi manycore processor is designed to provide high performance matrix computations of the type often performed in data analysis. Common data analysis environments include Matlab, GNU Octave, Julia, Python, and R. Achieving optimal performance of matrix operations within data analysis environments requires tuning the Xeon Phi OpenMP settings, process pinning, and memory modes. This paper describes matrix multiplication performance results for Matlab and GNU Octave over a variety of combinations of process counts, OpenMP threads, and Xeon Phi memory modes. These results indicate that using KMP_AFFINITY=granularity=fine, taskset pinning, and all2all cache memory mode allows both Matlab and GNU Octave to achieve 66% of the practical peak performance for process counts ranging from 1 to 64 and OpenMP threads ranging from 1 to 64. These settings have resulted in generally improved performance across a range of applications and have enabled our Xeon Phi system to deliver significant results in a number of real-world applications.
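The tuning above can be sketched in Python (one of the data analysis environments the paper names), using NumPy's BLAS-backed matrix multiply. The specific thread count and matrix size below are illustrative assumptions, not values from the paper; the KMP_AFFINITY setting is the one the abstract reports, and it only takes effect when NumPy is linked against Intel's OpenMP runtime.

```python
import os

# Set thread/affinity controls BEFORE importing numpy, since the BLAS
# runtime reads them at load time. Values here are illustrative.
os.environ.setdefault("OMP_NUM_THREADS", "4")
os.environ.setdefault("KMP_AFFINITY", "granularity=fine")  # per the paper

import time
import numpy as np

n = 1024
A = np.random.rand(n, n)
B = np.random.rand(n, n)

t0 = time.time()
C = A @ B          # dense matrix multiply, the benchmarked operation
dt = time.time() - t0

# Effective throughput: an n x n matmul costs ~2*n^3 floating-point ops.
gflops = 2 * n**3 / dt / 1e9
print(f"{n}x{n} matmul: {dt:.3f} s, {gflops:.1f} GFLOP/s")
```

Process pinning as described in the paper would be applied from the shell when launching this script, e.g. with `taskset` on Linux.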
