
Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor

Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, Bill Bergeron, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Andrew Prout, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther
MIT Lincoln Laboratory Supercomputing Center, Lexington, MA, U.S.A
arXiv:1707.03515 [cs.PF], (12 Jul 2017)

@article{byun2017benchmarking,
   title={Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor},
   author={Byun, Chansup and Kepner, Jeremy and Arcand, William and Bestor, David and Bergeron, Bill and Gadepally, Vijay and Houle, Michael and Hubbell, Matthew and Jones, Michael and Klein, Anna and Michaleas, Peter and Milechin, Lauren and Mullen, Julie and Prout, Andrew and Rosa, Antonio and Samsi, Siddharth and Yee, Charles and Reuther, Albert},
   year={2017},
   month={jul},
   eprint={1707.03515},
   archivePrefix={arXiv},
   primaryClass={cs.PF}
}


Knights Landing (KNL) is the code name for the second-generation Intel Xeon Phi product family. KNL has generated significant interest in the data analysis and machine learning communities because its new many-core architecture targets both of these workloads. The KNL many-core vector processor design enables it to exploit much higher levels of parallelism. At the Lincoln Laboratory Supercomputing Center (LLSC), the majority of users are running data analysis applications such as MATLAB and Octave. More recently, machine learning applications, such as the UC Berkeley Caffe deep learning framework, have become increasingly important to LLSC users. Thus, the performance of these applications on KNL systems is of high interest to LLSC users and the broader data analysis and machine learning communities. Our data analysis benchmarks of these applications on the Intel KNL processor indicate that single-core double-precision generalized matrix multiply (DGEMM) performance on KNL systems has improved by ~3.5x compared to prior Intel Xeon technologies. Our data analysis applications also achieved ~60% of the theoretical peak performance. In addition, a performance comparison of the Caffe machine learning application between two Intel CPUs, the Xeon E5 v3 and the Xeon Phi 7210, demonstrated a 2.7x improvement on a KNL node.
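For readers unfamiliar with how such numbers are typically derived: single-core DGEMM throughput is usually computed from the elapsed time of a large matrix multiply, then expressed as a fraction of a processor's theoretical peak. The short Python/NumPy sketch below is not the paper's benchmark code; the matrix size and the per-core peak figure are assumptions for illustration only.

    # Minimal sketch (illustrative, not the paper's benchmark): estimate
    # single-core DGEMM GFLOPS with NumPy and compare to an assumed peak.
    import time
    import numpy as np

    n = 4096                      # assumed matrix dimension
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b                     # double-precision GEMM via the underlying BLAS
    elapsed = time.perf_counter() - start

    flops = 2.0 * n**3            # multiply-add count for an n x n x n product
    gflops = flops / elapsed / 1e9

    # Assumed per-core peak: clock (GHz) x double-precision FLOPs per cycle.
    peak_gflops = 1.3 * 32        # hypothetical values, for illustration only
    print(f"Achieved {gflops:.1f} GFLOPS "
          f"({100 * gflops / peak_gflops:.0f}% of assumed peak)")

For a single-core measurement of this kind, the BLAS library would normally be restricted to one thread (e.g., by setting OMP_NUM_THREADS=1) so the result reflects one core rather than the whole node.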
