
Performance Engineering for a Medical Imaging Application on the Intel Xeon Phi Accelerator

Johannes Hofmann, Jan Treibig, Georg Hager, Gerhard Wellein
Chair of Computer Architecture, University Erlangen-Nuremberg
arXiv:1401.3615 [cs.DC] (17 Dec 2013)

@article{2014arXiv1401.3615H,
   author = {{Hofmann}, J. and {Treibig}, J. and {Hager}, G. and {Wellein}, G.},
   title = "{Performance Engineering for a Medical Imaging Application on the Intel Xeon Phi Accelerator}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1401.3615},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Performance},
   year = 2014,
   month = dec,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1401.3615H},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


We examine the Xeon Phi, which is based on Intel's Many Integrated Core architecture, for its suitability to run the FDK algorithm, the most commonly used algorithm for 3D image reconstruction in cone-beam computed tomography. We study the challenges of efficiently parallelizing the application and means to enable sensible data sharing between threads despite the lack of a shared last-level cache. Apart from parallelization, SIMD vectorization is critical for good performance on the Xeon Phi; we perform various micro-benchmarks to investigate the platform's new set of vector instructions, with a special emphasis on the newly introduced vector gather capability. We refine a previous performance model for the application and adapt it to the Xeon Phi to validate the performance of our optimized hand-written assembly implementation, as well as the performance of several different auto-vectorization approaches.
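The vector gather capability mentioned in the abstract lets a single instruction load vector elements from non-contiguous memory addresses, which is what makes indexed accesses (such as the voxel-dependent detector lookups in FDK-style backprojection) vectorizable at all. As a hedged illustration only (not code from the paper), the scalar semantics of a gather can be sketched in plain C; the helper name `gather_f32` is hypothetical:

```c
#include <assert.h>

/* Scalar sketch of SIMD vector-gather semantics: collect n elements
 * from non-contiguous locations base[idx[0]], base[idx[1]], ... into
 * a contiguous destination. A hardware gather instruction performs
 * this for a whole vector register instead of n scalar loads, so a
 * loop with indexed loads can still be SIMD-vectorized. */
void gather_f32(const float *base, const int *idx, float *dst, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = base[idx[i]];
}
```

For example, gathering with the index vector {7, 0, 3, 1} from the array {0, 10, 20, 30, 40, 50, 60, 70} yields {70, 0, 30, 10}; a compiler that recognizes this pattern can map it onto the hardware gather instruction.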
