Offload Compiler Runtime for the Intel Xeon Phi Coprocessor
Intel
Intel, 2013
@article{newburn2013offload,
title={Offload Compiler Runtime for the Intel{\textregistered} Xeon Phi Coprocessor},
author={Newburn, Chris J and Deodhar, Rajiv and Dmitriev, Serguei and Murty, Ravi and Narayanaswamy, Ravi and Wiegert, John and Chinchilla, Francisco and McGuire, Russell},
year={2013}
}
The Intel Xeon Phi coprocessor platform has a software stack that enables new programming models. One such model is offload of computation from a host processor to a coprocessor that is a fully functional Intel Architecture CPU, namely, the Intel Xeon Phi coprocessor. The purpose of that offload is to improve response time and/or throughput. The attached paper presents the compiler offload software runtime infrastructure for the Intel Xeon Phi coprocessor, which includes a production C/C++ and Fortran compiler that supports offload to that coprocessor, and the underlying Intel Many Integrated Core (Intel MIC) platform software stack that enables offloading. The paper shares insights gained from a multi-year, intensive development effort. It addresses end users’ questions about offload with the compiler offload runtime, namely, why offload to a coprocessor is useful, how it is specified, and under what conditions offload is profitable. It also serves as a guide to potential third-party developers of offload runtimes, such as a gcc-based offload compiler, ports of existing commercial offloading compilers, such as CAPS, to the Intel Xeon Phi coprocessor, and third-party offload library vendors that Intel is working with, such as NAG and MAGMA. It describes the software architecture and design of the offload compiler runtime. It enumerates the key performance features for this heterogeneous computing stack, related to initialization, data movement, and invocation. Finally, it evaluates the performance impact of those features for a set of directed micro-benchmarks and larger workloads.
February 18, 2013 by hgpu
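To give a sense of how offload is specified in this model, below is a minimal sketch using Intel's offload pragmas (Language Extensions for Offload) as documented for the Intel C/C++ compiler with Xeon Phi support; the array name, size, and scaling function here are illustrative and not taken from the paper.

```c
/* Sketch of compiler-directed offload to the Xeon Phi coprocessor,
   assuming the Intel C/C++ compiler with MIC offload support.
   Names and sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

/* Code called on the coprocessor is marked so the compiler builds
   both host and coprocessor versions. */
__attribute__((target(mic)))
void scale(float *data, int n, float factor) {
    for (int i = 0; i < n; i++)
        data[i] *= factor;
}

int main(void) {
    float *data = malloc(N * sizeof(float));
    for (int i = 0; i < N; i++)
        data[i] = (float)i;

    /* The offload pragma copies `data` to the coprocessor, runs the
       enclosed block there, and copies the result back (inout).
       If no coprocessor is present, the runtime falls back to host
       execution. */
    #pragma offload target(mic) inout(data : length(N))
    {
        scale(data, N, 2.0f);
    }

    printf("data[42] = %f\n", data[42]);
    free(data);
    return 0;
}
```

The clauses on the pragma (in, out, inout, length) control the data movement whose performance the paper analyzes, alongside coprocessor initialization and invocation overheads.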