MCUDA: An Efficient Implementation of CUDA Kernels for Multi-core CPUs
Center for Reliable and High-Performance Computing, University of Illinois at Urbana-Champaign
Languages and Compilers for Parallel Computing (2008), pp. 16-30
@article{stratton2008mcuda,
  title     = {MCUDA: An efficient implementation of CUDA kernels for multi-core CPUs},
  author    = {Stratton, J. and Stone, S. and Hwu, W.},
  journal   = {Languages and Compilers for Parallel Computing},
  pages     = {16--30},
  year      = {2008},
  publisher = {Springer}
}
CUDA is a data parallel programming model that supports several key abstractions – thread blocks, hierarchical memory and barrier synchronization – for writing applications. This model has proven effective in programming GPUs. In this paper we describe a framework called MCUDA, which allows CUDA programs to be executed efficiently on shared-memory multi-core CPUs. Our framework consists of a set of source-level compiler transformations and a runtime system for parallel execution. Preserving program semantics, the compiler transforms threaded SPMD functions into explicit loops, performs fission to eliminate barrier synchronizations, and converts scalar references to thread-local data into replicated vector references. We describe an implementation of this framework and demonstrate performance approaching that achievable from manually parallelized and optimized C code. With these results, we argue that CUDA can be an effective data-parallel programming model for more than just GPU architectures.
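To make the abstract's three transformations concrete, the sketch below applies them by hand to a toy kernel. It is not the MCUDA compiler's actual output; the kernel, function names, and block size are invented for illustration. It shows the SPMD body wrapped in explicit thread loops, the loop fissioned at the barrier so __syncthreads() can be removed, and a thread-local scalar that is live across the barrier replicated into a per-thread array.

    #define BLOCK_SIZE 256

    /* Hypothetical CUDA kernel (illustrative only). */
    __global__ void scale(float *out, const float *in, float alpha) {
        __shared__ float tile[BLOCK_SIZE];
        int tid = threadIdx.x;
        float v = in[blockIdx.x * BLOCK_SIZE + tid];   /* thread-local scalar */
        tile[tid] = v;
        __syncthreads();                               /* barrier */
        out[blockIdx.x * BLOCK_SIZE + tid] = alpha * tile[BLOCK_SIZE - 1 - tid] + v;
    }

    /* MCUDA-style CPU form of one thread block: the SPMD body becomes
     * explicit thread loops, fissioned at the barrier; the scalar 'v'
     * that lives across the barrier is replicated per thread. */
    void scale_block(float *out, const float *in, float alpha, int blockIdx_x) {
        float tile[BLOCK_SIZE];
        float v[BLOCK_SIZE];                           /* replicated thread-local data */

        /* Thread loop 1: everything before the barrier. */
        for (int tid = 0; tid < BLOCK_SIZE; ++tid) {
            v[tid] = in[blockIdx_x * BLOCK_SIZE + tid];
            tile[tid] = v[tid];
        }

        /* __syncthreads() is gone: loop fission guarantees every logical
         * thread finishes the first loop before any enters the second. */

        /* Thread loop 2: everything after the barrier. */
        for (int tid = 0; tid < BLOCK_SIZE; ++tid) {
            out[blockIdx_x * BLOCK_SIZE + tid] = alpha * tile[BLOCK_SIZE - 1 - tid] + v[tid];
        }
    }

The runtime system described in the paper then takes over scheduling: independent thread blocks (calls to scale_block here) can be distributed across CPU cores, since blocks in the CUDA model do not synchronize with one another.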
December 12, 2010 by hgpu