A Review of CUDA, MapReduce, and Pthreads Parallel Computing Models
Computer Science Department, Bowie State University, Bowie, MD, USA
arXiv:1410.4453 [cs.DC] (16 Oct 2014)
@article{2014arXiv1410.4453M,
author={{Mivule}, K. and {Harvey}, B. and {Cobb}, C. and {El Sayed}, H.},
title={{A Review of CUDA, MapReduce, and Pthreads Parallel Computing Models}},
journal={ArXiv e-prints},
archivePrefix={arXiv},
eprint={1410.4453},
primaryClass={cs.DC},
keywords={Computer Science – Distributed, Parallel, and Cluster Computing},
year={2014},
month={oct},
adsurl={http://adsabs.harvard.edu/abs/2014arXiv1410.4453M},
adsnote={Provided by the SAO/NASA Astrophysics Data System}
}
The advent of high performance computing (HPC) and graphics processing units (GPUs) presents an enormous computational resource for large data transactions (big data) that require parallel processing for robust and prompt data analysis. While a number of HPC frameworks have been proposed, parallel programming models present a number of challenges, for instance, how to fully utilize the features of the different programming models to implement and manage parallelism via multi-threading on both CPUs and GPUs. In this paper, we give an overview of three parallel programming models: CUDA, MapReduce, and Pthreads. The goal is to survey the literature on the subject and provide a high-level view of the features of each model, so as to give high performance computing users a concise understanding of parallel programming concepts and thus enable faster implementation of big data projects.
October 18, 2014 by hgpu