A Review of CUDA, MapReduce, and Pthreads Parallel Computing Models

Kato Mivule, Benjamin Harvey, Crystal Cobb, Hoda El Sayed
Computer Science Department, Bowie State University, Bowie, MD, USA
arXiv:1410.4453 [cs.DC], (16 Oct 2014)


@ARTICLE{2014arXiv1410.4453M,
   author = {{Mivule}, K. and {Harvey}, B. and {Cobb}, C. and {El Sayed}, H.},
   title = "{A Review of CUDA, MapReduce, and Pthreads Parallel Computing Models}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1410.4453},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing},
   year = 2014,
   month = oct,
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}





The advent of high performance computing (HPC) and graphics processing units (GPUs) presents enormous computational resources for large data transactions (big data) that require parallel processing for robust and prompt data analysis. While a number of HPC frameworks have been proposed, parallel programming models present several challenges, for instance, how to fully utilize the features of each model to implement and manage parallelism via multi-threading on both CPUs and GPUs. In this paper, we provide an overview of three parallel programming models: CUDA, MapReduce, and Pthreads. The goal is to survey the literature on the subject and present a high-level view of the features of each model, so as to give high performance computing users a concise understanding of parallel programming concepts and thus faster implementation of big data projects using high performance computing.

* * *


HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors
