Pipelined MapReduce: A Decoupled MapReduce Runtime for Shared-Memory Multi-Processors

Konstantinos Iliakis
National Technical University of Athens
National Technical University of Athens, 2017


@mastersthesis{iliakis2017pipelined,
   title={Pipelined MapReduce: A Decoupled MapReduce Runtime for Shared-Memory Multi-Processors},
   author={Iliakis, Konstantinos},
   school={National Technical University of Athens},
   year={2017}
}





Modern multi-processors embody up to hundreds of cores on a single chip in an attempt to attain TFlops/sec performance. Many sophisticated programming frameworks have emerged to facilitate the development of parallel, efficient, and scalable applications. The MapReduce programming model, after indisputably demonstrating its usability and effectiveness in large-scale distributed computation, has been adapted to the needs of shared-memory multi-core and multi-processor systems. The scope of this thesis is to enhance the traditional MapReduce architecture by decoupling Map from Combine into two separate phases. These independent phases are interleaved and executed in parallel. We argue that interleaving Map and Combine computation leads to more efficient hardware utilization and considerable run-time improvements. A high-performance shared queue data structure is introduced to pipeline intermediate data from the Map phase to the Combine phase and allow for concurrent execution. Furthermore, an inter-thread-communication-aware thread-to-CPU binding policy has been designed to minimize data-exchange overhead. The pipelined architecture is evaluated on two inherently diverse multi-core systems and demonstrates execution speedups of up to 5.34x compared to a state-of-the-art MapReduce library, Phoenix++ [1]. Nevertheless, we observe that not all types of workloads profit from our pipelined architecture, and we reason about the application characteristics that determine an application's suitability to our runtime.
