A capabilities-aware framework for using computational accelerators in data-intensive computing

M. Mustafa Rafique, Ali R. Butt, Dimitrios S. Nikolopoulos
Department of Computer Science, Virginia Tech, Blacksburg, VA 24061, United States
Journal of Parallel and Distributed Computing, Volume 71, Issue 2, Pages 185-197, 2011

@article{rafique2011capabilities,
  title={A capabilities-aware framework for using computational accelerators in data-intensive computing},
  author={Rafique, M. M. and Butt, A. R. and Nikolopoulos, D. S.},
  journal={Journal of Parallel and Distributed Computing},
  volume={71},
  number={2},
  pages={185--197},
  year={2011},
  publisher={Academic Press}
}

Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration in large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an implementation of the MapReduce programming model enhanced for accelerator-based clusters beyond the state of the art. The framework can transparently utilize heterogeneous accelerators to deliver high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment, and the first to explore the trade-offs between compute-efficient and control-efficient accelerators in data-intensive systems. Our investigation shows that the framework scales well with the number of compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to their resource capabilities, and performs 26.9% better on average than a static execution approach.
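The abstract's core idea, distributing MapReduce work in proportion to each accelerator's capabilities rather than statically, can be sketched as follows. This is an illustrative toy, not the paper's implementation; the worker names and throughput figures are hypothetical.

```python
# Illustrative sketch (not the authors' code): a manager that sizes
# MapReduce work units in proportion to each accelerator's measured
# throughput, so faster devices receive larger input shares.

def partition_by_capability(total_records, throughputs):
    """Split total_records among workers proportionally to throughput.

    throughputs: dict mapping worker name -> relative records/sec.
    Returns a dict mapping worker name -> number of records assigned.
    """
    total_tp = sum(throughputs.values())
    shares = {w: (tp * total_records) // total_tp
              for w, tp in throughputs.items()}
    # Hand any integer-rounding remainder to the fastest worker.
    remainder = total_records - sum(shares.values())
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += remainder
    return shares

# Hypothetical example: a GPU node rated at 3x the throughput of a
# Cell node gets three quarters of the input records.
shares = partition_by_capability(10_000, {"gpu0": 300, "cell0": 100})
```

A static scheme would split the input evenly regardless of device; the reported 26.9% average improvement comes from adapting shares like this to the observed capabilities of each accelerator type.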