
A Pattern Specification and Optimizations Framework for Accelerating Scientific Computations on Heterogeneous Clusters

Linchuan Chen, Xin Huo, Gagan Agrawal
Department of Computer Science and Engineering, The Ohio State University
International Parallel & Distributed Processing Symposium (IPDPS’15), 2015

@inproceedings{chen2015pattern,
   title={A Pattern Specification and Optimizations Framework for Accelerating Scientific Computations on Heterogeneous Clusters},
   author={Chen, Linchuan and Huo, Xin and Agrawal, Gagan},
   booktitle={IEEE International Parallel and Distributed Processing Symposium (IPDPS)},
   year={2015}
}

Clusters with accelerators at each node have emerged as the dominant high-end architecture in recent years. Such systems can be extremely hard to program because of the underlying heterogeneity and the need for exploiting parallelism at multiple levels. Thus, easing parallel programming today requires not only high-level programming models, but ones from which hybrid parallelism can be extracted. In this paper, we focus on the following question: "can simple APIs be developed for several classes of popular scientific applications, to ease application development and yet maintain parallel efficiency, on clusters with accelerators?". We approach this problem by individually considering popular patterns that arise in scientific computations. By developing APIs for generalized reductions, irregular reductions, and stencil computations, we show that several complex scientific applications can be supported. We enable compact specification of these applications (40% of the code size of MPI), while also enabling parallelization across nodes and devices within a node, and with work distribution across CPU and GPU cores. We enable a number of optimizations that are normally implemented by hand by scientific programmers. We compare well against existing MPI applications while scaling across nodes, and against handwritten CUDA applications for executions on a single GPU, and yet can scale by using all parallelism simultaneously. On a cluster with 64 GPUs, we achieve speedups between 600 and 1800 over sequential (single CPU core) versions.


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
