
Automatic Compilation for Heterogeneous Architectures with Single Assignment C

Miguel Sousa Diogo
University of Amsterdam, 2012

@article{diogo2012automatic,
   title={Automatic Compilation for Heterogeneous Architectures with Single Assignment C},
   author={Diogo, Miguel Sousa},
   year={2012}
}



In recent years, we have witnessed an increasing heterogeneity of computing resources. A typical laptop today combines at least one multicore processor with one general-purpose graphics processing unit (GPGPU), while supercomputer nodes typically have several of each. Exploiting all these available computing resources effectively is important, but still very challenging. In this work, we developed a system to automatically parallelise code for heterogeneous computing, using both multicore CPUs and one or several GPGPUs in parallel, based on the Single Assignment C (SaC) compiler. SaC is a functional array programming language whose compiler can perform parallelisation automatically, following a data-parallel approach. The data-parallel nature of SaC programs makes them a good match for both regular multicore processors and GPGPU accelerators. Both architectures are supported by the SaC compiler, yet they cannot be used simultaneously: at compile time, a choice must be made between either ignoring the GPGPUs and using the multiple cores, or using a single GPGPU with one host-system core. In this work we present an extension to the SaC compilation process that allows multiple target architectures, combining both existing code generation alternatives without requiring support from the programmer. We also present a corresponding runtime system that manages the multiple memories involved in heterogeneous computing resources. Finally, we evaluate our work using several benchmarks, showing significant performance improvements from heterogeneous computing.
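To make the idea behind the compiler extension more concrete, here is a minimal, hand-written sketch (not code generated by the SaC compiler, and not taken from the thesis) of how a single data-parallel array operation can be split between one GPGPU and the host CPU, with separate device and host memories handled explicitly. The function names, the element-wise operation, and the fixed 50/50 partition of the iteration space are illustrative assumptions; the system described above decides the partitioning and the memory transfers automatically, without programmer involvement.

// Hypothetical illustration: one data-parallel map over an array,
// split between a GPGPU and the host CPU (heterogeneous execution).
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void scale_gpu(const double *in, double *out, size_t n)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0 * in[i];            /* element-wise, no dependencies */
}

static void scale_cpu(const double *in, double *out, size_t n)
{
    /* the host share of the iteration space; a real system would spread
       this over all cores, e.g. with OpenMP or pthreads */
    for (size_t i = 0; i < n; i++)
        out[i] = 2.0 * in[i];
}

int main(void)
{
    const size_t n     = 1 << 20;
    const size_t n_gpu = n / 2;          /* static 50/50 split, chosen arbitrarily */
    const size_t n_cpu = n - n_gpu;

    double *in  = (double *)malloc(n * sizeof(double));
    double *out = (double *)malloc(n * sizeof(double));
    for (size_t i = 0; i < n; i++) in[i] = (double)i;

    /* device memory for the GPU's share only */
    double *d_in, *d_out;
    cudaMalloc((void **)&d_in,  n_gpu * sizeof(double));
    cudaMalloc((void **)&d_out, n_gpu * sizeof(double));
    cudaMemcpy(d_in, in, n_gpu * sizeof(double), cudaMemcpyHostToDevice);

    /* launch the GPU part asynchronously, then do the CPU part concurrently */
    scale_gpu<<<(unsigned)((n_gpu + 255) / 256), 256>>>(d_in, d_out, n_gpu);
    scale_cpu(in + n_gpu, out + n_gpu, n_cpu);

    /* copy the GPU results back; this waits for the kernel to finish */
    cudaMemcpy(out, d_out, n_gpu * sizeof(double), cudaMemcpyDeviceToHost);

    printf("out[0]=%f out[%zu]=%f\n", out[0], n - 1, out[n - 1]);

    cudaFree(d_in); cudaFree(d_out);
    free(in); free(out);
    return 0;
}

The point of the sketch is only the structure: the GPU kernel is launched asynchronously on its share of the index space, the host computes its own share in parallel, and the device portion of the result is copied back and merged. Coordinating exactly this kind of split and the associated transfers across multiple memories is what the runtime system described in the abstract has to do automatically.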
