Automatic compilation of MATLAB programs for synergistic execution on heterogeneous processors

Ashwin Prasad, Jayvant Anantpur, R. Govindarajan
Indian Institute of Science, Bangalore, India
Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ’11), 2011

@inproceedings{prasad2011automatic,

   title={Automatic compilation of MATLAB programs for synergistic execution on heterogeneous processors},

   author={Prasad, A. and Anantpur, J. and Govindarajan, R.},

   booktitle={Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation},

   pages={152--163},

   year={2011},

   organization={ACM}

}

MATLAB is an array language, initially popular for rapid prototyping, but is now being increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control flow dominated scalar regions that have an impact on the program’s execution time. Today’s computer systems have tremendous computing power in the form of traditional CPU cores and throughput oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control flow dominated regions to the CPU and the data parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to CPU and GPU, scheduling and insertion of required data transfer. The proposed compiler was implemented and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data parallel benchmarks over native execution of MATLAB.
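The abstract describes heuristics that map identified kernels to either the CPU or the GPU while minimizing the data transfer needed between them. The following sketch is a hypothetical illustration of one such greedy, cost-based mapping heuristic, written in Python for readability; the class names, cost model, and transfer penalty are assumptions for illustration only and do not reproduce MEGHA's actual algorithm.

# Hypothetical sketch: greedily assign each data-parallel kernel to the CPU
# or the GPU, charging a penalty for inputs that reside on the other device.
# All names and numbers are illustrative, not MEGHA's actual heuristic.

from dataclasses import dataclass, field

@dataclass
class Kernel:
    name: str
    cpu_time: float                         # estimated CPU execution time (ms)
    gpu_time: float                         # estimated GPU execution time (ms)
    inputs: set = field(default_factory=set)  # arrays the kernel reads

def map_kernels(kernels, transfer_cost=0.5):
    """Return a dict mapping kernel name -> 'cpu' or 'gpu'."""
    location = {}  # array name -> device holding its latest copy ('cpu' default)
    mapping = {}
    for k in kernels:
        cpu_cost = k.cpu_time + transfer_cost * sum(
            1 for a in k.inputs if location.get(a, 'cpu') == 'gpu')
        gpu_cost = k.gpu_time + transfer_cost * sum(
            1 for a in k.inputs if location.get(a, 'cpu') == 'cpu')
        target = 'cpu' if cpu_cost <= gpu_cost else 'gpu'
        mapping[k.name] = target
        for a in k.inputs:
            location[a] = target  # inputs now reside where the kernel ran
    return mapping

if __name__ == "__main__":
    ks = [
        Kernel("k1", cpu_time=10.0, gpu_time=2.0, inputs={"A", "B"}),
        Kernel("k2", cpu_time=1.0,  gpu_time=3.0, inputs={"A"}),
        Kernel("k3", cpu_time=8.0,  gpu_time=1.5, inputs={"B", "C"}),
    ]
    print(map_kernels(ks))  # e.g. {'k1': 'gpu', 'k2': 'cpu', 'k3': 'gpu'}

In this toy example the second kernel stays on the CPU because its estimated GPU gain does not outweigh the cost of moving its input back; the paper additionally formulates kernel composition itself as a constrained graph clustering problem and handles cross-basic-block data movement with data flow analysis and edge splitting.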
