ALPyNA: Acceleration of Loops in Python for Novel Architectures
School of Computing Science, University of Glasgow, Glasgow, UK
6th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming (ARRAY ’19), 2019
@inproceedings{jacob2019alpyna,
title={ALPyNA: acceleration of loops in Python for novel architectures},
author={Jacob, Dejice and Singer, Jeremy},
booktitle={Proceedings of the 6th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming},
pages={25--34},
year={2019},
organization={ACM}
}
We present ALPyNA, an automatic loop parallelization framework for Python, which analyzes data dependences within nested loops and dynamically generates CUDA kernels for GPU execution. The ALPyNA system applies classical dependence analysis techniques to discover and exploit potential parallelism. The skeletal structure of the dependence graph is determined statically (if possible) or at runtime; this is combined with type and bounds information discovered at runtime, to auto-generate high-performance kernels for offload to GPU. We demonstrate speedups of up to 1000x relative to the native CPython interpreter across four array-intensive numerical Python benchmarks. Performance improvement is related to both iteration domain size and dependence graph complexity. Nevertheless, this approach promises to bring the benefits of manycore parallelism to application developers.
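To illustrate the dependence distinctions such an analysis draws, here is a hypothetical pair of NumPy-style loops (not taken from the paper): the first has no loop-carried dependences, so each iteration could in principle be mapped to its own GPU thread, while the second carries a flow dependence from one iteration to the next, which blocks naive parallelization.

```python
import numpy as np

def elementwise_independent(a):
    # Each iteration writes a distinct element of `out` and reads only
    # from `a`: no loop-carried dependence, so the iteration space is
    # a candidate for parallel GPU offload.
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        out[i] = 2.0 * a[i] + 1.0
    return out

def prefix_dependent(a):
    # Iteration i reads out[i - 1], written by the previous iteration:
    # a flow (true) dependence that forbids running iterations in
    # parallel without restructuring (e.g. into a parallel scan).
    out = np.empty_like(a)
    out[0] = a[0]
    for i in range(1, a.shape[0]):
        out[i] = out[i - 1] + a[i]
    return out
```

A dependence-analysis framework in the spirit of ALPyNA would offload loops like the first while leaving loops like the second to sequential execution (or a specialized parallel algorithm).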
September 22, 2019 by hgpu