EigenCFA: accelerating flow analysis with GPUs
University of Utah, Salt Lake City, Utah, USA
SIGPLAN Not., Vol. 46 (January 2011), pp. 511-522
@conference{prabhu2011eigencfa,
  title        = {EigenCFA: Accelerating flow analysis with GPUs},
  author       = {Prabhu, T. and Ramalingam, S. and Might, M. and Hall, M.},
  booktitle    = {ACM SIGPLAN Notices},
  volume       = {46},
  number       = {1},
  pages        = {511--522},
  issn         = {0362-1340},
  year         = {2011},
  organization = {ACM}
}
We describe, implement and benchmark EigenCFA, an algorithm for accelerating higher-order control-flow analysis (specifically, 0CFA) with a GPU. Ultimately, our program transformations, reductions and optimizations achieve a factor of 72 speedup over an optimized CPU implementation. We began our investigation with the view that GPUs accelerate high-arithmetic, data-parallel computations with a poor tolerance for branching. Taking that perspective to its limit, we reduced Shivers’s abstract-interpretive 0CFA to an algorithm synthesized from linear-algebra operations. Central to this reduction were “abstract” Church encodings, and encodings of the syntax tree and abstract domains as vectors and matrices. A straightforward (dense-matrix) implementation of EigenCFA performed slower than a fast CPU implementation. Ultimately, sparse-matrix data structures and operations turned out to be the critical accelerants. Because control-flow graphs are sparse in practice (up to 96% empty), our control-flow matrices are also sparse, giving the sparse matrix operations an overwhelming space and speed advantage. We also achieved speedups by carefully permitting data races. The monotonicity of 0CFA makes it sound to perform analysis operations in parallel, possibly using stale or even partially-updated data.
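To make the linear-algebra reduction described above concrete, here is a minimal, illustrative sketch, not the paper's implementation: it assumes a simplified unary-CPS λ-calculus in which every call site is (f a) with f and a variables, encodes the abstract store and the syntax-selector maps as boolean matrices (the names Fun, Arg, Param, and zero_cfa are chosen here for illustration), and iterates the 0CFA transfer function as boolean matrix products until a fixed point. EigenCFA itself handles binary CPS calls, runs on sparse GPU matrices, and tolerates racy parallel updates; this dense Python/NumPy sketch only shows the shape of the encoding.

```python
import numpy as np

def bmm(a, b):
    """Boolean matrix product (OR over AND), via integer matmul and a threshold."""
    return (np.asarray(a, dtype=np.int64) @ np.asarray(b, dtype=np.int64)) > 0

def zero_cfa(Fun, Arg, Param, sigma0):
    """Iterate the matrix-encoded 0CFA transfer function to a fixed point.

    sigma : V x L, sigma[v, l] = 1 iff lambda l may flow to variable v
    Fun   : C x V, Fun[c, v]   = 1 iff v is the operator of call site c
    Arg   : C x V, Arg[c, v]   = 1 iff v is the operand of call site c
    Param : L x V, Param[l, v] = 1 iff v is the formal parameter of lambda l
    """
    sigma = np.asarray(sigma0, dtype=bool)
    while True:
        callees = bmm(Fun, sigma)          # C x L: lambdas callable at each site
        bound   = bmm(callees, Param)      # C x V: variables those lambdas bind
        flows   = bmm(Arg, sigma)          # C x L: lambdas flowing in as arguments
        delta   = bmm(bound.T, flows)      # V x L: new variable -> lambda flows
        updated = sigma | delta
        if np.array_equal(updated, sigma): # monotone over a finite lattice: terminates
            return sigma
        sigma = updated

if __name__ == "__main__":
    # One call site (f a); f is bound to (lambda (x) ...), a to (lambda (y) ...).
    # Variables: f=0, a=1, x=2, y=3.  Lambdas: 0 (binds x), 1 (binds y).
    Fun   = np.array([[1, 0, 0, 0]])
    Arg   = np.array([[0, 1, 0, 0]])
    Param = np.array([[0, 0, 1, 0],
                      [0, 0, 0, 1]])
    sigma0 = np.zeros((4, 2), dtype=bool)
    sigma0[0, 0] = True   # f may be lambda 0
    sigma0[1, 1] = True   # a may be lambda 1
    result = zero_cfa(Fun, Arg, Param, sigma0)
    print(result.astype(int))  # row 2 (variable x) acquires lambda 1
```

Because real control-flow matrices are mostly empty (the abstract cites up to 96%), replacing the dense arrays above with a sparse representation is, per the paper, what actually makes the GPU version fast.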
February 11, 2011 by hgpu