Divergence Analysis and Optimizations

Bruno Coutinho, Diogo Sampaio, Fernando Magno Quintao Pereira, Wagner Meira Jr.
Department of Computer Science, Universidade Federal de Minas Gerais, Brazil
The Twentieth International Conference on Parallel Architectures and Compilation Techniques (PACT), 2011


@inproceedings{coutinho2011divergence,
  title={Divergence Analysis and Optimizations},
  author={Coutinho, B. and Sampaio, D. and Pereira, F. M. Q. and Meira Jr., W.},
  booktitle={The Twentieth International Conference on Parallel Architectures and Compilation Techniques (PACT)},
  year={2011}
}





The growing interest in GPU programming has brought renewed attention to the Single Instruction Multiple Data (SIMD) execution model. SIMD machines give application developers tremendous computational power; however, the model also brings restrictions. In particular, processing elements (PEs) execute in lock-step, and may lose performance due to divergences caused by conditional branches. In the face of divergences, some PEs execute while others wait; this alternation ends when they reach a synchronization point. In this paper we introduce divergence analysis, a static analysis that determines which program variables will have the same values for every PE. This analysis is useful in three different ways: it improves the translation of SIMD code to non-SIMD CPUs, it helps developers to manually improve their SIMD applications, and it also guides the compiler in the optimization of SIMD programs. We demonstrate this last point by introducing branch fusion, a new compiler optimization that identifies, via a gene sequencing algorithm, chains of similarities between divergent program paths, and weaves these paths together as much as possible. Our implementation has been accepted in the Ocelot open-source CUDA compiler, and is publicly available. We have tested it on many industrial-strength GPU benchmarks, including Rodinia and the Nvidia SDK. Our divergence analysis has a 34% false-positive rate, compared to the results of a dynamic profiler. Our automatic optimization adds a 3% speed-up onto parallel quicksort, a heavily optimized benchmark. Our manual optimizations extend this number to over 10%.
