Dynamic Warp Formation and Scheduling for Efficient GPU Control Flow

Wilson W. L. Fung, Ivan Sham, George Yuan, Tor M. Aamodt
Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, CANADA
In MICRO ’07: Proceedings of the 40th Annual IEEE/ACM International Symposium on Microarchitecture (2007), pp. 407-420


@inproceedings{fung2007dynamic,
   title={Dynamic warp formation and scheduling for efficient {GPU} control flow},
   author={Fung, W.W.L. and Sham, I. and Yuan, G. and Aamodt, T.M.},
   booktitle={Proceedings of the 40th Annual IEEE/ACM International Symposium on Microarchitecture},
   pages={407--420},
   year={2007},
   organization={IEEE Computer Society}
}





Recent advances in graphics processing units (GPUs) have resulted in massively parallel hardware that is easily programmable and widely available in commodity desktop computer systems. GPUs typically use single-instruction, multiple-data (SIMD) pipelines to achieve high performance with minimal overhead incurred by control hardware. Scalar threads are grouped together into SIMD batches, sometimes referred to as warps. While SIMD is ideally suited for simple programs, recent GPUs include control flow instructions in the GPU instruction set architecture, and programs using these instructions may experience reduced performance due to the way branch execution is supported by hardware. One approach is to add a stack that allows different SIMD processing elements to execute distinct program paths after a branch instruction; with this approach, however, the occurrence of diverging branch outcomes for different processing elements significantly degrades performance. In this paper, we explore mechanisms for more efficient SIMD branch execution on GPUs. We show that a realistic hardware implementation that dynamically regroups threads into new warps on the fly following the occurrence of diverging branch outcomes improves performance by an average of 20.7% for an estimated area increase of 4.7%.
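To make the idea concrete, here is a toy Python model (not the paper's simulator) contrasting the two approaches the abstract describes: a baseline in which a stack serializes the two sides of a divergent branch within each warp, versus dynamic warp formation, which pools threads headed to the same branch target across warps and repacks them into full warps. The warp size, thread counts, and path length below are illustrative assumptions.

```python
from collections import defaultdict

WARP_SIZE = 4  # assumed SIMD width for this toy model

def stack_cycles(warps, outcomes, path_len):
    # Baseline stack approach: each warp serially issues the taken side,
    # then the not-taken side, so a divergent warp pays for both paths.
    cycles = 0
    for warp in warps:
        taken = [t for t in warp if outcomes[t]]
        not_taken = [t for t in warp if not outcomes[t]]
        for side in (taken, not_taken):
            if side:  # one issue slot per instruction on each occupied side
                cycles += path_len
    return cycles

def dwf_cycles(warps, outcomes, path_len):
    # Dynamic warp formation: threads from *all* warps that branch the same
    # way are pooled and repacked into as few full warps as possible.
    pools = defaultdict(list)
    for warp in warps:
        for t in warp:
            pools[outcomes[t]].append(t)
    cycles = 0
    for threads in pools.values():
        n_warps = -(-len(threads) // WARP_SIZE)  # ceiling division
        cycles += n_warps * path_len
    return cycles

warps = [[0, 1, 2, 3], [4, 5, 6, 7]]          # two 4-thread warps
outcomes = {t: t % 2 == 0 for t in range(8)}  # even threads take the branch
print(stack_cycles(warps, outcomes, 10))  # 40: both warps issue both 10-instruction sides
print(dwf_cycles(warps, outcomes, 10))    # 20: one repacked full warp per side
```

In this example each warp is half-divergent, so the stack baseline runs every path at half SIMD utilization, while regrouping restores full warps on both sides; real gains depend on how often thread counts per target align, which is why the paper reports an average (20.7%) rather than a fixed factor.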
