
Dynamic warp formation: Efficient MIMD control flow on SIMD graphics hardware

Wilson W. L. Fung, Ivan Sham, George Yuan, Tor M. Aamodt
University of British Columbia, Vancouver, B.C., Canada
ACM Transactions on Architecture and Code Optimization (TACO), Volume 6 Issue 2, June 2009

@article{fung2009dynamic,
   title={Dynamic warp formation: Efficient MIMD control flow on SIMD graphics hardware},
   author={Fung, W. W. L. and Sham, I. and Yuan, G. and Aamodt, T. M.},
   journal={ACM Transactions on Architecture and Code Optimization (TACO)},
   volume={6},
   number={2},
   pages={1--37},
   issn={1544-3566},
   year={2009},
   publisher={ACM}
}


Recent advances in graphics processing units (GPUs) have resulted in massively parallel hardware that is easily programmable and widely available in today’s desktop and notebook computer systems. GPUs typically use single-instruction, multiple-data (SIMD) pipelines to achieve high performance with minimal overhead for control hardware. Scalar threads running the same computing kernel are grouped together into SIMD batches, sometimes referred to as warps. While SIMD is ideally suited for simple programs, recent GPUs include control flow instructions in the GPU instruction set architecture, and programs using these instructions may experience reduced performance due to the way branch execution is supported in hardware. One solution is to add a stack that allows different SIMD processing elements to execute distinct program paths after a branch instruction. However, when branch outcomes diverge across processing elements, this approach significantly degrades performance. In this article, we propose dynamic warp formation and scheduling, a mechanism for more efficient SIMD branch execution on GPUs. It dynamically regroups threads into new warps on the fly when diverging branch outcomes occur. We show that a realistic hardware implementation of this mechanism improves performance by 13% on average with 256 threads per core, 24% with 512 threads, and 47% with 768 threads, for an estimated area increase of 8%.
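
The regrouping idea described in the abstract can be illustrated with a small, simulator-style sketch. The C++ below models a pool that collects threads by their post-branch program counter so that threads which took the same path are repacked into full SIMD warps. The names (Thread, Warp, WarpPool), the warp size, and the issue policy are illustrative assumptions for exposition, not the paper's hardware design.

#include <cstddef>
#include <cstdint>
#include <deque>
#include <unordered_map>
#include <utility>
#include <vector>

// Assumed SIMD width; illustrative only.
constexpr std::size_t WARP_SIZE = 32;

struct Thread {
    int      id;
    uint64_t pc;   // target program counter after the branch resolves
};

struct Warp {
    uint64_t pc = 0;             // PC shared by all threads in the warp
    std::vector<Thread> lanes;   // up to WARP_SIZE threads
};

class WarpPool {
public:
    // After a branch resolves, steer each thread into the partially
    // formed warp that matches its target PC, so threads taking the
    // same path are packed together again.
    void add_thread(const Thread& t) {
        Warp& w = forming_[t.pc];
        w.pc = t.pc;
        w.lanes.push_back(t);
        if (w.lanes.size() == WARP_SIZE) {      // warp is full: ready to issue
            ready_.push_back(std::move(w));
            forming_.erase(t.pc);
        }
    }

    // Placeholder issue policy: prefer full warps, otherwise issue any
    // partially formed warp so threads do not starve.
    bool issue(Warp& out) {
        if (!ready_.empty()) {
            out = std::move(ready_.front());
            ready_.pop_front();
            return true;
        }
        if (!forming_.empty()) {
            auto it = forming_.begin();
            out = std::move(it->second);
            forming_.erase(it);
            return true;
        }
        return false;   // no threads waiting to execute
    }

private:
    std::unordered_map<uint64_t, Warp> forming_;  // PC -> warp being formed
    std::deque<Warp> ready_;                      // full warps awaiting issue
};

Note that the full-warp-first rule above is only a placeholder: the article evaluates several warp scheduling policies (for example, a "majority" policy that prioritizes the PC shared by the most threads), and a realistic implementation must also keep each thread in its home SIMD lane so that register file accesses remain conflict-free, a constraint this sketch ignores.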