Investigating Warp Size Impact in GPUs
School of Electrical and Computer Engineering, College of Engineering, University of Tehran
arXiv:1205.4967v1 [cs.DC] (21 May 2012)
@article{2012arXiv1205.4967L,
  author        = {Lashgar, Ahmad and Baniasadi, Amirali and Khonsari, Ahmad},
  title         = {{Investigating Warp Size Impact in GPUs}},
  journal       = {ArXiv e-prints},
  archivePrefix = {arXiv},
  eprint        = {1205.4967},
  primaryClass  = {cs.DC},
  keywords      = {Hardware Architecture},
  year          = {2012},
  month         = {may}
}
There are a number of design decisions that impact a GPU’s performance. Among such decisions, choosing the right warp size can deeply influence the rest of the design. Small warps reduce the performance penalty associated with branch divergence at the expense of a reduction in memory coalescing. Large warps enhance memory coalescing significantly but also increase branch divergence. This leaves designers with two choices: use small warps and invest in finding new solutions to enhance coalescing, or use large warps and address branch divergence by employing effective control-flow solutions. In this work our goal is to answer this question. We analyze warp size impact on memory coalescing and branch divergence. We use our findings to study two machines: a GPU using small warps but equipped with excellent memory coalescing (SW+) and a GPU using large warps but employing an MIMD engine immune to control-flow costs (LW+). Our evaluations show that building coalescing-enhanced small warp GPUs is a better approach compared to pursuing a control-flow enhanced large warp GPU.
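To make the trade-off the abstract weighs more concrete, here is a minimal CUDA sketch (not taken from the paper; kernel and variable names are hypothetical). The first kernel makes even and odd lanes of every warp take different branch paths, so the warp must serialize both sides with part of its lanes masked off, a cost that grows with warp size. The second kernel reads with a large stride, so the loads issued by one warp scatter across many memory transactions instead of coalescing into a few, a cost that a larger warp would otherwise help amortize on unit-stride accesses.

#include <cstdio>
#include <cuda_runtime.h>

// Branch divergence: even and odd lanes of the same warp take different
// paths, so the warp executes both branch outcomes with lanes masked.
__global__ void divergent_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (threadIdx.x % 2 == 0)
        out[i] = in[i] * 2.0f;   // even lanes
    else
        out[i] = in[i] + 1.0f;   // odd lanes
}

// Poor memory coalescing: consecutive lanes read addresses `stride`
// elements apart, so one warp's loads span many memory transactions.
__global__ void strided_kernel(const float* in, float* out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[((size_t)i * stride) % n];
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;

    dim3 block(256), grid((n + block.x - 1) / block.x);
    divergent_kernel<<<grid, block>>>(in, out, n);    // penalty scales with warp size
    strided_kernel<<<grid, block>>>(in, out, n, 32);  // defeats coalescing regardless of warp size
    cudaDeviceSynchronize();

    printf("out[1] = %f\n", out[1]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}

With 32-wide warps the divergent kernel idles half of every warp on each branch path, whereas a smaller warp would mask fewer lanes; conversely, a larger warp turns a unit-stride version of the second kernel into fewer, wider memory transactions. This is exactly the tension between the SW+ and LW+ design points studied in the paper.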
May 23, 2012 by hgpu