Weak execution ordering – exploiting iterative methods on many-core GPUs
Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL
In 2010 IEEE International Symposium on Performance Analysis of Systems & Software, ISPASS 2010 (March 2010), pp. 154-163
@conference{chen2010weak,
title={Weak execution ordering-exploiting iterative methods on many-core GPUs},
author={Chen, J. and Huang, Z. and Su, F. and Peir, J.K. and Ho, J. and Peng, L.},
booktitle={Performance Analysis of Systems \& Software (ISPASS), 2010 IEEE International Symposium on},
pages={154--163},
year={2010},
organization={IEEE}
}
On NVIDIA’s many-core GPUs, there is no synchronization function among parallel thread blocks. When fine-grained data communication and synchronization are required for large-scale parallel programs executed by multiple thread blocks, frequent host synchronizations are necessary and incur significant overhead. In this paper, we investigate a class of applications that uses a chaotic version of iterative methods [5], [22] to obtain numerical solutions of partial differential equations (PDEs). Such a fast PDE solver is parallelized on GPUs across multiple thread blocks. In this parallel implementation, frequent data communication is needed between adjacent thread blocks, but a precise order of that communication is not necessary. Separate communication threads periodically exchange boundary values with adjacent thread blocks through the global memory. Since a precise ordering is not required, the computation and communication threads can be overlapped to alleviate the communication overhead. Performance measurements of two popular applications, Poisson image editing from computer graphics and shape from shading from computer vision, on a Tesla C1060 show that a speedup of 4-5 times is achievable for both applications in comparison with the solution that uses host synchronization.
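The following CUDA sketch illustrates the general idea of a block-asynchronous ("chaotic") iteration that exchanges boundary values through global memory inside a single kernel launch, rather than returning to the host to synchronize. It is not the authors' code: the paper's solver is 2-D and uses dedicated communication threads, which this 1-D Jacobi example collapses into the halo read/publish steps; names such as chaoticJacobi1D, CHUNK, SWEEPS_PER_EXCHANGE and EXCHANGES are illustrative assumptions.

// Minimal sketch, not the paper's implementation: block-asynchronous Jacobi
// relaxation for a 1-D Poisson problem u'' = f on n = numBlocks*CHUNK + 2
// grid points with fixed (Dirichlet) end values.
// Launch example: chaoticJacobi1D<<<numBlocks, CHUNK>>>(d_u, d_f, h*h);

#include <cuda_runtime.h>

#define CHUNK 256               // interior points owned by one thread block
#define SWEEPS_PER_EXCHANGE 8   // local Jacobi sweeps between boundary exchanges
#define EXCHANGES 200           // boundary exchanges per kernel launch

// u is marked volatile so that halo reads always go to global memory, where a
// neighbouring block may or may not have published a newer value yet.
__global__ void chaoticJacobi1D(volatile float *u, const float *f, float h2)
{
    __shared__ float s[CHUNK + 2];       // owned chunk plus two halo cells

    const int base = blockIdx.x * CHUNK; // global index of the left halo cell
    const int i    = threadIdx.x;        // 0 .. CHUNK-1
    const int g    = base + 1 + i;       // global index of this thread's point

    s[i + 1] = u[g];                     // load the owned interior points
    __syncthreads();

    for (int e = 0; e < EXCHANGES; ++e) {
        // Refresh halos from global memory. No inter-block ordering is
        // enforced, so these may be stale values from an earlier sweep --
        // exactly the weak ordering that chaotic iterations tolerate.
        if (i == 0)         s[0]         = u[base];
        if (i == CHUNK - 1) s[CHUNK + 1] = u[base + CHUNK + 1];
        __syncthreads();

        // Local Jacobi sweeps against the (possibly stale) halos.
        for (int sweep = 0; sweep < SWEEPS_PER_EXCHANGE; ++sweep) {
            float v = 0.5f * (s[i] + s[i + 2] - h2 * f[g]);
            __syncthreads();
            s[i + 1] = v;
            __syncthreads();
        }

        // Publish only this chunk's two boundary points so that adjacent
        // blocks can pick them up on their next halo refresh.
        if (i == 0 || i == CHUNK - 1) u[g] = s[i + 1];
        __threadfence();                 // make the publication globally visible
    }

    u[g] = s[i + 1];                     // write back the whole chunk at the end
}

Because the whole solve runs inside one kernel launch, there are no host round trips; the boundary values a block reads may be one or more sweeps old, which a chaotic iterative method tolerates by construction, trading a few extra iterations for the elimination of host synchronization overhead.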
January 26, 2011 by hgpu