## A GPU Implementation of Dynamic Programming for the Optimal Polygon Triangulation

Department of Information Engineering, Hiroshima University, Kagamiyama 1-4-1, Higashi-Hiroshima, 739-8527, Japan

IEICE Transactions on Information and Systems, Vol. E96-D, No. 12, pp. 2596-2603, 2013

```bibtex
@article{ito2013gpu,
  title     = {A GPU Implementation of Dynamic Programming for the Optimal Polygon Triangulation},
  author    = {Ito, Yasuaki and Nakano, Koji},
  journal   = {IEICE Transactions on Information and Systems},
  volume    = {E96-D},
  number    = {12},
  pages     = {2596--2603},
  year      = {2013},
  publisher = {The Institute of Electronics, Information and Communication Engineers}
}
```

This paper presents a GPU (Graphics Processing Unit) implementation of dynamic programming for the optimal polygon triangulation. GPUs can now be used for general-purpose parallel computation: users can develop parallel programs running on GPUs using CUDA (Compute Unified Device Architecture), the programming architecture provided by NVIDIA. The optimal polygon triangulation problem for a convex polygon is the optimization problem of finding a triangulation with minimum total weight. It is known that this problem for a convex n-gon can be solved by dynamic programming in O(n³) time using O(n²) work space. In this paper, we propose an efficient parallel implementation of this O(n³)-time algorithm on the GPU. Our implementation uses two new ideas to accelerate the dynamic programming. The first idea (adaptive granularity) is to partition the dynamic programming algorithm into many sequential CUDA kernel calls, and to select the best parameters for the size and the number of blocks for each kernel call. The second idea (sliding and mirroring arrangements) is to arrange the working data so that accesses to the GPU's global memory are coalesced, minimizing the memory access overhead. Using these two ideas, our implementation solves the optimal polygon triangulation problem for a convex 8192-gon in 5.57 seconds on the NVIDIA GeForce GTX 680, whereas a conventional CPU implementation runs in 1939.02 seconds. Thus, our GPU implementation attains a speedup factor of 348.02.

December 15, 2013 by hgpu