Model-Based Warp-Level Tiling for Image Processing Programs on GPUs
University of Massachusetts Amherst, USA
arXiv:1909.07190 [cs.PL] (16 Sep 2019)
@misc{jangda2019modelbased,
  title={Model-Based Warp-Level Tiling for Image Processing Programs on GPUs},
  author={Abhinav Jangda and Arjun Guha},
  year={2019},
  eprint={1909.07190},
  archivePrefix={arXiv},
  primaryClass={cs.PL}
}
The efficient execution of image processing pipelines on GPUs is an area of active research. The state of the art involves 1) dividing portions of an image into overlapped tiles, where each tile can be processed by a single thread block, and 2) fusing loops to improve memory locality. However, the state of the art has two limitations: 1) synchronizing all threads in a thread block has a nontrivial cost, and 2) autoscheduling algorithms use an overly simplified model of GPUs. This paper presents a new approach for optimizing image processing programs on GPUs. First, we fuse loops to form overlapped tiles that can be processed by a single warp, which allows us to use lightweight warp synchronization. Second, we introduce hybrid tiling, which stores parts of the overlapped regions in thread-local registers instead of shared memory, thus increasing occupancy and providing faster access to data. Hybrid tiling leverages the warp shuffling capabilities of recent GPUs. Finally, we present an automatic loop fusion algorithm that considers several factors that affect the performance of GPU kernels. We implement these techniques in a new GPU-based backend for PolyMage, a DSL embedded in Python for describing image processing pipelines. Using standard benchmarks, our approach produces code that is 2.15x faster than manual schedules, 6.83x faster than Halide's GPU auto-scheduler, and 5.16x faster than the autotuner on an NVIDIA GTX 1080Ti. Using a Tesla V100 GPU, our approach is 1.73x faster than manual schedules, 2.67x faster than the auto-scheduler, and 8x faster than the autotuner.
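To make the hybrid-tiling idea concrete, the sketch below (not from the paper; kernel name, launch configuration, and the 1D three-point stencil are illustrative assumptions) shows the two CUDA mechanisms the abstract refers to: neighboring pixel values held in registers are exchanged within a warp via `__shfl_up_sync`/`__shfl_down_sync` rather than staged through shared memory, and synchronization stays at warp scope (`__syncwarp`) instead of a full `__syncthreads()` block barrier.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical example: a 1D three-point box blur where each warp processes
// a 32-pixel tile. Assumes the kernel is launched with blockDim.x == 32,
// i.e., one warp per thread block.
__global__ void blur_row_warp(const float *in, float *out, int width) {
    int lane = threadIdx.x & 31;        // lane index within the warp
    int x = blockIdx.x * 32 + lane;     // pixel owned by this lane
    int xc = min(x, width - 1);         // clamp so every lane stays active
                                        // for the shuffles below

    float center = in[xc];              // the pixel lives in a register,
                                        // not in shared memory

    // Fetch the left/right neighbors' registers with warp shuffles; only the
    // lanes at the tile boundary fall back to a redundant global load.
    unsigned mask = 0xffffffffu;
    float left  = __shfl_up_sync(mask, center, 1);
    float right = __shfl_down_sync(mask, center, 1);
    if (lane == 0)  left  = (xc > 0)         ? in[xc - 1] : center;
    if (lane == 31) right = (xc + 1 < width) ? in[xc + 1] : center;

    // A warp-scope barrier suffices here; no block-wide __syncthreads().
    __syncwarp(mask);

    if (x < width)
        out[x] = (left + center + right) / 3.0f;
}
```

The point of the sketch is the trade-off the abstract describes: keeping the overlapped region in registers avoids shared-memory pressure (raising occupancy), while shuffles and `__syncwarp` replace the more expensive block-wide synchronization that block-level overlapped tiling would require.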
September 22, 2019 by hgpu