A middleware for efficient stream processing in CUDA
Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
Computer Science – Research and Development, Volume 25, Numbers 1-2, 41-49 (15 April 2010)
@article{nakagawa2010middleware,
title={A middleware for efficient stream processing in CUDA},
author={Nakagawa, S. and Ino, F. and Hagihara, K.},
journal={Computer Science-Research and Development},
volume={25},
number={1-2},
pages={41--49},
issn={1865-2034},
year={2010},
publisher={Springer}
}
This paper presents a middleware capable of out-of-order execution of kernels and data transfers for efficient stream processing in the compute unified device architecture (CUDA). Our middleware runs on CUDA-compatible graphics processing units (GPUs). Using the middleware, application developers can easily overlap kernel computation with data transfers between main memory and video memory. To maximize the efficiency of this overlap, our middleware executes commands such as kernel invocations and data transfers out of order. This run-time capability can be exploited simply by replacing the original CUDA API calls with our API calls. We have applied the middleware to a practical application to evaluate its run-time overhead. The middleware reduces execution time by 19% and makes it possible to process data sets too large to fit entirely in video memory.
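The overlap the paper automates corresponds to the standard CUDA double-buffering pattern: each data chunk's host-to-device transfer, kernel launch, and device-to-host transfer are issued asynchronously into one of two streams, so one chunk's transfer can proceed while another chunk's kernel runs. A minimal sketch of that pattern is below; the kernel, buffer sizes, and function names are illustrative assumptions, not the middleware's actual API (which, per the abstract, wraps the CUDA calls instead).

```cuda
#include <cuda_runtime.h>

// Placeholder kernel standing in for the application's computation.
__global__ void process(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

// Process `total` floats in chunks, overlapping transfers and kernels
// with two streams. For true copy/compute overlap, h_data must be
// page-locked (allocated with cudaHostAlloc / cudaMallocHost).
void stream_process(float *h_data, int total, int chunk) {
    cudaStream_t s[2];
    float *d_buf[2];
    for (int k = 0; k < 2; ++k) {
        cudaStreamCreate(&s[k]);
        cudaMalloc(&d_buf[k], chunk * sizeof(float));
    }
    for (int off = 0, k = 0; off < total; off += chunk, k ^= 1) {
        int n = (total - off < chunk) ? (total - off) : chunk;
        // Copy and kernel share a stream, so they run in order for
        // this chunk; alternating streams lets chunk i+1's transfer
        // overlap chunk i's kernel.
        cudaMemcpyAsync(d_buf[k], h_data + off, n * sizeof(float),
                        cudaMemcpyHostToDevice, s[k]);
        process<<<(n + 255) / 256, 256, 0, s[k]>>>(d_buf[k], n);
        cudaMemcpyAsync(h_data + off, d_buf[k], n * sizeof(float),
                        cudaMemcpyDeviceToHost, s[k]);
    }
    cudaDeviceSynchronize();
    for (int k = 0; k < 2; ++k) {
        cudaFree(d_buf[k]);
        cudaStreamDestroy(s[k]);
    }
}
```

Hand-written code like this fixes the issue order at compile time; the paper's contribution is to reorder such commands at run time and to hide this bookkeeping behind drop-in replacements for the CUDA API calls.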
November 20, 2010 by hgpu