Executing Dynamic Data Rate Actor Networks on OpenCL Platforms
Center for Machine Vision and Signal Analysis, University of Oulu, Finland
arXiv:1611.03226 [cs.DC], 10 Nov 2016
@article{boutellier2016executing,
  title={Executing Dynamic Data Rate Actor Networks on OpenCL Platforms},
  author={Boutellier, Jani and Hautala, Ilkka},
  journal={arXiv preprint arXiv:1611.03226},
  year={2016},
  month={nov},
  eprint={1611.03226},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
Heterogeneous computing platforms consisting of general-purpose processors (GPPs) and graphics processing units (GPUs) have become commonplace in personal mobile devices and embedded systems. For years, programming these platforms was tedious, and simultaneous use of all available GPP and GPU resources required low-level programming to ensure efficient synchronization and data transfer between processors. In the last few years, however, several high-level programming frameworks have emerged that enable programmers to describe applications by means of abstractions such as dataflow or Kahn process networks, leaving parallel execution, data transfer, and synchronization to be handled by the framework. Unfortunately, even the most advanced high-level programming frameworks have had shortcomings that limit their applicability to certain classes of applications. This paper presents a new, dataflow-flavored programming framework targeting heterogeneous platforms, which differs from previous approaches by allowing GPU-mapped actors to have data-dependent consumption of inputs and production of outputs. Such flexibility is essential for the configurable and adaptive applications that are becoming increasingly common in signal processing. Our experiments show that this feature enables up to a 5x increase in application throughput. The proposed framework is validated by application examples from the video processing and wireless communications domains. In the experiments, the framework is compared to a well-known reference framework and is shown to provide both a higher degree of flexibility and better throughput.
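To make the key idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual framework or API) of what "data-dependent consumption/production" means in a dataflow actor: unlike a static-rate (synchronous) dataflow actor, which consumes and produces a fixed number of tokens per firing, this actor's output count depends on the input data itself.

```python
from collections import deque

def dynamic_rate_actor(in_fifo, out_fifo):
    """Hypothetical dynamic-rate actor: consumes one token per firing,
    but produces a data-dependent number of output tokens (here, the
    token's value determines how many copies are emitted)."""
    token = in_fifo.popleft()
    for _ in range(token):  # production rate depends on the data
        out_fifo.append(token)

in_fifo = deque([1, 3, 0, 2])
out_fifo = deque()
while in_fifo:
    dynamic_rate_actor(in_fifo, out_fifo)

print(list(out_fifo))  # -> [1, 3, 3, 3, 2, 2]
```

A purely static dataflow model cannot express this actor, because its output rate cannot be fixed at compile time; that is the class of behavior the paper's framework supports on GPU-mapped actors.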
November 13, 2016 by hgpu