
GPPE: a GPU-based Parallel Processing Environment for Large Scale Concurrent Data Streams

Jianwei Cao, Qingkui Chen, Songlin Zhuang
School of Optical-Electrical and Computer Engineering, Business School, University of Shanghai for Science and Technology, Shanghai, China
International Journal of Advancements in Computing Technology (IJACT), Vol. 6, No. 3, pp. 71-88, 2014

@article{cao2014gppe,
   title={GPPE: a GPU-based Parallel Processing Environment for Large Scale Concurrent Data Streams},
   author={Cao, Jianwei and Chen, Qingkui and Zhuang, Songlin},
   journal={International Journal of Advancements in Computing Technology (IJACT)},
   volume={6},
   number={3},
   pages={71--88},
   year={2014}
}

The extensive use of wireless sensor networks is drawing increasing attention to research on data-driven processing, yet building a system for the concurrent processing of large-scale concurrent data streams (LCDS), a typical model of the data-driven process, remains a challenge. Because the Graphics Processing Unit (GPU) has good SPMD (Single Program Multiple Data) characteristics, and LCDS, which is well suited to clustered processing, fits the SPMD model of the GPU, scalable GPU clusters can process LCDS effectively. This paper presents formal definitions of the data stream unit, the data stream, and the large-scale concurrent data stream; designs a clustering process model for LCDS; combines pipes with CPU and GPU processes to form a generalized process, creating a GPU cluster communication system that integrates the CPU, the GPU, and the MPI communication mechanism; and on this basis constructs a GPU-based Parallel Processing Environment (GPPE) for LCDS. Finally, GPPE is tested on 10080 H.264 video streams, and the performance and bottlenecks of the GPU cluster are analyzed. The results show that the GPU cluster, with good scalability, high performance, and low cost, supports data-driven applications well and is suitable for large-scale data processing in the cloud.
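To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of one way a GPU-cluster worker could consume data-stream units dispatched over MPI and process them with an SPMD CUDA kernel, mirroring the CPU-GPU-MPI integration the abstract outlines. The unit size, message tag, and kernel body (UNIT_LEN, TAG_UNIT, process_unit) are illustrative assumptions, not details from the paper.

// Compile with: nvcc -o gppe_sketch gppe_sketch.cu -lmpi   (MPI and CUDA required)
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define UNIT_LEN 4096   // assumed size of one data-stream unit, in bytes
#define TAG_UNIT 1      // assumed MPI tag for stream-unit messages

// Illustrative SPMD kernel: every thread transforms one byte of the unit.
__global__ void process_unit(unsigned char *buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = 255 - buf[i];   // placeholder per-element work
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    unsigned char host[UNIT_LEN];
    unsigned char *dev;
    cudaMalloc(&dev, UNIT_LEN);

    if (rank == 0) {
        // Dispatcher: send one dummy unit to worker 1, then collect the result.
        for (int i = 0; i < UNIT_LEN; ++i) host[i] = (unsigned char)i;
        MPI_Send(host, UNIT_LEN, MPI_UNSIGNED_CHAR, 1, TAG_UNIT, MPI_COMM_WORLD);
        MPI_Recv(host, UNIT_LEN, MPI_UNSIGNED_CHAR, 1, TAG_UNIT, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("dispatcher: received processed unit, host[0]=%d\n", host[0]);
    } else if (rank == 1) {
        // Worker: receive a unit, process it on the GPU, return the result.
        MPI_Recv(host, UNIT_LEN, MPI_UNSIGNED_CHAR, 0, TAG_UNIT, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        cudaMemcpy(dev, host, UNIT_LEN, cudaMemcpyHostToDevice);
        process_unit<<<(UNIT_LEN + 255) / 256, 256>>>(dev, UNIT_LEN);
        cudaMemcpy(host, dev, UNIT_LEN, cudaMemcpyDeviceToHost);
        MPI_Send(host, UNIT_LEN, MPI_UNSIGNED_CHAR, 0, TAG_UNIT, MPI_COMM_WORLD);
    }

    cudaFree(dev);
    MPI_Finalize();
    return 0;
}

In a full system along the lines of GPPE, the dispatcher would loop over many concurrent streams (e.g. H.264 video units) and the worker side would pipeline MPI receives with kernel launches; this sketch only shows the single round trip that such a loop would repeat.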