Efficient parallel lists intersection and index compression algorithms using graphics processing units

Naiyong Ao, Fan Zhang, Di Wu, Douglas S. Stones, Gang Wang, Xiaoguang Liu, Jing Liu, Sheng Lin
Nankai-Baidu Joint Lab, Nankai University, 94 Weijin Road, 300071, Tianjin, China
Proceedings of the VLDB Endowment, Volume 4 Issue 8, 2011

@article{ao2011efficient,
  title={Efficient parallel lists intersection and index compression algorithms using graphics processing units},
  author={Ao, N. and Zhang, F. and Wu, D. and Stones, D.S. and Wang, G. and Liu, X. and Liu, J. and Lin, S.},
  journal={Proceedings of the VLDB Endowment},
  volume={4},
  number={8},
  pages={470--481},
  year={2011},
  publisher={VLDB Endowment}
}

Major web search engines answer thousands of queries per second requesting information about billions of web pages. The data sizes and query loads are growing at an exponential rate. To manage the heavy workload, we consider techniques for utilizing a Graphics Processing Unit (GPU). We investigate new approaches to improve two important operations of search engines — lists intersection and index compression. For lists intersection, we develop techniques for efficient implementation of the binary search algorithm for parallel computation. We inspect some representative real-world datasets and find that a sufficiently long inverted list has an overall linear rate of increase. Based on this observation, we propose Linear Regression and Hash Segmentation techniques for contracting the search range. For index compression, the traditional d-gap based compression schemata are not well-suited for parallel computation, so we propose a Linear Regression Compression schema which has an inherent parallel structure. We further discuss how to efficiently intersect the compressed lists on a GPU. Our experimental results show significant improvements in the query processing throughput on several datasets.
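The two ideas the abstract pairs — using a list's roughly linear growth to contract the binary-search range, and replacing sequential d-gap decoding with a representation where each position decodes independently — can be sketched as follows. This is only an illustration of the general approach under stated assumptions, not the authors' implementation; all function names (`fit_line`, `predicted_search`, `lrc_encode`, `lrc_decode_at`) and the fallback/window details are hypothetical, and a real GPU version would run the per-element searches in parallel threads and pack the residuals compactly.

```python
import bisect

def fit_line(postings):
    """Least-squares fit docID ~ slope * position + intercept over a sorted list."""
    n = len(postings)
    mean_x = (n - 1) / 2
    mean_y = sum(postings) / n
    sxy = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(postings))
    sxx = sum((i - mean_x) ** 2 for i in range(n))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def predicted_search(postings, target, slope, intercept, window=16):
    """Predict the target's position from the fitted line, then binary-search
    only a small window around the prediction (hypothetical window size);
    fall back to a full binary search if the window misses."""
    guess = int((target - intercept) / slope)
    lo = max(0, guess - window)
    hi = min(len(postings), guess + window + 1)
    i = bisect.bisect_left(postings, target, lo, hi)
    if i < hi and postings[i] == target:
        return True
    j = bisect.bisect_left(postings, target)  # fallback: full range
    return j < len(postings) and postings[j] == target

def lrc_encode(postings, slope, intercept):
    """Store only the residuals from the fitted line. Unlike d-gaps, which
    need a sequential prefix sum to decode, each residual is independent."""
    return [y - round(slope * i + intercept) for i, y in enumerate(postings)]

def lrc_decode_at(residuals, slope, intercept, i):
    """Decode one position in O(1), with no dependence on earlier entries."""
    return round(slope * i + intercept) + residuals[i]
```

Because `lrc_decode_at` touches only position `i`, a GPU can decode all positions of a list simultaneously, which is the structural property the abstract attributes to its Linear Regression Compression schema.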
