Parallel Peeling Algorithms
Harvard University, School of Engineering and Applied Sciences. Supported in part by NSF grants CCF-0915922 and IIS-0964473
arXiv:1302.7014 [cs.DS], 27 Feb 2013
@article{2013arXiv1302.7014J,
  author        = {{Jiang}, J. and {Mitzenmacher}, M. and {Thaler}, J.},
  title         = "{Parallel Peeling Algorithms}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1302.7014},
  primaryClass  = "cs.DS",
  keywords      = {Computer Science - Data Structures and Algorithms},
  year          = {2013},
  month         = feb,
  adsurl        = {http://adsabs.harvard.edu/abs/2013arXiv1302.7014J},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}
The analysis of several algorithms and data structures can be framed as a peeling process on a random hypergraph: vertices of degree less than k are repeatedly removed until no such vertices remain. The remaining hypergraph is known as the k-core. In this paper, we analyze parallel peeling processes, where in each round all vertices of degree less than k are removed simultaneously. It is known that, below a specific edge density threshold, the k-core is empty with high probability. We show that, below this threshold, with high probability only (log log n)/log((k-1)(r-1)) + O(1) rounds of peeling are needed to obtain the empty k-core for r-uniform hypergraphs. Interestingly, we show that above this threshold, Omega(log n) rounds of peeling are required to find the non-empty k-core. Since most algorithms and data structures aim to peel to an empty k-core, this asymmetry appears fortunate. We verify the theoretical results both with simulation and with a parallel implementation using graphics processing units (GPUs). Our implementation provides insights into how to structure parallel peeling algorithms for efficiency in practice.
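To make the round-synchronous peeling process concrete, here is a minimal Python sketch; it is an illustration, not the authors' GPU implementation. The function names (parallel_peel, random_hypergraph), the chosen edge densities, and the roughly 0.818 threshold quoted in the comment for r = 3, k = 2 are assumptions made for this demo.

import random
from collections import defaultdict

def random_hypergraph(n, m, r, seed=0):
    """m hyperedges, each containing r distinct vertices chosen uniformly at random."""
    rng = random.Random(seed)
    return [rng.sample(range(n), r) for _ in range(m)]

def parallel_peel(n, edges, k):
    """Round-synchronous peeling: in each round, every vertex whose current degree
    (number of surviving incident hyperedges) is below k is removed, together with
    all hyperedges containing it.  Returns (rounds used, vertices left in the k-core)."""
    incident = defaultdict(set)              # vertex -> indices of surviving hyperedges
    for i, e in enumerate(edges):
        for v in e:
            incident[v].add(i)
    alive = set(range(n))
    rounds = 0
    while True:
        # identify all low-degree vertices before removing any of them,
        # mimicking one synchronous parallel round
        peel = {v for v in alive if len(incident[v]) < k}
        if not peel:
            return rounds, alive
        rounds += 1
        dead_edges = set()
        for v in peel:
            dead_edges |= incident[v]
        for i in dead_edges:                 # drop the peeled edges from all incidence lists
            for v in edges[i]:
                incident[v].discard(i)
        alive -= peel

if __name__ == "__main__":
    n, r, k = 100_000, 3, 2
    for c in (0.7, 0.9):                     # edge density m/n; threshold for r=3, k=2 is roughly 0.818
        rounds, core = parallel_peel(n, random_hypergraph(n, int(c * n), r), k)
        print(f"c = {c}: {rounds} rounds, k-core size {len(core)}")

Run below the threshold, the core should empty after only a handful of rounds, while above it a non-empty core survives and noticeably more rounds are needed, in line with the log log n versus Omega(log n) asymmetry described above.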
March 2, 2013 by hgpu