
A New Data Layout For Set Intersection on GPUs

Rasmus Resen Amossen, Rasmus Pagh
IT University of Copenhagen, Denmark
arXiv:1102.1003v1 [cs.DS] (4 Feb 2011)

@article{2011arXiv1102.1003R,
   author = {{Resen Amossen}, R. and {Pagh}, R.},
   title = "{A New Data Layout For Set Intersection on GPUs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1102.1003},
   primaryClass = "cs.DS",
   keywords = {Computer Science - Data Structures and Algorithms, Computer Science - Distributed, Parallel, and Cluster Computing},
   year = 2011,
   month = feb,
   adsurl = {http://adsabs.harvard.edu/abs/2011arXiv1102.1003R},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Set intersection is at the core of a variety of problems, e.g. frequent itemset mining and sparse Boolean matrix multiplication. It is well known that large speed gains can, for some computational problems, be obtained by using a graphics processing unit (GPU) as a massively parallel computing device. However, GPUs require highly regular control flow and memory access patterns, and for this reason previous GPU methods for intersecting sets have used a simple bitmap representation. This representation requires excessive space on sparse data sets. In this paper we present a novel data layout, "BatMap", that is particularly well suited for parallel processing and is compact even for sparse data. Frequent itemset mining is one of the most important applications of set intersection. As a case study on the potential of BatMaps we focus on frequent pair mining, which is a core special case of frequent itemset mining. The main finding is that our method is able to achieve speedups over both Apriori and FP-growth when the number of distinct items is large and the density of the problem instance is above 1%. Previous implementations of frequent itemset mining on GPU have not been able to show speedups over the best single-threaded implementations.
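For context, the dense bitmap representation that the abstract cites as the prior GPU approach can be sketched as a short CUDA program: each set over a universe of n items is stored as ceil(n/32) 32-bit words, and the intersection size is the popcount of the bitwise AND. This is only an illustration of that baseline, not the BatMap layout introduced in the paper; all names and sizes below are made up for the example.

    // Sketch of the dense-bitmap baseline for GPU set intersection (not BatMap).
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void bitmapIntersectCount(const unsigned *a, const unsigned *b,
                                         int numWords, unsigned long long *count) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < numWords) {
            // AND the corresponding words and count the surviving bits.
            unsigned common = a[i] & b[i];
            if (common)
                atomicAdd(count, (unsigned long long)__popc(common));
        }
    }

    int main() {
        const int numWords = 1 << 20;                  // bitmap covering ~33M items
        size_t bytes = numWords * sizeof(unsigned);

        unsigned *hA = (unsigned *)malloc(bytes), *hB = (unsigned *)malloc(bytes);
        for (int i = 0; i < numWords; ++i) {           // two arbitrary dense bitmaps
            hA[i] = 0xAAAAAAAAu;
            hB[i] = 0xFFFF0000u;
        }

        unsigned *dA, *dB;
        unsigned long long *dCount, hCount = 0;
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dCount, sizeof(unsigned long long));
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);
        cudaMemset(dCount, 0, sizeof(unsigned long long));

        int threads = 256, blocks = (numWords + threads - 1) / threads;
        bitmapIntersectCount<<<blocks, threads>>>(dA, dB, numWords, dCount);
        cudaMemcpy(&hCount, dCount, sizeof(unsigned long long), cudaMemcpyDeviceToHost);

        printf("Intersection size = %llu\n", hCount);
        cudaFree(dA); cudaFree(dB); cudaFree(dCount);
        free(hA); free(hB);
        return 0;
    }

The control flow and memory accesses here are perfectly regular, which is why bitmaps suit GPUs, but the space cost is proportional to the universe size regardless of how sparse the sets are; this is the drawback the BatMap layout is designed to avoid.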
