FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search

Yiqiu Wang, Anshumali Shrivastava, Junghee Ryu
Rice University, Houston, TX
arXiv:1709.01190 [cs.DS], 4 Sep 2017

@article{wang2017flash,
   title={FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search},
   author={Wang, Yiqiu and Shrivastava, Anshumali and Ryu, Junghee},
   year={2017},
   month={sep},
   eprint={1709.01190},
   archivePrefix={arXiv},
   primaryClass={cs.DS}
}


We present FLASH (Fast LSH Algorithm for Similarity search accelerated with HPC (High-Performance Computing)), a similarity search system for ultra-high dimensional datasets on a single machine, which does not require similarity computation. Our system is an auspicious illustration of the power of randomized algorithms carefully tailored for high-performance computing platforms. We leverage an LSH-style randomized indexing procedure and combine it with several principled techniques, such as reservoir sampling, recent advances in one-pass minwise hashing, and count-based estimations. The combination, while retaining sound theoretical guarantees, reduces both the computational and the parallelization overhead of our proposal. We provide CPU and hybrid CPU-GPU implementations of FLASH for replicability of our results. We evaluate FLASH on several real high-dimensional datasets from different domains, including text, malicious URLs, click-through prediction, and social networks. Our experiments shed new light on the difficulties associated with datasets having several million dimensions. Current state-of-the-art implementations either fail at the presented scale or are orders of magnitude slower than our system. FLASH is capable of computing an approximate k-NN graph, from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than 10 seconds. Computing the full k-NN graph on the webspam dataset in less than 10 seconds with a brute-force (n^2 D) approach would require at least 20 TFLOPS. We hope that FLASH gets adopted in practice.
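
To make the combination described in the abstract concrete, the sketch below shows, in plain Python rather than the paper's CPU-GPU code, how LSH-style indexing with minwise hashes, fixed-size reservoir-sampled buckets, and count-based candidate ranking can fit together without ever computing a similarity. It is only a minimal illustration under assumed parameters (K, L, RESERVOIR_SIZE) and uses a simple per-permutation minhash in place of the one-pass densified minwise hashing the paper relies on; none of the names below come from the FLASH codebase.

# Minimal, illustrative sketch (not the FLASH implementation): minwise-hash
# LSH tables with fixed-size reservoir-sampled buckets and count-based
# candidate ranking. K, L and RESERVOIR_SIZE are assumed example parameters.
import random
from collections import Counter, defaultdict

K = 2                # minwise hashes concatenated per table (controls precision)
L = 16               # number of hash tables (controls recall)
RESERVOIR_SIZE = 32  # fixed bucket capacity, maintained by reservoir sampling
PRIME = (1 << 61) - 1

random.seed(0)
# Parameters of K * L universal hash functions used for minwise hashing.
_params = [(random.randrange(1, PRIME), random.randrange(PRIME))
           for _ in range(K * L)]

def minhash_signature(nonzero_indices):
    # One minwise hash value per hash function, over a sparse binary vector.
    return [min((a * idx + b) % PRIME for idx in nonzero_indices)
            for a, b in _params]

def bucket_keys(signature):
    # Concatenate K hash values per table to form one bucket key per table.
    return [tuple(signature[t * K:(t + 1) * K]) for t in range(L)]

class FlashLikeIndex:
    def __init__(self):
        self.tables = [defaultdict(list) for _ in range(L)]  # key -> reservoir
        self.seen = [defaultdict(int) for _ in range(L)]     # key -> #inserts

    def insert(self, item_id, nonzero_indices):
        for t, key in enumerate(bucket_keys(minhash_signature(nonzero_indices))):
            bucket = self.tables[t][key]
            self.seen[t][key] += 1
            if len(bucket) < RESERVOIR_SIZE:
                bucket.append(item_id)                   # reservoir not yet full
            else:
                j = random.randrange(self.seen[t][key])  # classic reservoir sampling
                if j < RESERVOIR_SIZE:
                    bucket[j] = item_id

    def query(self, nonzero_indices, k=10):
        # Rank candidates by how many tables they collide in (count-based
        # estimation); no explicit similarity is ever computed.
        counts = Counter()
        for t, key in enumerate(bucket_keys(minhash_signature(nonzero_indices))):
            counts.update(self.tables[t].get(key, ()))
        return [item for item, _ in counts.most_common(k)]

# Usage: items are sparse binary vectors given by their nonzero feature indices.
index = FlashLikeIndex()
index.insert("doc_a", [3, 17, 905, 12001])
index.insert("doc_b", [3, 17, 906, 12001])
index.insert("doc_c", [5000, 60001])
print(index.query([3, 17, 905, 12002], k=2))  # doc_a, sharing the most features, is the most likely top result

Ranking by bucket-collision counts works because the expected number of tables in which two items collide is monotone in their resemblance, so the counts alone suffice to order candidates. The fixed reservoir size keeps every bucket bounded, which is what makes this kind of structure easy to lay out in flat arrays for parallel hardware.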

