
Parallel Subgraph Mining on Hybrid Platforms: HPC Systems, Multi-Cores and GPUs

Nilothpal Talukder
Rensselaer Polytechnic Institute, Troy, New York, 2016
@phdthesis{talukder2016parallel,
  title  = {Parallel Subgraph Mining on Hybrid Platforms: HPC Systems, Multi-Cores and GPUs},
  author = {Talukder, Nilothpal},
  year   = {2016},
  school = {Rensselaer Polytechnic Institute}
}


Frequent subgraph mining (FSM) is an important problem in numerous application areas, such as computational chemistry, bioinformatics, social networks, and programming-language analysis. However, the problem is computationally hard because it requires enumerating a possibly exponential number of candidate subgraph patterns and checking their presence in a single large graph or in a database of graphs. In recent years the problem has become even more challenging due to rapidly growing graph sizes: the number of users on online social networks such as Facebook is now in the billions.

In this thesis, we present novel algorithms to scale frequent subgraph mining on a variety of computing platforms and architectures, including High Performance Computing (HPC) systems such as the IBM Blue Gene/Q, multi-core CPUs, and Graphics Processing Units (GPUs).

First, we present frequent subgraph mining from a single, very large, labeled network. Our approach is the first distributed method to mine a massive input graph that is too large to fit in the memory of any individual compute node. The input graph thus has to be partitioned among the nodes, which can lead to potential false negatives. Furthermore, for scalable performance it is crucial to minimize the communication among the compute nodes. Our algorithm, DistGraph, ensures that there are no false negatives, and uses a set of optimizations and efficient collective communication operations to minimize information exchange. To our knowledge, DistGraph is the first approach demonstrated to scale to graphs with over a billion vertices and edges. Scalability results on up to 2048 IBM Blue Gene/Q compute nodes, with 16 cores each, show very good speedup, as well as orders-of-magnitude speedup over state-of-the-art sequential solutions.

Next, we describe ParGraph, a hybrid parallel approach to mine a single, moderately sized graph that fits inside the memory of a single compute node. ParGraph uses a hybrid load-balancing scheme to distribute work efficiently at both the MPI and thread levels. Moreover, unlike the distributed approach, it does not require synchronization among processes at each expansion level of the growing patterns, which makes it very efficient.

Finally, we describe parallel frequent subgraph mining on a database of multiple labeled graphs using Graphics Processing Units (GPUs). In recent years, GPUs have emerged as a relatively cheap yet powerful architecture for general-purpose computing. However, the GPU thread model differs from that of CPUs, which makes parallelizing graph mining algorithms on GPUs a challenging task. We investigate the major challenges of GPU-based graph mining and perform extensive experiments on several real-world and synthetic datasets, achieving speedups of up to 9x over a sequential algorithm running on a single-core CPU.
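The partition-induced false negatives that DistGraph guards against can be illustrated with a toy example (a hypothetical two-way partition invented for this sketch, not DistGraph's actual partitioning scheme): counting a pattern using only the vertices a partition owns misses every occurrence that crosses a partition boundary.

```python
# Toy illustration: naively partitioning a graph and counting a pattern
# (here, edges joining an 'A'-labeled and a 'B'-labeled vertex) only inside
# each partition misses occurrences that cross partition boundaries --
# the "false negatives" a distributed miner must account for.

# A small labeled graph: vertex -> label, plus an edge list.
labels = {0: 'A', 1: 'B', 2: 'A', 3: 'B'}
edges = [(0, 1), (1, 2), (2, 3)]          # every edge joins an A and a B

# A hypothetical 2-way partition of the vertices (illustrative only).
parts = [{0, 1}, {2, 3}]

def local_count(part):
    """Count A-B edges visible using only vertices owned by one partition."""
    return sum(1 for u, v in edges if u in part and v in part)

naive_total = sum(local_count(p) for p in parts)   # misses edge (1, 2)
true_total = len(edges)

print(naive_total, true_total)   # prints "2 3": the naive count undercounts
```

The cross-partition edge (1, 2) is invisible to both partitions in isolation, which is why boundary information must be exchanged (at some communication cost) to guarantee an exact count.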
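For concreteness, the graph-database FSM setting can be sketched in a few lines of Python (an illustrative toy, not the thesis's GPU algorithm; all names are invented for the example): a pattern's support is the number of database graphs that contain at least one occurrence of it, and a pattern is frequent when its support reaches a user-chosen threshold. The sketch counts only the simplest patterns, single labeled edges.

```python
from collections import Counter

# Each database graph is a list of labeled edges, stored as
# (label_u, label_v) pairs; labels are sorted so A-B and B-A compare equal.
def edge_patterns(graph):
    return {tuple(sorted(e)) for e in graph}

database = [
    [('A', 'B'), ('B', 'C')],
    [('A', 'B'), ('C', 'D')],
    [('B', 'A')],
]

min_support = 2  # a pattern must appear in at least 2 graphs

# Support counting: each graph contributes at most 1 per distinct pattern.
support = Counter()
for g in database:
    for pat in edge_patterns(g):
        support[pat] += 1

frequent = {pat for pat, s in support.items() if s >= min_support}
print(frequent)   # {('A', 'B')} -- the only edge present in >= 2 graphs
```

Real FSM algorithms then grow these frequent edges into larger subgraph patterns, which is where the exponential candidate space and the subgraph-isomorphism checks make the problem hard.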
