
Efficient Implementation of MrBayes on multi-GPU

Jie Bao, Hongju Xia, Jianfu Zhou, Xiaoguang Liu, Gang Wang
College of Information Technical Science, Nankai University, Tianjin, China
@article{xia2013efficient,

   title={Efficient Implementation of MrBayes on Multi-GPU},

   author={Bao, J. and Xia, H. and Zhou, J. and Liu, X. and Wang, G.},

   journal={Molecular Biology and Evolution},

   year={2013}

}


MrBayes, which uses Metropolis-coupled Markov chain Monte Carlo [MCMCMC, or (MC)^3 for short], is a popular program for Bayesian inference. Although (MC)^3 is a leading method for inferring phylogeny from DNA data, neither the original algorithm nor its improved and parallel versions is fast enough for biologists to analyze massive real-world DNA data sets. Recently, the Graphics Processing Unit (GPU) has shown its power as a co-processor (or rather, an accelerator) in many fields. This paper describes a(MC)^3, an efficient implementation of the (MC)^3 algorithm in MrBayes on the Compute Unified Device Architecture (CUDA). By dynamically adjusting task granularity to the input data size and hardware configuration, a(MC)^3 makes full use of GPU cores across different data sets. An adaptive method is also developed to split and combine DNA sequences so as to make full use of a large number of GPU cards. Furthermore, a new "node-by-node" task-scheduling strategy improves concurrency, and several optimizations reduce extra overhead. Experimental results show that a(MC)^3 achieves up to a 55x speedup over serial MrBayes on a single machine with one GPU card, up to a 154x speedup with four GPU cards, and up to a 439x speedup on a 32-node GPU cluster. a(MC)^3 is dramatically faster than all previous (MC)^3 implementations and scales well to large GPU clusters.
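The two adaptive ideas in the abstract, adjusting task granularity to the data size and hardware, and splitting sequences across GPU cards, can be sketched roughly as follows. This is a hypothetical Python illustration of such heuristics, not the paper's CUDA code; the names `choose_granularity` and `split_patterns` are invented for the sketch.

```python
def choose_granularity(num_patterns, num_cores, max_split=8):
    """Pick how many threads cooperate on one site pattern.

    Small data sets leave GPU cores idle if each pattern gets only one
    thread, so we double the per-pattern thread count until the total
    thread count roughly covers the cores (or a cap is reached).
    """
    split = 1
    while split < max_split and num_patterns * split < num_cores:
        split *= 2
    return split


def split_patterns(num_patterns, num_gpus):
    """Partition site patterns as evenly as possible across GPU cards."""
    base, rem = divmod(num_patterns, num_gpus)
    return [base + (1 if g < rem else 0) for g in range(num_gpus)]


# A large alignment already saturates the cores with one thread per
# pattern; a small one gets finer-grained (cooperative) threading.
print(choose_granularity(100000, 1024))  # large data set -> 1
print(choose_granularity(100, 1024))     # small data set -> 8
print(split_patterns(10, 4))             # -> [3, 3, 2, 2]
```

The design point is that a fixed granularity is only optimal for one data-size/hardware combination; recomputing it per run is what lets the same kernel configuration serve both small and massive alignments.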
