Parallel mutual information estimation for inferring gene regulatory networks on GPUs

Haixiang Shi, Bertil Schmidt, Weiguo Liu, Wolfgang Muller-Wittig
School of Computer Engineering, Nanyang Technological University, Singapore
BMC Research Notes, 4:189, 2011


@article{shi2011parallel,
   title={Parallel mutual information estimation for inferring gene regulatory networks on GPUs},
   author={Shi, H. and Schmidt, B. and Liu, W. and M{\"u}ller-Wittig, W.},
   journal={BMC Research Notes},
   volume={4},
   pages={189},
   year={2011},
   publisher={BioMed Central Ltd}
}






BACKGROUND: Mutual information is a measure of the statistical dependence between two variables. It has been widely used in application domains including computational biology, machine learning, statistics, image processing, and financial computing. Simple histogram-based mutual information estimators, used previously, lack the precision of kernel-based methods. The recently introduced B-spline function based mutual information estimation method is competitive with kernel-based methods in quality, but at lower computational complexity.

RESULTS: We present a new approach to accelerating the B-spline function based mutual information estimation algorithm on commodity graphics hardware. To derive an efficient mapping onto this type of architecture, we used the Compute Unified Device Architecture (CUDA) programming model to design and implement a new parallel algorithm. Our implementation, called CUDA-MI, achieves speedups of up to 82 using double precision on a single GPU compared to a multi-threaded implementation on a quad-core CPU for large microarray datasets. We have used the results obtained by CUDA-MI to infer gene regulatory networks (GRNs) from microarray data. Comparisons to existing methods, including ARACNE and TINGe, show that CUDA-MI produces GRNs of higher quality in less time.

CONCLUSIONS: CUDA-MI is publicly available open-source software, written in CUDA and C++. It achieves significant speedups over a multi-threaded CPU implementation by fully exploiting the compute capability of commonly used, low-cost CUDA-enabled GPUs.
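The B-spline estimator described in the abstract generalises histogram binning: instead of each sample incrementing exactly one bin, it is spread across neighbouring bins with fractional weights given by B-spline basis functions, and the smoothed marginal and joint bin probabilities are plugged into the standard mutual information formula. A minimal single-threaded NumPy sketch of this idea follows; the function names, bin count, and spline order are illustrative choices, not taken from CUDA-MI, which implements the method in CUDA/C++:

```python
import numpy as np

def bspline_weights(x, bins=10, order=3):
    """Fractional bin weights from B-spline basis functions (Cox-de Boor).

    Each sample gets up to `order` non-zero weights summing to 1, a smooth
    generalisation of the 0/1 bin membership of an ordinary histogram.
    """
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # rescale to [0, 1]
    x = np.clip(x, 0.0, 1.0 - 1e-9)                  # keep max inside last knot span
    # clamped knot vector yielding `bins` basis functions of the given order
    t = np.concatenate([np.zeros(order - 1),
                        np.linspace(0.0, 1.0, bins - order + 2),
                        np.ones(order - 1)])
    # order-1 basis: indicator of each (non-empty) knot span
    B = np.zeros((len(x), len(t) - 1))
    for i in range(len(t) - 1):
        if t[i] < t[i + 1]:
            B[:, i] = (t[i] <= x) & (x < t[i + 1])
    # Cox-de Boor recursion up to the requested order
    for k in range(2, order + 1):
        Bk = np.zeros((len(x), len(t) - k))
        for i in range(len(t) - k):
            d1, d2 = t[i + k - 1] - t[i], t[i + k] - t[i + 1]
            term = np.zeros(len(x))
            if d1 > 0:
                term += (x - t[i]) / d1 * B[:, i]
            if d2 > 0:
                term += (t[i + k] - x) / d2 * B[:, i + 1]
            Bk[:, i] = term
        B = Bk
    return B  # shape (n_samples, bins); rows sum to 1

def bspline_mi(x, y, bins=10, order=3):
    """Mutual information (bits) from B-spline-smoothed bin probabilities."""
    wx, wy = bspline_weights(x, bins, order), bspline_weights(y, bins, order)
    p_xy = wx.T @ wy / len(x)                        # smoothed joint probabilities
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)    # marginals
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / np.outer(p_x, p_y)[nz])))
```

On perfectly dependent data the estimate is high (a couple of bits for ten bins), while on independent data it stays close to zero; in a GRN pipeline this score would be computed for every gene pair, which is exactly the all-pairs workload the paper parallelises on the GPU.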


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
