Monte Carlo randomization tests for large-scale abundance datasets on the GPU

John L. Van Hemert, Julie A. Dickerson
Bioinformatics and Computational Biology Program, Iowa State University, Ames, IA 50011, United States
Computer Methods and Programs in Biomedicine (10 June 2010)


@article{VanHemert2010,

   title={Monte Carlo randomization tests for large-scale abundance datasets on the GPU},

   author={Van Hemert, J.L. and Dickerson, J.A.},

   journal={Computer Methods and Programs in Biomedicine},

   year={2010}

}




Statistical tests are often performed to discover which experimental variables are reacting to specific treatments. Time-series statistical models usually require the researcher to make assumptions about the distribution of measured responses, and these assumptions may not hold. Randomization tests can be applied to data in order to generate null distributions non-parametrically. However, large numbers of randomizations are required to obtain the precise p-values needed to control false discovery rates. When testing tens of thousands of variables (genes, chemical compounds, or otherwise), significant q-value cutoffs can be extremely small (on the order of 10^(-5) to 10^(-8)). This requires high-precision p-values, which in turn require large numbers of randomizations. The NVIDIA Compute Unified Device Architecture (CUDA) platform for General-Purpose computing on the Graphics Processing Unit (GPGPU) was used to implement an application that performs high-precision randomization tests via Monte Carlo sampling for quickly screening custom test statistics in experiments with large numbers of variables, such as microarrays, Next-Generation sequencing read counts, chromatographic signals, or other abundance measurements. The software has been shown to achieve more than a 12-fold speedup on a Graphics Processing Unit (GPU) compared to a powerful Central Processing Unit (CPU). The main limitation is concurrent random access of shared memory on the GPU. The software is available from the authors.
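The randomization test the paper accelerates can be illustrated on a single variable. Below is a minimal CPU sketch in Python/NumPy, assuming a two-sample difference-of-means statistic; the paper's actual implementation runs many such tests in parallel in CUDA and supports custom test statistics, so the function name and the statistic here are illustrative only.

```python
import numpy as np

def permutation_pvalue(group_a, group_b, n_perm=10000, rng=None):
    """Two-sample Monte Carlo randomization test (illustrative sketch).

    Treatment labels are shuffled n_perm times to build a null
    distribution non-parametrically; the p-value is the fraction of
    permuted statistics at least as extreme as the observed one.
    """
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([group_a, group_b])  # pool all observations
    n_a = len(group_a)
    observed = abs(group_a.mean() - group_b.mean())
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # one random relabeling of the samples
        stat = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
        if stat >= observed:
            exceed += 1
    # Add-one correction keeps the estimate away from an impossible p = 0.
    return (exceed + 1) / (n_perm + 1)
```

With n randomizations the smallest attainable p-value is 1/(n + 1), so resolving q-value cutoffs on the order of 10^(-5) to 10^(-8) requires roughly 10^5 to 10^8 permutations per variable. Repeating that over tens of thousands of variables is the embarrassingly parallel workload that motivates the GPU implementation.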
