
Learning Random Forests on the GPU

Yisheng Liao, Alex Rubinsteyn, Russell Power, Jinyang Li
Department of Computer Science, New York University
Big Learning: Advances in Algorithms and Data Management, 2013
@article{liao2013learning,
   title={Learning Random Forests on the GPU},
   author={Liao, Yisheng and Rubinsteyn, Alex and Power, Russell and Li, Jinyang},
   journal={Big Learning: Advances in Algorithms and Data Management},
   year={2013}
}



Random Forests are a popular and powerful machine learning technique, with several fast multi-core CPU implementations. Since many other machine learning methods have seen impressive speedups from GPU implementations, applying GPU acceleration to random forests seems like a natural fit. Previous attempts to use GPUs have relied on coarse-grained task parallelism and have yielded inconclusive or unsatisfying results. We introduce CudaTree, a GPU Random Forest implementation which adaptively switches between data and task parallelism. We show that, for larger datasets, this algorithm is faster than highly tuned multi-core CPU implementations.
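
The adaptive scheme can be pictured as two phases: while the tree frontier still holds large nodes, each node's split search runs data-parallel (many GPU threads cooperating on a single node); once nodes shrink below a cutoff, the remaining small subtrees become independent tasks, each assignable to its own thread block or CPU worker. The Python sketch below illustrates only that control flow; the names best_split, build_tree, and SWITCH_THRESHOLD, the toy split criterion, and the fixed cutoff are assumptions made for illustration, not CudaTree's actual API.

# Illustrative sketch of the adaptive data-/task-parallel strategy described
# above. All names here (best_split, SWITCH_THRESHOLD, build_tree) are
# hypothetical; this is not CudaTree's API.
import numpy as np

SWITCH_THRESHOLD = 2048  # assumed cutoff between the two parallelism regimes

def best_split(X, y):
    # Placeholder for the split search; on the GPU this is where many threads
    # would cooperate on the histograms/impurity of one large node.
    feature = int(np.argmax(X.var(axis=0)))    # toy criterion: highest-variance feature
    threshold = float(np.median(X[:, feature]))
    return feature, threshold

def build_tree(X, y):
    frontier = [np.arange(len(y))]             # each node is a set of sample indices
    small_nodes = []

    # Phase 1: data parallelism -- one "wide" split computation per large node.
    while frontier:
        idx = frontier.pop()
        if len(idx) <= SWITCH_THRESHOLD or len(np.unique(y[idx])) == 1:
            small_nodes.append(idx)            # defer to the task-parallel phase
            continue
        feature, threshold = best_split(X[idx], y[idx])
        left = idx[X[idx, feature] <= threshold]
        right = idx[X[idx, feature] > threshold]
        if len(left) == 0 or len(right) == 0:
            small_nodes.append(idx)            # degenerate split: stop recursing here
            continue
        frontier.extend([left, right])

    # Phase 2: task parallelism -- each small subtree is an independent task;
    # on the GPU these would map to one thread block each (or fall back to the CPU).
    for idx in small_nodes:
        pass  # grow the subtree for idx with a cheap sequential routine

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 20))
    y = rng.integers(0, 2, size=10_000)
    build_tree(X, y)

A fixed constant is used here only to keep the sketch short; the point of the paper's approach is that the switch between the two regimes is chosen adaptively rather than hard-coded.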


