Learning Random Forests on the GPU
Department of Computer Science, New York University
Big learning: Advances in Algorithms and Data Management, 2013
@inproceedings{liao2013learning,
  title={Learning Random Forests on the GPU},
  author={Liao, Yisheng and Rubinsteyn, Alex and Power, Russell and Li, Jinyang},
  booktitle={Big Learning: Advances in Algorithms and Data Management},
  year={2013}
}
Random forests are a popular and powerful machine learning technique, with several fast multi-core CPU implementations. Since many other machine learning methods have seen impressive speedups from GPU implementations, applying GPU acceleration to random forests seems like a natural fit. Previous attempts to use GPUs have relied on coarse-grained task parallelism and have yielded inconclusive or unsatisfying results. We introduce CudaTree, a GPU random forest implementation that adaptively switches between data and task parallelism. We show that, for larger datasets, this algorithm is faster than highly tuned multi-core CPU implementations.
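The core idea of adaptive switching can be illustrated schematically. Near the root of a tree, there are few nodes but each holds many samples, so parallelizing over samples (data parallelism) keeps the GPU busy; deeper in the tree, there are many small nodes, so assigning whole nodes or subtrees to independent workers (task parallelism) is more efficient. The sketch below is a hypothetical illustration of this scheduling decision, not the authors' implementation; the function names and the sample-count threshold are assumptions chosen for clarity.

```python
# Hypothetical sketch of depth-adaptive parallelism selection for
# GPU random forest training. Not CudaTree's actual API or code.

# Assumed threshold: nodes with at least this many samples are worth
# parallelizing over samples; smaller nodes become independent tasks.
SWITCH_THRESHOLD = 2000

def choose_strategy(n_samples_in_node, threshold=SWITCH_THRESHOLD):
    """Pick a parallelization strategy for a single tree node."""
    # Large nodes: data parallelism (many GPU threads cooperate on one node).
    # Small nodes: task parallelism (one worker handles the whole subtree).
    if n_samples_in_node >= threshold:
        return "data-parallel"
    return "task-parallel"

def partition_frontier(frontier_sizes, threshold=SWITCH_THRESHOLD):
    """Split a breadth-first frontier of nodes (given by sample counts)
    into a data-parallel batch and a task-parallel batch."""
    data_par = [n for n in frontier_sizes if n >= threshold]
    task_par = [n for n in frontier_sizes if n < threshold]
    return data_par, task_par
```

For example, a frontier of node sizes `[50000, 3000, 800, 40]` would send the first two nodes to the data-parallel kernel and the last two to task-parallel workers.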
December 11, 2013 by hgpu