
XGBoost: Scalable GPU Accelerated Learning

Rory Mitchell, Andrey Adinets, Thejaswi Rao, Eibe Frank
University of Waikato
arXiv:1806.11248 [cs.LG], 29 Jun 2018

@article{mitchell2018xgboost,
   title={XGBoost: Scalable GPU Accelerated Learning},
   author={Mitchell, Rory and Adinets, Andrey and Rao, Thejaswi and Frank, Eibe},
   year={2018},
   month={jun},
   eprint={1806.11248},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

We describe the multi-GPU gradient boosting algorithm implemented in the XGBoost library. Our algorithm allows fast, scalable training on multi-GPU systems with all of the features of the XGBoost library. We employ data compression techniques to minimise the usage of scarce GPU memory while still allowing highly efficient implementation. Using our algorithm we show that it is possible to process 115 million training instances in under three minutes on a publicly available cloud computing instance. The algorithm is implemented using end-to-end GPU parallelism, with prediction, gradient calculation, feature quantisation, decision tree construction and evaluation phases all computed on device.
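To illustrate the feature-quantisation step mentioned above, here is a minimal pure-Python sketch (not XGBoost's actual implementation, which runs on device): continuous feature values are replaced by small integer bin indices derived from quantile cut points, so that with at most 256 bins each value compresses to a single byte. The function names and the 4-bin example are illustrative assumptions.

```python
from bisect import bisect_right

def quantile_cuts(values, n_bins):
    """Return n_bins - 1 quantile cut points splitting `values` into bins."""
    s = sorted(values)
    return [s[len(s) * i // n_bins] for i in range(1, n_bins)]

def quantise(values, cuts):
    """Map each value to the index of its quantile bin (0 .. len(cuts))."""
    return [bisect_right(cuts, v) for v in values]

values = [0.1, 3.2, 1.5, 2.8, 0.7, 4.9, 2.2, 3.9]
cuts = quantile_cuts(values, n_bins=4)
bins = quantise(values, cuts)        # e.g. [0, 2, 1, 2, 0, 3, 1, 3]

# With <= 256 bins, the quantised matrix packs into one byte per entry,
# which is the kind of memory saving the compressed representation targets.
packed = bytes(bins)
```

Histogram-based tree construction then aggregates gradients per bin index rather than per raw value, which is what makes the compact representation usable without decompression.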

HGPU group © 2010-2018 hgpu.org

All rights belong to the respective authors
