
Stochastic Gradient Descent on GPUs

Rashid Kaleem, Sreepathi Pai, Keshav Pingali
The University of Texas at Austin, Austin, Texas, USA
Proceedings of the 8th Workshop on General Purpose Processing using GPUs (GPGPU-8), 2015

@inproceedings{kaleem2015stochastic,
  title        = {Stochastic gradient descent on GPUs},
  author       = {Kaleem, Rashid and Pai, Sreepathi and Pingali, Keshav},
  booktitle    = {Proceedings of the 8th Workshop on General Purpose Processing using GPUs},
  pages        = {81--89},
  year         = {2015},
  organization = {ACM}
}



Irregular algorithms such as Stochastic Gradient Descent (SGD) can benefit from the massive parallelism available on GPUs. However, unlike in data-parallel algorithms, synchronization patterns in SGD are quite complex. Furthermore, scheduling for scale-free graphs is challenging. This work examines several synchronization strategies for SGD, ranging from simple locking to conflict-free scheduling. We observe that static schedules do not yield better performance despite eliminating the need to perform conflict detection and resolution at runtime. We identify the source of the performance degradation to be the structure of certain parts of the graph (dense vs. sparse). This classification can be used to devise hybrid scheduling strategies that exploit different schedules for different regions of the graph to obtain better performance. We found that the best schedule for some problems can be up to two orders of magnitude faster than the worst one. To evaluate the performance of our GPU implementation, we also compare against a CPU implementation of SGD. Dynamic schedules perform comparably to a 14-thread CPU implementation, while a static schedule performs comparably to a 6-thread CPU implementation.
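
The "simple locking" end of that spectrum can be made concrete with a short sketch. The CUDA kernel below is not the paper's code; the kernel name, the constants K/LR/REG, and the retry-list scheme are all illustrative assumptions. It processes one rating edge per thread for SGD-based matrix factorization: a thread claims per-vertex flags with atomicCAS, applies the gradient update to both endpoint factor rows, and defers the edge to a retry list when a lock is contended, since spinning inside a divergent warp risks deadlock on SIMT hardware.

#include <cuda_runtime.h>

// Illustrative constants; the paper does not fix these values.
#define K   16      // latent-factor dimension (assumed)
#define LR  0.05f   // learning rate (assumed)
#define REG 0.01f   // regularization weight (assumed)

// One thread per rating edge (user[e], item[e], rating[e]).
// Runtime ("dynamic") conflict detection: take both endpoint locks
// with atomicCAS, or defer the edge to a retry list instead of
// spinning, which avoids intra-warp deadlock on SIMT hardware.
__global__ void sgd_edges(const int *user, const int *item,
                          const float *rating, int nedges,
                          float *P, float *Q,          // nusers*K, nitems*K
                          int *ulock, int *ilock,      // 0 = free, 1 = held
                          int *retry, int *nretry)
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= nedges) return;
    int u = user[e], i = item[e];

    if (atomicCAS(&ulock[u], 0, 1) != 0) {             // user row busy
        retry[atomicAdd(nretry, 1)] = e;
        return;
    }
    if (atomicCAS(&ilock[i], 0, 1) != 0) {             // item row busy
        atomicExch(&ulock[u], 0);                      // release and defer
        retry[atomicAdd(nretry, 1)] = e;
        return;
    }

    float *p = &P[u * K], *q = &Q[i * K];
    float err = rating[e];
    for (int k = 0; k < K; ++k) err -= p[k] * q[k];    // err = r - p.q

    for (int k = 0; k < K; ++k) {                      // step on both rows
        float pk = p[k];
        p[k] += LR * (err * q[k] - REG * pk);
        q[k] += LR * (err * pk   - REG * q[k]);
    }

    __threadfence();                                   // publish updates
    atomicExch(&ilock[i], 0);
    atomicExch(&ulock[u], 0);
}

A host loop would relaunch the kernel over the retry list until it drains. A static, conflict-free schedule instead partitions the edges offline so that no two edges in a launch share an endpoint, removing the locks entirely; as the abstract notes, that alone does not win, with the degradation attributed to the dense regions of scale-free graphs.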