Synkhronos: a Multi-GPU Theano Extension for Data Parallelism

Adam Stooke, Pieter Abbeel
University of California, Berkeley
arXiv:1710.04162 [cs.DC] (11 Oct 2017)

@article{stooke2017synkhronos,
   title={Synkhronos: a Multi-GPU Theano Extension for Data Parallelism},
   author={Stooke, Adam and Abbeel, Pieter},
   year={2017},
   month={oct},
   eprint={1710.04162},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

We present Synkhronos, an extension to Theano for multi-GPU computations leveraging data parallelism. Our framework provides automated execution and synchronization across devices, allowing users to continue to write serial programs without risk of race conditions. The NVIDIA Collective Communications Library (NCCL) is used for high-bandwidth inter-GPU communication. Further enhancements to the Theano function interface include input slicing (with aggregation) and input indexing, which perform common data-parallel computation patterns efficiently. One example use case is synchronous SGD, which has recently been shown to scale well for a growing set of deep learning problems. When training ResNet-50, we achieve a near-linear speedup of 7.5x on an NVIDIA DGX-1 using 8 GPUs, relative to Theano-only code running on a single GPU in isolation. Yet Synkhronos remains general to any data-parallel computation programmable in Theano. By implementing parallelism at the level of individual Theano functions, our framework uniquely addresses a niche between manual multi-device programming and prescribed multi-GPU training routines.
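
To make the data-parallel pattern concrete, the sketch below builds an ordinary serial Theano training function for logistic regression, the kind of single-GPU program the abstract says users can keep writing. The commented Synkhronos lines are an assumption based on the description above (function-level parallelism as a drop-in replacement for theano.function); they are illustrative, not confirmed against the released package. Only the plain Theano portion is standard API.

    import numpy as np
    import theano
    import theano.tensor as T

    # Ordinary serial Theano code: one SGD step of logistic regression.
    x = T.matrix('x')    # minibatch of flattened inputs
    y = T.ivector('y')   # integer class labels
    W = theano.shared(np.zeros((784, 10), dtype=theano.config.floatX), name='W')
    b = theano.shared(np.zeros(10, dtype=theano.config.floatX), name='b')

    p_y = T.nnet.softmax(T.dot(x, W) + b)
    loss = T.nnet.categorical_crossentropy(p_y, y).mean()
    grads = T.grad(loss, [W, b])
    updates = [(v, v - 0.01 * g) for v, g in zip([W, b], grads)]

    # Single-GPU baseline: one compiled function on one device.
    train = theano.function([x, y], loss, updates=updates)

    # Hypothetical Synkhronos equivalent (names assumed from the paper's
    # description, not verified API):
    #
    #   import synkhronos as synk
    #   synk.fork()                                  # one worker per GPU
    #   train = synk.function([x, y], loss, updates=updates)
    #
    # Each call would then slice the input batch across devices, run the
    # function on every GPU, aggregate the mean loss and gradients over
    # NCCL, and apply the update identically on all parameter replicas,
    # so the call site stays serial and free of race conditions.

    # The call site is unchanged in either case:
    x_np = np.random.randn(128, 784).astype(theano.config.floatX)
    y_np = np.random.randint(0, 10, size=128).astype('int32')
    step_loss = train(x_np, y_np)

For scale, the 7.5x speedup quoted above on 8 GPUs works out to roughly 94% parallel efficiency (7.5 / 8 ≈ 0.94).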