
Memory-Efficient Implementation of DenseNets

Geoff Pleiss, Danlu Chen, Gao Huang, Tongcheng Li, Laurens van der Maaten, Kilian Q. Weinberger
Cornell University
arXiv:1707.06990 [cs.CV], 21 Jul 2017

@article{pleiss2017memoryefficient,
   title={Memory-Efficient Implementation of DenseNets},
   author={Pleiss, Geoff and Chen, Danlu and Huang, Gao and Li, Tongcheng and van der Maaten, Laurens and Weinberger, Kilian Q.},
   journal={arXiv preprint arXiv:1707.06990},
   year={2017},
   month={jul},
   eprint={1707.06990},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}

The DenseNet architecture is highly computationally efficient as a result of feature reuse. However, a naive DenseNet implementation can require a significant amount of GPU memory: If not properly managed, pre-activation batch normalization and contiguous convolution operations can produce feature maps that grow quadratically with network depth. In this technical report, we introduce strategies to reduce the memory consumption of DenseNets during training. By strategically using shared memory allocations, we reduce the memory cost for storing feature maps from quadratic to linear. Without the GPU memory bottleneck, it is now possible to train extremely deep DenseNets. Networks with 14M parameters can be trained on a single GPU, up from 4M. A 264-layer DenseNet (73M parameters), which previously would have been infeasible to train, can now be trained on a single workstation with 8 NVIDIA Tesla M40 GPUs. On the ImageNet ILSVRC classification dataset, this large DenseNet obtains a state-of-the-art single-crop top-1 error of 20.26%.
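The core observation is that the tensors responsible for the quadratic growth, the concatenated inputs and the pre-activation batch normalization outputs of each dense layer, are cheap to recompute. They can therefore be written into shared, reusable buffers and regenerated during the backward pass instead of being cached once per layer. The snippet below is a minimal sketch of this recompute-instead-of-store strategy using PyTorch's `torch.utils.checkpoint`; it is not the authors' reference implementation, and the names `MemoryEfficientDenseLayer`, `bottleneck`, and `bn_size` are illustrative.

```python
# Hypothetical sketch (not the paper's code): avoid caching the
# concatenation + BN + ReLU + 1x1 conv outputs of a dense layer by
# recomputing them in the backward pass via torch.utils.checkpoint.
# This trades a little extra computation for feature-map storage that
# grows linearly (rather than quadratically) with network depth.
import torch
import torch.nn as nn
import torch.utils.checkpoint as cp


class MemoryEfficientDenseLayer(nn.Module):
    def __init__(self, num_input_features, growth_rate, bn_size=4):
        super().__init__()
        # Bottleneck part whose intermediate tensors dominate memory use.
        self.norm1 = nn.BatchNorm2d(num_input_features)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv1 = nn.Conv2d(num_input_features, bn_size * growth_rate,
                               kernel_size=1, bias=False)
        # 3x3 convolution that produces the layer's new feature maps.
        self.norm2 = nn.BatchNorm2d(bn_size * growth_rate)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(bn_size * growth_rate, growth_rate,
                               kernel_size=3, padding=1, bias=False)

    def bottleneck(self, *prev_features):
        # Concatenation and pre-activation BN create the large temporaries.
        concatenated = torch.cat(prev_features, dim=1)
        return self.conv1(self.relu1(self.norm1(concatenated)))

    def forward(self, prev_features):
        if self.training and any(f.requires_grad for f in prev_features):
            # Do not keep the concatenated/normalized tensors alive;
            # recompute them from the (shared) inputs during backward.
            bottleneck_out = cp.checkpoint(self.bottleneck, *prev_features,
                                           use_reentrant=False)
        else:
            bottleneck_out = self.bottleneck(*prev_features)
        return self.conv2(self.relu2(self.norm2(bottleneck_out)))


if __name__ == "__main__":
    layer = MemoryEfficientDenseLayer(num_input_features=64, growth_rate=32)
    features = [torch.randn(2, 64, 32, 32, requires_grad=True)]
    new_features = layer(features)
    print(new_features.shape)  # torch.Size([2, 32, 32, 32])
```

In a full DenseNet, each block would pass the growing list of previous feature maps into layers like this one, so only the layers' outputs (which grow linearly with depth) need to be stored during training.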