Runtime Configurable Deep Neural Networks for Energy-Accuracy Trade-off

Hokchhay Tann, Soheil Hashemi, R. Iris Bahar, Sherief Reda
School of Engineering, Brown University, Providence, RI 02912
arXiv:1607.05418 [cs.NE] (19 Jul 2016)

@article{tann2016runtime,
   title={Runtime Configurable Deep Neural Networks for Energy-Accuracy Trade-off},
   author={Tann, Hokchhay and Hashemi, Soheil and Bahar, R. Iris and Reda, Sherief},
   year={2016},
   month={jul},
   eprint={1607.05418},
   archivePrefix={arXiv},
   primaryClass={cs.NE},
   doi={10.1145/2968456.2968458}
}

We present a novel dynamic configuration technique for deep neural networks that permits step-wise energy-accuracy trade-offs at runtime. Our configuration technique adjusts the number of channels in the network dynamically depending on response time, power, and accuracy targets. To enable this dynamic configuration technique, we co-design a new training algorithm in which the network is trained incrementally, such that the weights of channels trained in earlier steps are fixed. Our technique provides the flexibility of multiple networks while storing and utilizing one set of weights. We evaluate our techniques using both an ASIC-based hardware accelerator and a low-power embedded GPGPU, and show that our approach leads to only a small or negligible loss in final network accuracy. We analyze the performance of our proposed methodology using three well-known networks for the MNIST, CIFAR-10, and SVHN datasets, and we show that we are able to achieve up to 95% energy reduction with less than 1% accuracy loss across the three benchmarks. In addition, compared to prior work on dynamic network reconfiguration, we show that our approach leads to approximately 50% savings in storage requirements while achieving similar accuracy.
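To illustrate the incremental training idea described in the abstract, below is a minimal sketch in PyTorch (an assumption for illustration only; the paper evaluates an ASIC accelerator and an embedded GPGPU, not this framework). The hypothetical `IncrementalConv` layer freezes channels trained in earlier steps by zeroing their gradients, and can deactivate later channels at inference so that one stored weight set serves multiple energy-accuracy configurations.

```python
import torch
import torch.nn as nn

class IncrementalConv(nn.Conv2d):
    """Sketch of a conv layer that (1) freezes output channels trained in
    earlier incremental steps and (2) can disable later channels at runtime.
    Names and interface are illustrative, not from the paper."""

    def __init__(self, in_ch, out_ch, k, frozen_out=0, **kw):
        super().__init__(in_ch, out_ch, k, **kw)
        self.frozen_out = frozen_out  # channels fixed from earlier steps
        # Zero the gradient of already-trained channels during backward.
        self.weight.register_hook(self._freeze_grad)
        if self.bias is not None:
            self.bias.register_hook(self._freeze_grad)

    def _freeze_grad(self, grad):
        g = grad.clone()
        g[: self.frozen_out] = 0  # no updates to already-trained channels
        return g

    def forward(self, x, active_out=None):
        y = super().forward(x)
        if active_out is not None and active_out < self.out_channels:
            # Runtime configuration: keep only the first `active_out`
            # channels; the rest contribute nothing downstream.
            mask = torch.zeros(self.out_channels, device=y.device)
            mask[:active_out] = 1.0
            y = y * mask.view(1, -1, 1, 1)
        return y

# Step 2 of a hypothetical incremental schedule: the first 16 of 32
# channels were trained in step 1 and stay fixed; only channels 16..31
# receive gradient updates in this step.
layer = IncrementalConv(3, 32, 3, frozen_out=16, padding=1)
x = torch.randn(1, 3, 8, 8)
layer(x).sum().backward()
assert layer.weight.grad[:16].abs().sum() == 0  # frozen channels untouched

# At runtime, a low-energy configuration uses only the first 16 channels:
with torch.no_grad():
    y_small = layer(x, active_out=16)
```

Because every configuration is a prefix of the same channel set, the trade-off points share one set of weights, which is the source of the storage savings the abstract reports over prior reconfiguration approaches.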