Performance-Oriented Neural Architecture Search

Andrew Anderson, Jing Su, Rozenn Dahyot, David Gregg
School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland
arXiv:2001.02976 [cs.LG], (9 Jan 2020)

@misc{anderson2020performanceoriented,
   title={Performance-Oriented Neural Architecture Search},
   author={Andrew Anderson and Jing Su and Rozenn Dahyot and David Gregg},
   year={2020},
   eprint={2001.02976},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

Hardware-Software Co-Design is a highly successful strategy for improving the performance of domain-specific computing systems. We argue for the application of the same methodology to deep learning; specifically, we propose to extend neural architecture search with information about the hardware to ensure that the model designs produced are highly efficient in addition to meeting the typical criteria around accuracy. Using the task of keyword spotting in audio on edge computing devices, we demonstrate that our approach results in neural architectures that are not only highly accurate, but also efficiently mapped to the computing platform that will perform the inference. Using our modified neural architecture search, we demonstrate a 0.88% increase in TOP-1 accuracy with a 1.85x reduction in latency for keyword spotting in audio on an embedded SoC, and a 1.59x reduction on a high-end GPU.
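The paper's exact search objective is not reproduced here, but the idea of folding hardware feedback into the search can be sketched as a scoring function that rewards accuracy while penalizing measured latency on the target platform. All names, the penalty form, and the parameter values below are illustrative assumptions, not the authors' formulation:

```python
def hardware_aware_score(accuracy, latency_ms, target_ms=10.0, alpha=0.5):
    """Score a candidate architecture for NAS (hypothetical sketch).

    Rewards accuracy and penalizes latency measured on the target
    device once it exceeds a latency budget `target_ms`. The penalty
    exponent `alpha` trades off accuracy against speed.
    """
    # No penalty while the model fits within the latency budget;
    # penalty grows sublinearly with alpha < 1 beyond it.
    penalty = max(latency_ms / target_ms, 1.0) ** alpha
    return accuracy / penalty

# Of two equally accurate candidates, the one that maps more
# efficiently to the hardware receives the higher search score.
fast = hardware_aware_score(accuracy=0.95, latency_ms=8.0)
slow = hardware_aware_score(accuracy=0.95, latency_ms=20.0)
assert fast > slow
```

Under a scheme like this, the search ranks candidates by deployed behavior rather than accuracy alone, which is what lets it trade a small accuracy change for a large latency reduction on a specific device.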


HGPU group © 2010-2020 hgpu.org
