GPUNet: Searching the Deployable Convolution Neural Networks for GPUs
NVIDIA
arXiv:2205.00841 [cs.CV], 26 Apr 2022
@misc{wang2022gpunet,
  doi       = {10.48550/ARXIV.2205.00841},
  url       = {https://arxiv.org/abs/2205.00841},
  author    = {Wang, Linnan and Yu, Chenhan and Salian, Satish and Kierat, Slawomir and Migacz, Szymon and Florea, Alex Fit},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {GPUNet: Searching the Deployable Convolution Neural Networks for GPUs},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
Customizing Convolutional Neural Networks (CNNs) for production use has been a challenging task for DL practitioners. This paper aims to expedite model customization with a model hub of models optimized by Neural Architecture Search (NAS) and tiered by their inference latency. To achieve this goal, we build a distributed NAS system that searches a novel search space composed of the factors that most prominently affect latency and accuracy. Since we target GPUs, we name the NAS-optimized models GPUNet; they establish a new SOTA Pareto frontier in inference latency and accuracy. Within 1 ms, GPUNet is 2x faster than EfficientNet-X and FBNetV3 with even better accuracy. We also validate GPUNet on COCO detection tasks, where it consistently outperforms EfficientNet-X and FBNetV3 in both latency and accuracy. These results show that our NAS system is effective and general enough to handle different design tasks. With this NAS system, we expand GPUNet to cover a wide range of latency targets so that DL practitioners can deploy our models directly in different scenarios.
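Since the model hub tiers networks by measured GPU inference latency, a natural companion is a latency benchmark. Below is a minimal sketch (not from the paper) of measuring median GPU inference latency with CUDA events in PyTorch; the ResNet-50 placeholder, batch size, and iteration counts are assumptions, and the paper's reported numbers use TensorRT, which this plain-PyTorch timing does not reproduce.

```python
import torch
import torchvision.models as models

def measure_latency_ms(model, input_shape=(1, 3, 224, 224), warmup=50, iters=200):
    """Return the median GPU inference latency in milliseconds using CUDA events."""
    device = torch.device("cuda")
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)

    # Warm up so cuDNN autotuning and memory allocation do not skew the timings.
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
    torch.cuda.synchronize()

    times = []
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(iters):
            start.record()
            model(x)
            end.record()
            torch.cuda.synchronize()
            times.append(start.elapsed_time(end))  # elapsed time in milliseconds
    times.sort()
    return times[len(times) // 2]

if __name__ == "__main__":
    # Placeholder architecture; substitute any candidate network to compare tiers.
    net = models.resnet50(weights=None)
    print(f"median latency: {measure_latency_ms(net):.2f} ms")
```

Timing with CUDA events rather than wall-clock calls avoids counting host-side overhead, which matters when comparing models whose latency targets are on the order of 1 ms.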