Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks
School of Informatics, University of Edinburgh, UK
2018 IEEE International Symposium on Workload Characterization (IISWC), 2018
@inproceedings{turner2018characterising,
  title={Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks},
  author={Turner, Jack and Cano, Jos{\'e} and Radu, Valentin and Crowley, Elliot J and O'Boyle, Michael and Storkey, Amos},
  booktitle={2018 IEEE International Symposium on Workload Characterization (IISWC)},
  year={2018}
}
Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices. Since such systems are where some of their most useful applications lie (e.g. obstacle detection for mobile robots, vision-based medical assistive technology), significant bodies of work from both machine learning and systems communities have attempted to provide optimisations that will make CNNs available to edge devices. In this paper we unify the two viewpoints in a Deep Learning Inference Stack and take an across-stack approach by implementing and evaluating the most common neural network compression techniques (weight pruning, channel pruning, and quantisation) and optimising their parallel execution with a range of programming approaches (OpenMP, OpenCL) and hardware architectures (CPU, GPU). We provide comprehensive Pareto curves to instruct trade-offs under constraints of accuracy, execution time, and memory space.
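To give a concrete flavour of two of the compression techniques the abstract names, here is a minimal NumPy sketch of unstructured magnitude-based weight pruning and uniform linear quantisation applied to a weight tensor. This is an illustrative sketch only, not the paper's implementation: the function names, the magnitude-threshold criterion, and the symmetric quantisation scheme are assumptions chosen for simplicity.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    This is the common unstructured (weight-level) pruning criterion;
    the paper also evaluates channel pruning, which removes whole
    filters instead of individual weights.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantise(weights, bits=8):
    """Symmetric uniform quantisation to a signed integer grid.

    Returns the dequantised values so accuracy can be evaluated
    directly against the full-precision network.
    """
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return weights.copy()
    q = np.round(weights / scale)      # integer grid in [-(2^(b-1)-1), 2^(b-1)-1]
    return q * scale                   # map back to float for comparison

# Toy example on a random 4x4 weight tensor
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
wp = magnitude_prune(w, sparsity=0.5)  # at least half the weights zeroed
wq = uniform_quantise(w, bits=4)       # at most 15 distinct levels
```

Pruning trades accuracy for memory and (with sparse kernels) execution time, while quantisation trades accuracy for memory and arithmetic cost; sweeping `sparsity` and `bits` is what produces the kind of accuracy/time/memory Pareto curves the paper reports.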
September 16, 2018 by hgpu