Winograd Algorithm for AdderNet
Noah’s Ark Lab, Huawei Technologies
arXiv:2105.05530 [cs.LG] (12 May 2021)
@misc{li2021winograd,
  title={Winograd Algorithm for AdderNet},
  author={Wenshuo Li and Hanting Chen and Mingqiang Huang and Xinghao Chen and Chunjing Xu and Yunhe Wang},
  year={2021},
  eprint={2105.05530},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Adder neural network (AdderNet) is a new kind of deep model that replaces the massive multiplications in convolutions with additions while preserving high performance. Since the hardware complexity of an addition is much lower than that of a multiplication, the overall energy consumption is reduced significantly. To further reduce the hardware overhead of AdderNet, this paper studies the Winograd algorithm, a widely used fast algorithm for accelerating convolution and saving computational cost. Unfortunately, the conventional Winograd algorithm cannot be applied directly to AdderNets, since the distributive law used in the multiplication-based formulation does not hold for the l1-norm. Therefore, we replace the element-wise multiplication in the Winograd equation with additions and then develop a new set of transform matrices that enhance the representation ability of the output features to maintain performance. Moreover, we propose an l2-to-l1 training strategy to mitigate the negative impact of the inconsistency between the two formulations. Experimental results on both FPGA and standard benchmarks show that the new method further reduces energy consumption without affecting the accuracy of the original AdderNet.
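To illustrate the incompatibility the abstract describes, the following is a minimal numerical sketch (not from the paper) of 1-D Winograd F(2,3): the standard transform matrices reproduce the multiplicative convolution exactly, but naively swapping the element-wise product for AdderNet's negative-l1 kernel inside the same transforms does not reproduce the direct adder convolution, because the absolute value does not distribute over the linear transforms. The function names and the naive substitution are illustrative assumptions, not the paper's proposed transforms.

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices (Lavin & Gray).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def conv_direct(d, g):
    """Direct 1-D correlation: two outputs from a 4-tap input tile d and 3-tap filter g."""
    return np.array([np.dot(d[i:i + 3], g) for i in range(2)])

def conv_winograd(d, g):
    """Winograd F(2,3): element-wise product in the transform domain."""
    return A_T @ ((G @ g) * (B_T @ d))

def adder_direct(d, g):
    """AdderNet-style 'convolution': negative l1 distance between tile and filter."""
    return np.array([-np.abs(d[i:i + 3] - g).sum() for i in range(2)])

def adder_winograd_naive(d, g):
    """Hypothetical naive substitution: replace the product by the l1 kernel
    inside the unchanged Winograd transforms. This does NOT equal adder_direct,
    since |.| is not distributive over the linear maps B, G, A."""
    return A_T @ (-np.abs((G @ g) - (B_T @ d)))

rng = np.random.default_rng(0)
d, g = rng.standard_normal(4), rng.standard_normal(3)
print(np.allclose(conv_direct(d, g), conv_winograd(d, g)))          # True
print(np.allclose(adder_direct(d, g), adder_winograd_naive(d, g)))  # False
```

This is why the paper derives a new set of transform matrices and an l2-to-l1 training strategy rather than reusing the conventional Winograd transforms unchanged.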
May 16, 2021 by hgpu