Accelerating Winograd Convolutions using Symbolic Computation and Meta-programming
Technical University of Darmstadt, Department of Computer Science, Germany
Proceedings of the Fifteenth European Conference on Computer Systems (EuroSys ’20), 2020
@inproceedings{mazaheri2020accelerating,
title={Accelerating {Winograd} convolutions using symbolic computation and meta-programming},
author={Mazaheri, Arya and Beringer, Tim and Moskewicz, Matthew and Wolf, Felix and Jannesari, Ali},
booktitle={Proceedings of the Fifteenth European Conference on Computer Systems},
pages={1--14},
year={2020}
}
Convolution operations are essential constituents of convolutional neural networks. Their efficient and performance-portable implementation demands tremendous programming effort and fine-tuning. Winograd’s minimal filtering algorithm is a well-known method to reduce the computational complexity of convolution operations. Unfortunately, existing implementations of this algorithm are either vendor-specific or hard-coded to support a small subset of convolutions, thus limiting their versatility and performance portability. In this paper, we propose a novel method to optimize Winograd convolutions based on symbolic computation. Taking advantage of meta-programming and auto-tuning, we further introduce a system to automate the generation of efficient and portable Winograd convolution code for various GPUs. We show that our optimization technique can effectively exploit repetitive patterns, enabling us to reduce the number of arithmetic operations by up to 62% without compromising numerical stability. Moreover, we demonstrate in experiments that we can generate efficient kernels with runtimes close to those of deep-learning libraries, requiring only a minimum of programming effort, which confirms the performance portability of our approach.
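To illustrate the kind of arithmetic reduction the abstract refers to, the following is a minimal sketch of Winograd's 1-D minimal filtering algorithm F(2,3), which computes two outputs of a 3-tap filter with 4 multiplications instead of the 6 a direct convolution needs. The function names (`direct_conv`, `winograd_f23`) are illustrative only and are not taken from the paper's generated code.

```python
def direct_conv(d, g):
    """Direct 1-D convolution: 2 outputs of a 3-tap filter g over
    a 4-element input tile d, using 6 multiplications."""
    return [d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
            d[1]*g[0] + d[2]*g[1] + d[3]*g[2]]

def winograd_f23(d, g):
    """Winograd F(2,3): the same 2 outputs using only 4 multiplications,
    at the cost of extra additions in the input/filter transforms."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]
```

Since filter weights are fixed at inference time, the filter-side transform terms (the sums of `g` values above) can be precomputed once, which is presumably one source of the repetitive patterns a symbolic approach can exploit.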
April 26, 2020 by hgpu