BrainSlug: Transparent Acceleration of Deep Learning Through Depth-First Parallelism
@misc{weber2018brainslug,
      title={BrainSlug: Transparent Acceleration of Deep Learning Through Depth-First Parallelism},
      author={Weber, Nicolas and Schmidt, Florian and Niepert, Mathias and Huici, Felipe},
      year={2018},
      month={apr},
      archivePrefix={arXiv},
      primaryClass={cs.DC}
}
Neural network frameworks such as PyTorch and TensorFlow are the workhorses of numerous machine learning applications, ranging from object recognition to machine translation. While these frameworks are versatile and straightforward to use, training and inference in deep neural networks are resource-intensive in energy, compute, and memory. In contrast to recent works focusing on algorithmic enhancements, we introduce BrainSlug, a framework that transparently accelerates neural network workloads by changing the default layer-by-layer processing to a depth-first approach, reducing the amount of data required by the computations and thus improving the utilization of the available hardware caches. BrainSlug achieves performance improvements of up to 41.1% on CPUs and 35.7% on GPUs. These optimizations come at zero cost to the user: they require no hardware changes and only tiny adjustments to the software.
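The layer-by-layer versus depth-first distinction can be illustrated with a minimal NumPy sketch (a simplification for intuition, not BrainSlug's actual implementation; the layer functions and block size are hypothetical stand-ins). The default schedule pushes the entire tensor through each layer before moving to the next, so intermediate results are evicted from cache between layers; the depth-first schedule instead runs one cache-sized block through all layers while it is still cache-resident. Note this reordering is only valid as-is for element-wise layers:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def scale(x):
    return x * 2.0

# Hypothetical stand-ins for element-wise network layers.
LAYERS = [relu, scale, relu]

def layer_by_layer(data):
    # Default schedule: each layer touches the entire tensor before the
    # next layer starts, causing intermediate results to leave the cache.
    for layer in LAYERS:
        data = layer(data)
    return data

def depth_first(data, block_size=4):
    # Depth-first schedule: each block runs through every layer while it
    # is still cache-resident, reducing memory traffic.
    out = np.empty_like(data)
    for start in range(0, data.size, block_size):
        block = data[start:start + block_size]
        for layer in LAYERS:
            block = layer(block)
        out[start:start + block_size] = block
    return out

x = np.linspace(-2.0, 2.0, 16)
assert np.allclose(layer_by_layer(x.copy()), depth_first(x.copy()))
```

Both schedules compute the same result; only the traversal order, and hence the cache behavior, differs.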