Caffe con Troll: Shallow Ideas to Speed Up Deep Learning
Stanford University
arXiv:1504.04343 [cs.LG], (16 Apr 2015)
@article{abuzaid2015caffe,
  title={Caffe con Troll: Shallow Ideas to Speed Up Deep Learning},
  author={Abuzaid, Firas and Hadjis, Stefan and Zhang, Ce and R{\'e}, Christopher},
  year={2015},
  month={apr},
  eprint={1504.04343},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
We present Caffe con Troll (CcT), a fully compatible end-to-end version of the popular framework Caffe with rebuilt internals. We built CcT to examine the performance characteristics of training and deploying general-purpose convolutional neural networks across different hardware architectures. We find that, by employing standard batching optimizations for CPU training, we achieve a throughput improvement of up to an order of magnitude over Caffe on convolutional layers. Moreover, with these improvements, the end-to-end training time for CNNs is directly proportional to the FLOPS delivered by the CPU, which enables us to train hybrid CPU-GPU systems for CNNs efficiently.
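For readers unfamiliar with the batching idea mentioned above, the sketch below illustrates the general technique of lowering a convolution to a single large matrix multiply ("im2col + GEMM") across a whole mini-batch, so the CPU's BLAS library sees one big GEMM instead of many small ones. This is a minimal, illustrative NumPy example, not CcT's actual implementation: the function names (`im2col`, `conv_batched_gemm`) are made up for illustration, and it assumes stride 1 with no padding.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll kh x kw patches of one image x (C, H, W) into columns (C*kh*kw, OH*OW)."""
    c, h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, oh * ow), dtype=x.dtype)
    idx = 0
    for ci in range(c):
        for i in range(kh):
            for j in range(kw):
                cols[idx] = x[ci, i:i + oh, j:j + ow].reshape(-1)
                idx += 1
    return cols

def conv_batched_gemm(batch, kernels):
    """batch: (N, C, H, W); kernels: (K, C, kh, kw) -> output (N, K, OH, OW).
    The lowered matrices of all N images are concatenated so that a single
    GEMM covers the entire mini-batch."""
    n, c, h, w = batch.shape
    k, _, kh, kw = kernels.shape
    oh, ow = h - kh + 1, w - kw + 1
    # One wide matrix holding the lowered patches of every image in the batch.
    lowered = np.hstack([im2col(batch[i], kh, kw) for i in range(n)])
    weights = kernels.reshape(k, c * kh * kw)
    # Single (K x C*kh*kw) @ (C*kh*kw x N*OH*OW) matrix multiply for the whole batch.
    out = weights @ lowered
    return out.reshape(k, n, oh * ow).transpose(1, 0, 2).reshape(n, k, oh, ow)

# Example usage with hypothetical sizes:
x = np.random.rand(8, 3, 32, 32).astype(np.float32)   # mini-batch of 8 RGB 32x32 images
w = np.random.rand(16, 3, 5, 5).astype(np.float32)    # 16 filters of size 5x5
y = conv_batched_gemm(x, w)                            # y.shape == (8, 16, 28, 28)
```

The point of batching the lowering is that a single large GEMM keeps the CPU's vector units and caches busy, which is one reason convolutional-layer throughput can track the FLOPS the CPU delivers.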
April 20, 2015 by hgpu