
Towards High Performance Java-based Deep Learning Frameworks

Athanasios Stratikopoulos, Juan Fumero, Zoran Sevarac, Christos Kotselidis
The University of Manchester, Manchester, United Kingdom
arXiv:2001.04206 [cs.LG] (13 Jan 2020)

@misc{stratikopoulos2020high,
   title={Towards High Performance Java-based Deep Learning Frameworks},
   author={Athanasios Stratikopoulos and Juan Fumero and Zoran Sevarac and Christos Kotselidis},
   year={2020},
   eprint={2001.04206},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

The advent of modern cloud services, along with the huge volume of data produced on a daily basis, has created demand for fast and efficient data processing. This demand is common among numerous application domains, such as deep learning, data mining, and computer vision. Prior research has focused on employing hardware accelerators as a means to meet this demand. This trend has driven software development towards heterogeneous execution, and several modern computing systems incorporate a mixture of diverse computing components, including GPUs and FPGAs. However, specializing application code for heterogeneous execution is not a trivial task, as it requires developers to have hardware expertise in order to obtain high performance. The vast majority of existing deep learning frameworks that support heterogeneous acceleration rely on wrapper calls from a high-level programming language to a low-level accelerator backend, such as OpenCL, CUDA or HLS. In this paper, we employ TornadoVM, a state-of-the-art heterogeneous programming framework, to transparently accelerate Deep Netts, a Java-based deep learning framework. Our initial results demonstrate up to an 8x speedup when executing the back-propagation phase of network training on AMD GPUs, compared to the sequential execution of the original Deep Netts framework.
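To illustrate the programming model the abstract refers to, below is a minimal sketch of how a compute kernel can be expressed with TornadoVM's task API as it existed around the time of this paper. It uses a matrix-vector multiplication, the kind of operation that dominates back propagation; the class, method, and task names are illustrative and not taken from the paper itself.

import uk.ac.manchester.tornado.api.TaskSchedule;
import uk.ac.manchester.tornado.api.annotations.Parallel;

public class MxVExample {

    // Plain Java kernel: the @Parallel annotation marks the loop that
    // TornadoVM may JIT-compile to an OpenCL kernel for a GPU or FPGA.
    public static void mxv(float[] matrix, float[] vector, float[] result, int n) {
        for (@Parallel int i = 0; i < n; i++) {
            float sum = 0.0f;
            for (int j = 0; j < n; j++) {
                sum += matrix[i * n + j] * vector[j];
            }
            result[i] = sum;
        }
    }

    public static void main(String[] args) {
        final int n = 1024;
        float[] matrix = new float[n * n];
        float[] vector = new float[n];
        float[] result = new float[n];
        java.util.Arrays.fill(matrix, 1.0f);
        java.util.Arrays.fill(vector, 2.0f);

        // Build and run a task schedule; TornadoVM handles device selection,
        // host-device data transfers, and code generation transparently.
        new TaskSchedule("s0")
                .streamIn(matrix, vector)
                .task("t0", MxVExample::mxv, matrix, vector, result, n)
                .streamOut(result)
                .execute();

        System.out.println("result[0] = " + result[0]);
    }
}

If no accelerator is available, TornadoVM falls back to executing the plain Java code, which is what makes this approach attractive for framework authors compared to maintaining hand-written wrapper calls to OpenCL or CUDA.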
