
Intel nGraph: An Intermediate Representation, Compiler, and Executor for Deep Learning

Scott Cyphers, Arjun K. Bansal, Anahita Bhiwandiwalla, Jayaram Bobba, Matthew Brookhart, Avijit Chakraborty, Will Constable, Christian Convey, Leona Cook, Omar Kanawi, Robert Kimball, Jason Knight, Nikolay Korovaiko, Varun Kumar, Yixing Lao, Christopher R. Lishka, Jaikrishnan Menon, Jennifer Myers, Sandeep Aswath Narayana, Adam Procter, Tristan J. Webb
Intel
arXiv:1801.08058 [cs.DC] (30 Jan 2018)

@article{cyphers2018intel,
   title={Intel nGraph: An Intermediate Representation, Compiler, and Executor for Deep Learning},
   author={Cyphers, Scott and Bansal, Arjun K. and Bhiwandiwalla, Anahita and Bobba, Jayaram and Brookhart, Matthew and Chakraborty, Avijit and Constable, Will and Convey, Christian and Cook, Leona and Kanawi, Omar and Kimball, Robert and Knight, Jason and Korovaiko, Nikolay and Kumar, Varun and Lao, Yixing and Lishka, Christopher R. and Menon, Jaikrishnan and Myers, Jennifer and Narayana, Sandeep Aswath and Procter, Adam and Webb, Tristan J.},
   year={2018},
   month={jan},
   eprint={1801.08058},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


The Deep Learning (DL) community sees many novel topologies published each year. Achieving high performance on each new topology remains challenging, as each requires some level of manual effort. This issue is compounded by the proliferation of frameworks and hardware platforms. The current approach, which we call "direct optimization", requires deep changes within each framework to improve the training performance for each hardware backend (CPUs, GPUs, FPGAs, ASICs) and requires $\mathcal{O}(fp)$ effort, where $f$ is the number of frameworks and $p$ is the number of platforms. While optimized kernels for deep-learning primitives are provided via libraries like the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN), there are several compiler-inspired ways in which performance can be further optimized. Building on our experience creating neon (a fast deep learning library on GPUs), we developed Intel nGraph, a soon-to-be-open-sourced C++ library that simplifies the realization of optimized deep learning performance across frameworks and hardware platforms. Initially supported frameworks include TensorFlow, MXNet, and the Intel neon framework. Initial backends are Intel Architecture CPUs (CPU), the Intel® Nervana™ Neural Network Processor (NNP), and NVIDIA GPUs. Currently supported compiler optimizations include efficient memory management and data layout abstraction. In this paper, we describe our overall architecture and its core components. In the future, we envision extending nGraph API support to a wider range of frameworks, hardware (including FPGAs and ASICs), and compiler optimizations (training versus inference optimizations, multi-node and multi-device scaling via efficient sub-graph partitioning, and hardware-specific compounding of operations).
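To make the IR/compiler/executor split concrete, the sketch below builds a small computation graph (the intermediate representation), compiles it for a CPU backend, and executes it on concrete tensors. The implied payoff is that with a shared IR, each of the $f$ frameworks needs only one bridge and each of the $p$ platforms only one backend, roughly $\mathcal{O}(f+p)$ effort instead of $\mathcal{O}(fp)$. This is a minimal sketch assuming an nGraph-style C++ API; names and signatures such as op::Parameter, op::Add, Function, and runtime::Backend follow the later open-source release and may differ across versions.

    // Minimal sketch: build an IR graph for f(a, b) = (a + b) * b,
    // compile it for a CPU backend, and run it on concrete tensors.
    // API names and signatures are illustrative and vary across nGraph versions.
    #include <ngraph/ngraph.hpp>
    #include <iostream>
    #include <vector>

    using namespace ngraph;

    int main()
    {
        // A framework bridge constructs a backend-neutral dataflow graph (the IR).
        Shape shape{2, 2};
        auto a = std::make_shared<op::Parameter>(element::f32, shape);
        auto b = std::make_shared<op::Parameter>(element::f32, shape);
        auto expr = std::make_shared<op::Multiply>(
            std::make_shared<op::Add>(a, b), b);
        auto f = std::make_shared<Function>(expr, ParameterVector{a, b});

        // The backend compiles the graph, applying optimizations such as
        // memory management and data-layout selection.
        auto backend = runtime::Backend::create("CPU");
        auto exec = backend->compile(f);

        // Bind concrete tensors and execute.
        auto ta = backend->create_tensor(element::f32, shape);
        auto tb = backend->create_tensor(element::f32, shape);
        auto tr = backend->create_tensor(element::f32, shape);
        std::vector<float> va{1, 2, 3, 4}, vb{5, 6, 7, 8};
        ta->write(va.data(), 0, va.size() * sizeof(float));
        tb->write(vb.data(), 0, vb.size() * sizeof(float));

        exec->call({tr}, {ta, tb});

        std::vector<float> result(4);
        tr->read(result.data(), 0, result.size() * sizeof(float));
        for (float v : result)
            std::cout << v << " "; // expected: 30 48 70 96
        std::cout << std::endl;
    }

A framework bridge would emit such a graph from, say, a TensorFlow or MXNet subgraph, leaving memory management and data-layout decisions to the backend compiler rather than re-implementing them per framework.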