
Minerva: A Scalable and Highly Efficient Training Platform for Deep Learning

Minjie Wang, Tianjun Xiao, Jianpeng Li, Jiaxing Zhang, Chuntao Hong, Zheng Zhang
New York University
Microsoft Research, 2014

@Proceedings{export:232557,
   author    = {Minjie Wang and Tianjun Xiao and Jianpeng Li and Jiaxing Zhang and Chuntao Hong and Zheng Zhang},
   month     = {November},
   publisher = {ACM – Association for Computing Machinery},
   title     = {Minerva: A Scalable and Highly Efficient Training Platform for Deep Learning},
   url       = {http://research.microsoft.com/apps/pubs/default.aspx?id=232557},
   year      = {2014}
}


The tooling landscape of deep learning is fragmented by a growing gap between generic, productivity-oriented tools that optimize for algorithm development and task-specific ones that optimize for speed and scale. This creates an artificial barrier to bringing new innovations into real-world applications. Minerva addresses this issue with a layered design that provides language flexibility and execution efficiency simultaneously within one coherent framework. It proposes a matrix-based API, resulting in compact code and a Matlab-like, imperative, procedural coding style. The code is dynamically translated into an internal dataflow representation, which is then efficiently executed on different hardware. The same user code runs on a modern laptop or workstation, a high-end multi-core server, or a server cluster, with or without GPU acceleration, delivering performance and scalability better than or competitive with existing tools on each platform.
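To make the layered design concrete, the sketch below illustrates the general idea of translating imperative matrix code into a dataflow graph that is evaluated lazily. The `LazyMatrix` class and its methods are hypothetical illustrations of the technique, not Minerva's actual API; a real engine would schedule independent subgraphs across cores or GPUs rather than evaluate them recursively.

```python
# A minimal sketch of lazy dataflow construction, in the spirit of the
# paper's description: imperative matrix expressions build a graph
# instead of executing eagerly, and the graph runs only when a result
# is requested. All names here are illustrative, not Minerva's API.

import numpy as np

class LazyMatrix:
    """A dataflow node: either a constant array or a deferred operation."""

    def __init__(self, op, inputs=(), value=None):
        self.op = op          # operation name, or "const" for leaf nodes
        self.inputs = inputs  # upstream LazyMatrix nodes
        self.value = value    # concrete ndarray for leaf nodes

    @staticmethod
    def const(array):
        return LazyMatrix("const", value=np.asarray(array, dtype=float))

    # Overloaded operators record graph nodes instead of computing
    # immediately, preserving the Matlab-like imperative coding style.
    def __add__(self, other):
        return LazyMatrix("add", (self, other))

    def __matmul__(self, other):
        return LazyMatrix("matmul", (self, other))

    def evaluate(self, cache=None):
        """Walk the dataflow graph and compute this node's value."""
        cache = {} if cache is None else cache
        if id(self) in cache:
            return cache[id(self)]
        if self.op == "const":
            result = self.value
        else:
            args = [node.evaluate(cache) for node in self.inputs]
            if self.op == "add":
                result = args[0] + args[1]
            elif self.op == "matmul":
                result = args[0] @ args[1]
            else:
                raise ValueError(f"unknown op: {self.op}")
        cache[id(self)] = result
        return result

# Usage: the code reads like ordinary matrix arithmetic, but no
# floating-point work happens until evaluate() is called.
w = LazyMatrix.const(np.random.randn(4, 3))
x = LazyMatrix.const(np.random.randn(3, 2))
b = LazyMatrix.const(np.zeros((4, 2)))
y = w @ x + b          # builds a three-node dataflow graph
print(y.evaluate())    # triggers execution of the whole graph
```

Separating graph construction from execution is what lets one program target different backends: the same graph can be dispatched to CPU, GPU, or cluster schedulers without changing user code.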
