
dMath: A Scalable Linear Algebra and Math Library for Heterogeneous GP-GPU Architectures

Steven Eliuk, Cameron Upright, Anthony Skjellum
Samsung Electronics, Computing Science Innovation Center, SRA-SV, 665 Clyde Avenue, Mountain View, CA 94043
arXiv:1604.01416 [cs.NE] (5 Apr 2016)

@article{eliuk2016dmath,
   title={dMath: A Scalable Linear Algebra and Math Library for Heterogeneous GP-GPU Architectures},
   author={Eliuk, Steven and Upright, Cameron and Skjellum, Anthony},
   year={2016},
   month={apr},
   eprint={1604.01416},
   archivePrefix={arXiv},
   primaryClass={cs.NE}
}


This paper presents dMath, a new scalable parallel math library that demonstrates leading scaling for deep learning when using intranode or internode hybrid parallelism. dMath provides easy-to-use distributed base primitives and a variety of domain-specific algorithms, including matrix multiplication and convolutions, allowing rapid development of highly scalable applications such as Deep Neural Networks (DNNs). Previously, developers were restricted to libraries that provided effective primitives for only a single GPU, such as NVIDIA cuBLAS and cuDNN, or the DNN primitives from Nervana's Neon framework. Development of HPC software is difficult, labor-intensive work requiring a unique skill set; dMath allows a wide range of developers to exploit parallel and distributed hardware easily. One contribution of this approach is that data is stored persistently on the GPU, avoiding costly transfers between host and device. Advanced memory-management techniques are employed, including caching of transferred data and memory reuse through pooling. A key contribution of dMath is that it delivers performance, portability, and productivity in its specific domain of support, enabling algorithm and application programmers to solve problems quickly without managing the significant complexity associated with multi-level parallelism.
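To make the memory-management claim concrete, below is a minimal sketch of the device-memory pooling pattern the abstract describes: buffers stay resident on the GPU, and released allocations are recycled instead of being returned to the driver, so repeated kernel launches avoid both host-device transfers and cudaMalloc overhead. This is an illustrative assumption of how such pooling commonly works, not the actual dMath API; all class and member names here are hypothetical.

```cpp
// Hypothetical sketch of GPU memory pooling, not the dMath API.
#include <cuda_runtime.h>
#include <cstddef>
#include <map>
#include <vector>

class DevicePool {
public:
    // Acquire a device buffer of at least `bytes`, reusing a cached block if one fits.
    void* acquire(size_t bytes) {
        auto it = free_.lower_bound(bytes);
        if (it != free_.end()) {                 // fast path: recycle a cached block
            void* p = it->second.back();
            it->second.pop_back();
            if (it->second.empty()) free_.erase(it);
            return p;
        }
        void* p = nullptr;
        cudaMalloc(&p, bytes);                   // slow path: allocate from the driver
        size_[p] = bytes;
        return p;
    }

    // Return a buffer to the pool; device memory is kept for reuse, not freed.
    void release(void* p) {
        free_[size_[p]].push_back(p);
    }

    // Only at teardown is device memory actually handed back to CUDA.
    ~DevicePool() {
        for (auto& kv : size_) cudaFree(kv.first);
    }

private:
    std::map<size_t, std::vector<void*>> free_;  // block size -> cached device blocks
    std::map<void*, size_t> size_;               // device pointer -> its allocated size
};
```

Under this pattern, intermediate tensors in a DNN training loop can be acquired and released every iteration at negligible cost, which is one plausible reading of the "memory reuse through pooling" described above.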
