
Improving the Neural GPU Architecture for Algorithm Learning

Karlis Freivalds, Renars Liepins
Institute of Mathematics and Computer Science, University of Latvia, Raina bulvaris 29, Riga, LV-1459, Latvia
arXiv:1702.08727 [cs.NE], (28 Feb 2017)

@article{freivalds2017improving,
   title={Improving the Neural GPU Architecture for Algorithm Learning},
   author={Freivalds, Karlis and Liepins, Renars},
   year={2017},
   month={feb},
   eprint={1702.08727},
   archivePrefix={arXiv},
   primaryClass={cs.NE}
}

Algorithm learning is a core problem in artificial intelligence with significant implications for the level of automation that machines can achieve. Recently, deep learning methods have emerged for synthesizing an algorithm from its input-output examples, the most successful being the Neural GPU, which is capable of learning multiplication. We present several improvements to the Neural GPU that substantially reduce training time and improve generalization. We introduce a generally applicable technique of using hard nonlinearities with a saturation cost. We also introduce a technique of diagonal gates that can be applied to active-memory models. The proposed architecture is the first capable of learning decimal multiplication end-to-end.
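A minimal Python sketch of the saturation-cost idea mentioned in the abstract, assuming it penalizes pre-activations that land in the saturated (zero-gradient) region of a hard nonlinearity such as hard tanh; the function names, margin, and weight below are illustrative assumptions, not taken from the paper:

import numpy as np

def hard_tanh(x):
    # Hard nonlinearity: linear in [-1, 1], clipped (saturated) outside.
    return np.clip(x, -1.0, 1.0)

def saturation_cost(x, margin=0.99):
    # Hypothetical penalty: charges pre-activations that exceed the linear
    # range, discouraging units from getting stuck where the gradient of
    # the hard nonlinearity is zero.
    return np.maximum(np.abs(x) - margin, 0.0).mean()

# Usage sketch: add the penalty, scaled by a small weight, to the task loss.
pre_activations = np.random.randn(4, 8) * 2.0
activations = hard_tanh(pre_activations)
aux_cost = 0.001 * saturation_cost(pre_activations)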