
Discriminative Convolutional Sum-Product Networks on GPU

Tobias Hartmann
Rheinische Friedrich-Wilhelms-Universität Bonn
Rheinische Friedrich-Wilhelms-Universität Bonn, 2014

@phdthesis{hartmann2014friedrich,

   title={Discriminative Convolutional Sum-Product Networks on GPU},

   author={Hartmann, Tobias},

   year={2014},

   school={Rheinische Friedrich-Wilhelms-Universit{\"a}t Bonn}

}

Download (PDF) | View | Source


Sum-Product Networks (SPNs) are a deep architecture recently proposed for image classification and modeling. In contrast to the loopy graphical models commonly used in computer vision, exact inference and learning in SPNs are tractable. As long as consistency and completeness are ensured, an SPN allows the partition function and all marginals of the underlying graphical model to be computed efficiently. The previously proposed algorithms for generative and discriminative learning show good results on image classification benchmarks such as CIFAR-10. However, previous work did not learn image features from scratch; instead, it builds on dictionary learning, which makes the results less comparable. In this thesis we combined two deep learning methods, Convolutional Neural Networks and SPNs, for image classification. To this end, we proposed an SPN implementation that operates in log space for efficient computation on CPU and GPU and is built on top of convolutions. We found that some valid SPN architectures lose information about the locality of features, which significantly reduces their learning capability. Due to time constraints, we were not able to complete architectures that preserve this information. However, we were able to show that the convolutions within the network learn reasonable structures, demonstrating the viability of this approach. The implementation was evaluated on the image classification benchmarks MNIST and CIFAR-10, achieving classification errors on the test sets of 1.66% and 46.71%, respectively.
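As a rough illustration of the log-space evaluation the abstract refers to, the sketch below evaluates a tiny SPN layer (a product node and a sum node over child log-likelihoods, e.g. log-transformed convolution outputs) using the log-sum-exp trick. All shapes, weights, and function names are illustrative assumptions, not the thesis implementation.

```python
# Minimal NumPy sketch of log-space SPN node evaluation (illustrative only;
# shapes, names, and weights are assumptions, not the thesis implementation).
import numpy as np

def log_product_node(log_children):
    """Product node: in log space, a product becomes a sum of log-values."""
    return np.sum(log_children, axis=0)

def log_sum_node(log_children, log_weights):
    """Sum node: weighted sum evaluated stably via the log-sum-exp trick."""
    z = log_weights[:, None] + log_children      # add log-weights to each child
    m = np.max(z, axis=0)                        # subtract the max for stability
    return m + np.log(np.sum(np.exp(z - m), axis=0))

# Toy example: 3 children, each providing 4 "pixel" log-likelihoods
rng = np.random.default_rng(0)
log_feats = np.log(rng.uniform(0.1, 1.0, size=(3, 4)))   # e.g. log of conv outputs
weights = np.array([0.5, 0.3, 0.2])                       # sum-node weights (normalized)

print(log_product_node(log_feats))
print(log_sum_node(log_feats, np.log(weights)))
```

Working in log space this way avoids the numerical underflow that would otherwise occur when many small probabilities are multiplied along deep SPN paths, which is why it is attractive for both CPU and GPU implementations.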

