Multi-scale neural texture classification using the GPU as a stream processing engine

M. Martinez-Zarzuela, F. Diaz-Pernas, M. Anton-Rodriguez, J. Diez-Higuera, D. Gonzalez-Ortega, D. Boto-Giralda, F. Lopez-Gonzalez, I. De La Torre
Edificio de Tecnologias de la Informacion y las Telecomunicaciones, Paseo Belen 15, 47011 Valladolid, Spain
Machine Vision and Applications (1 March 2010)

@article{martinez2010multi,
   title={Multi-scale neural texture classification using the GPU as a stream processing engine},
   author={Mart{\'i}nez-Zarzuela, M. and D{\'i}az-Pernas, F. J. and Ant{\'o}n-Rodr{\'i}guez, M. and D{\'i}ez-Higuera, J. F. and Gonz{\'a}lez-Ortega, D. and Boto-Giralda, D. and L{\'o}pez-Gonz{\'a}lez, F. and De La Torre, I.},
   journal={Machine Vision and Applications},
   pages={1--20},
   issn={0932-8092},
   year={2010},
   publisher={Springer}
}

This paper presents a neural architecture for texture classification that runs on the Graphics Processing Unit (GPU) under a stream processing model. Textural feature extraction is performed at three different scales; it is based on the computations that take place in the mammalian primary visual pathway and incorporates both structural and color information. Feature vector classification is performed by a fuzzy neural network that introduces pattern analysis for orientation-invariant texture recognition. Performance tests were carried out on a varying number of textures and on the entire VisTex database. The intrinsic parallelism of the neural system led us to implement the whole architecture on GPUs, providing a speed-up of between 16× and 25× over a CPU implementation when classifying textures of sizes 128×128 and 512×512 px. A comparison of classification rates with other methods is included and demonstrates the strong performance of the architecture: an average classification rate of 85.2% is obtained for 167 textures of size 512×512 px.
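To make the stream processing idea concrete, the sketch below treats each pixel of a texture patch as a stream element and lets one GPU thread compute its responses to a small oriented filter bank at three scales. This is a minimal CUDA illustration under assumed parameters (radii, four orientations, simple difference filters), not the authors' feature extraction stage or their actual kernels.

```cuda
// Illustrative sketch only: per-pixel multi-scale oriented responses computed in a
// data-parallel (stream) fashion. Filter bank and sizes are assumptions, not the
// paper's biologically inspired operators.
#include <cuda_runtime.h>
#include <cstdio>

#define N_SCALES 3
#define N_ORIENT 4

__global__ void multiScaleFeatures(const float* img, float* features,
                                   int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    const int radii[N_SCALES] = {1, 2, 4};          // three scales (assumed)
    const int dx[N_ORIENT] = {1, 0, 1,  1};         // horizontal, vertical, diagonals
    const int dy[N_ORIENT] = {0, 1, 1, -1};

    for (int s = 0; s < N_SCALES; ++s) {
        int r = radii[s];
        for (int o = 0; o < N_ORIENT; ++o) {
            // Clamp sample coordinates to the image border.
            int xp = min(max(x + r * dx[o], 0), width  - 1);
            int yp = min(max(y + r * dy[o], 0), height - 1);
            int xm = min(max(x - r * dx[o], 0), width  - 1);
            int ym = min(max(y - r * dy[o], 0), height - 1);
            // Oriented difference response at this scale.
            float resp = fabsf(img[yp * width + xp] - img[ym * width + xm]);
            features[((s * N_ORIENT + o) * height + y) * width + x] = resp;
        }
    }
}

int main()
{
    const int W = 128, H = 128;                     // smallest patch size tested in the paper
    size_t imgBytes  = W * H * sizeof(float);
    size_t featBytes = N_SCALES * N_ORIENT * W * H * sizeof(float);

    float* hImg = (float*)malloc(imgBytes);
    for (int i = 0; i < W * H; ++i) hImg[i] = (float)(i % 255) / 255.0f;  // dummy texture

    float *dImg, *dFeat;
    cudaMalloc(&dImg, imgBytes);
    cudaMalloc(&dFeat, featBytes);
    cudaMemcpy(dImg, hImg, imgBytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    multiScaleFeatures<<<grid, block>>>(dImg, dFeat, W, H);
    cudaDeviceSynchronize();

    printf("Computed %d feature maps for a %dx%d texture\n", N_SCALES * N_ORIENT, W, H);
    cudaFree(dImg); cudaFree(dFeat); free(hImg);
    return 0;
}
```

Because every pixel's feature vector is computed independently, the same launch pattern scales from 128×128 to 512×512 patches, which is where GPU speed-ups of the magnitude reported above typically come from.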
