Improving GPU particle filter with shader model 3.0 for visual tracking

Antonio S. Montemayor, Bryson R. Payne, Juan J. Pantrigo, Raul Cabido, Angel Sanchez, Felipe Fernandez
URJC (Madrid, SPAIN)
In SIGGRAPH ’06: ACM SIGGRAPH 2006 Research posters (2006)

@conference{montemayor2006improving,

   title={Improving GPU particle filter with shader model 3.0 for visual tracking},

   author={Montemayor, A.S. and Payne, B.R. and Pantrigo, J.J. and Cabido, R. and S{\'a}nchez, {\'A}. and Fern{\'a}ndez, F.},

   booktitle={ACM SIGGRAPH 2006 Research posters},

   pages={55},

   isbn={1595933646},

   year={2006},

   organization={ACM}

}


Human-Computer Interaction is evolving towards non-contact devices using perceptual user interfaces. Recent research in human motion analysis and visual object tracking makes use of the Particle Filter (PF) framework. The PF algorithm enables the modeling of a stochastic process with an arbitrary probability density function by approximating it numerically with a set of samples called particles. The DirectX Shader Model is a common framework for accessing graphics hardware features in terms of shading functionality. In particular, Shader Model 3.0 compliant graphics cards must support features such as dynamic branching, longer shader programs and texture lookups from vertex buffers, among others. In this work, we propose new improvements on previous CPU/GPU Particle Filter frameworks [Montemayor et al. 2004; Lanvin et al. 2005]. In particular, we have reduced bandwidth requirements in the data allocation stage by using GPU texture reads instead of CPU-GPU memory transfers. More importantly, the new features of Shader Model 3.0 allow us to move all the previously CPU-bound particle filtering stages to the GPU, keeping the entire computation on the video card and avoiding expensive data readback.
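To make the PF framework described above concrete, the following is a minimal CPU sketch of one bootstrap particle filter iteration (predict, weight, resample) for a 1-D state; the random-walk motion model, Gaussian likelihood, and all parameter values are illustrative assumptions, not the authors' GPU implementation.

```python
import numpy as np

def particle_filter_step(particles, weights, observation, rng,
                         process_noise=0.5, obs_noise=1.0):
    """One predict-weight-resample iteration of a bootstrap particle filter.

    Assumes a random-walk motion model and a Gaussian observation
    likelihood; both choices are illustrative.
    """
    n = len(particles)
    # Predict: propagate each particle through the motion model.
    particles = particles + rng.normal(0.0, process_noise, size=n)
    # Weight: evaluate the observation likelihood for each particle.
    weights = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights, then
    # reset to uniform weights.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# Track a static hidden state from noisy observations.
rng = np.random.default_rng(0)
particles = rng.uniform(-10.0, 10.0, size=500)
weights = np.full(500, 1.0 / 500)
true_state = 3.0
for _ in range(20):
    obs = true_state + rng.normal(0.0, 1.0)
    particles, weights = particle_filter_step(particles, weights, obs, rng)
estimate = particles.mean()
```

On the GPU, the particle set would live in textures and the per-particle predict and weight steps map naturally to fragment programs; Shader Model 3.0 features such as dynamic branching and vertex texture fetch are what make the remaining stages (e.g. resampling) feasible on the card without readback.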

* * *


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
