GPU-based real-time acoustical occlusion modeling
University of Ontario Institute of Technology, Faculty of Business and Information Technology, Health Education Technology Research Unit (HETRU), 2000 Simcoe St. North, L1H 7K4, Oshawa, ON, Canada
Virtual Reality, Volume 14, Number 3, 183-196
@article{cowan2010gpu,
title={GPU-based real-time acoustical occlusion modeling},
author={Cowan, B. and Kapralos, B.},
journal={Virtual Reality},
volume={14},
number={3},
pages={183--196},
issn={1359-4338},
year={2010},
publisher={Springer}
}
In typical environments, the direct path between a sound source and a listener is often occluded. However, due to the phenomenon of diffraction, sound still reaches the listener by “bending” around an obstacle that lies directly in the line of straight propagation. Modeling occlusion/diffraction effects is a difficult and computationally intensive task and is thus generally ignored in virtual reality and videogame applications. Driven by the gaming industry, consumer computer graphics hardware, and the graphics processing unit (GPU) in particular, has greatly advanced in recent years, outperforming the computational capacity of central processing units on highly parallel workloads. Given the affordability, widespread use, and availability of computer graphics hardware, here we describe a computationally efficient GPU-based method that approximates acoustical occlusion/diffraction effects in real time. Although the method has been developed primarily for videogames, where occlusion/diffraction is typically overlooked, it is relevant for dynamic and interactive virtual environments as well.
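The abstract does not detail the authors' GPU algorithm, but the basic idea of occlusion-based attenuation can be illustrated with a minimal CPU-side sketch. The code below is a hypothetical simplification (not the paper's method): it measures how deeply the source-to-listener segment cuts through a spherical obstacle and maps that to a gain factor, so a partially occluded path is attenuated rather than silenced, crudely mimicking diffraction around the obstacle. All function and parameter names are illustrative assumptions.

```python
import math

def occlusion_gain(source, listener, obstacle_center, obstacle_radius):
    """Return a gain in [0, 1] for the direct source-listener path.

    1.0 means the path clears the (spherical) obstacle entirely;
    values below 1.0 attenuate the sound as the path cuts deeper
    through the obstacle, a crude stand-in for diffraction loss.
    Not the authors' GPU method; a hypothetical illustration only.
    """
    sx, sy, sz = source
    lx, ly, lz = listener
    cx, cy, cz = obstacle_center
    dx, dy, dz = lx - sx, ly - sy, lz - sz
    seg_len2 = dx * dx + dy * dy + dz * dz
    if seg_len2 == 0.0:
        t = 0.0  # source and listener coincide
    else:
        # Parameter of the closest point on the segment to the obstacle center
        t = ((cx - sx) * dx + (cy - sy) * dy + (cz - sz) * dz) / seg_len2
        t = max(0.0, min(1.0, t))
    px, py, pz = sx + t * dx, sy + t * dy, sz + t * dz
    dist = math.sqrt((cx - px) ** 2 + (cy - py) ** 2 + (cz - pz) ** 2)
    if dist >= obstacle_radius:
        return 1.0  # direct line of sight is clear
    # Linear falloff: gain shrinks as the path passes nearer the center
    return dist / obstacle_radius
```

A real-time GPU version would evaluate such visibility queries in parallel (e.g., in a shader) against full scene geometry rather than a single analytic sphere, and would typically drive a frequency-dependent filter instead of a single broadband gain.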
December 14, 2010 by hgpu