Aurally and visually enhanced audio search with SoundTorch
Hochschule Bremen (University of Applied Sciences), Bremen, Germany
In Proceedings of the 27th international conference extended abstracts on Human factors in computing systems (2009), pp. 3241-3246
@conference{heise2009aurally,
title={Aurally and visually enhanced audio search with {SoundTorch}},
author={Heise, S. and Hlatky, M. and Loviscach, J.},
booktitle={Proceedings of the 27th international conference extended abstracts on Human factors in computing systems},
pages={3241--3246},
year={2009},
organization={ACM}
}
Finding a specific or an artistically appropriate sound in a vast collection comprising thousands of audio files containing recordings of, say, footsteps, gunshots, and thunderclaps easily becomes a chore. To improve on this, we have developed an enhanced auditory and graphical zoomable user interface that leverages the human brain’s capability to single out sounds from a spatial mixture: The user shines a virtual flashlight onto an automatically created 2D arrangement of icons that represent sounds. All sounds within the light cone are played back in parallel through a surround sound system. A GPU-accelerated visualization facilitates identifying the icons on the screen with acoustic items in the dense cloud of sound. Tests show that the user can pick the “right” sounds more quickly and/or with more fun than with standard file-by-file auditioning.
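The core interaction described in the abstract — selecting all sound icons that fall inside a virtual flashlight's light cone on a 2D layout — can be sketched as a simple geometric test. This is a minimal illustration, not code from the paper; the function name, icon list, and parameters are hypothetical.

```python
import math

def sounds_in_cone(sounds, apex, direction, half_angle_deg, max_dist):
    """Return names of sound icons inside a 2D 'flashlight' cone.

    sounds: list of (name, (x, y)) icon positions (hypothetical 2D layout)
    apex: (x, y) cone origin; direction: (dx, dy) aim vector
    half_angle_deg: cone half-angle in degrees; max_dist: cone reach
    """
    half_angle = math.radians(half_angle_deg)
    dlen = math.hypot(direction[0], direction[1])
    ux, uy = direction[0] / dlen, direction[1] / dlen  # unit aim vector
    hits = []
    for name, (x, y) in sounds:
        vx, vy = x - apex[0], y - apex[1]
        dist = math.hypot(vx, vy)
        if dist == 0 or dist > max_dist:
            continue
        # angle between the aim vector and the direction to the icon
        cos_a = (vx * ux + vy * uy) / dist
        if math.acos(max(-1.0, min(1.0, cos_a))) <= half_angle:
            hits.append(name)
    return hits

icons = [("footstep", (1.0, 0.1)),
         ("gunshot", (0.5, 2.0)),
         ("thunder", (2.0, -0.2))]
# Aim the cone along the positive x-axis with a 20-degree half-angle.
print(sounds_in_cone(icons, (0.0, 0.0), (1.0, 0.0), 20.0, 3.0))
# → ['footstep', 'thunder']
```

In the actual system, each selected icon would then be routed to a spatial (surround) audio mix so the listener can exploit the cocktail-party effect to single out the desired sound.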
November 27, 2010 by hgpu