Multimodal collaboration and human-computer interaction

Zhengyou Zhang
Microsoft Research, Redmond, WA, USA
IEEE International Conference on Multimedia and Expo, 2009. ICME 2009

@conference{zhang2009multimodal,

   title={Multimodal collaboration and human-computer interaction},

   author={Zhang, Z.},

   booktitle={IEEE International Conference on Multimedia and Expo (ICME), 2009},

   pages={1596--1599},

   issn={1945-7871},

   year={2009},

   organization={IEEE}

}


The research effort at Microsoft Research on multimodal collaboration and human-computer interaction aims at developing tools that allow people across geographically distributed sites to interact collaboratively with an immersive experience. Our prototype systems consist of cameras, displays, speakers, microphones, computer-controllable lights, and/or input devices such as touch-sensitive surfaces, styluses, keyboards, and mice. They require real-time processing of huge amounts of data, including foreground-background subtraction, region-of-interest extraction, color estimation and correction, speaker detection, stereo matching, and 3D reconstruction and rendering, not to mention audio and video encoding and decoding that may involve multiple microphones and cameras. Some of this processing can be easily parallelized through general-purpose computation on graphics processing units (GPGPU) or on a multi-core machine, while other parts are not so trivial. In this extended summary, the author describes two projects: visual echo cancellation in a shared tele-collaborative space, and a distributed meeting capture and broadcasting system. During the talk, the author will also present two recent projects: a personal telepresence station and situated interaction.
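As a purely illustrative sketch (not code from the paper), the CUDA kernel below shows how one of the listed tasks, foreground-background subtraction, maps naturally onto the GPU: one thread per pixel compares the current frame against a static background model and writes a binary mask. The image size, threshold value, and buffer names are assumptions for the example.

// Hypothetical sketch: per-pixel background subtraction on the GPU.
// A pixel is marked foreground if it differs from the background model
// by more than `threshold`.

#include <cuda_runtime.h>
#include <stdlib.h>

__global__ void fgMask(const unsigned char* frame,
                       const unsigned char* background,
                       unsigned char* mask,
                       int numPixels, int threshold)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels) {
        int diff = abs((int)frame[i] - (int)background[i]);
        mask[i] = (diff > threshold) ? 255 : 0;   // 255 = foreground
    }
}

int main(void)
{
    const int w = 640, h = 480, n = w * h;   // assumed frame size
    unsigned char *dFrame, *dBackground, *dMask;
    cudaMalloc((void**)&dFrame, n);
    cudaMalloc((void**)&dBackground, n);
    cudaMalloc((void**)&dMask, n);
    // ... copy the current frame and background model to the device here ...
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    fgMask<<<blocks, threads>>>(dFrame, dBackground, dMask, n, 25);
    cudaDeviceSynchronize();
    cudaFree(dFrame); cudaFree(dBackground); cudaFree(dMask);
    return 0;
}

Because each pixel is processed independently, this kind of task parallelizes trivially; the stereo matching and 3D reconstruction steps mentioned in the abstract involve data dependencies and are harder to map to the GPU, which is the distinction the abstract draws.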