
Computational Audiovisual Scene Analysis

by Rujiao Yan




Institution: Universität Bielefeld
Department:
Degree: PhD
Year: 2014
Record ID: 1118571
Full text PDF: http://pub.uni-bielefeld.de/publication/2695386


Abstract

In most real-world situations, a robot interacts with multiple people, and understanding the resulting dialogs is essential. However, most existing human-robot interaction systems lack dialog scene analysis: either only one speaker can talk with the robot, or each speaker must wear a dedicated microphone or headset. The goal of Computational AudioVisual Scene Analysis (CAVSA) is therefore to make dialogs between humans and robots more natural and flexible. The CAVSA system is able to learn how many speakers are in the scene, where they are, and who is currently speaking. CAVSA is a challenging task due to the complexity of dialog scenarios. First, the speakers are unknown in advance, so no database is available for training high-level features to recognize their faces or voices beforehand. Second, people can enter and leave the scene at any time, may move continuously, and can even change their locations while outside the camera's field of view. Third, the robot cannot see all people at the same time, owing to its limited camera field of view and its head movements. Moreover, a sound may belong to a person who stands outside the camera's field of view and has never been seen. I will show that the CAVSA system is able to assign words to the corresponding speakers. A speaker is recognized again after leaving and re-entering the scene or after changing position, even when a new person has appeared in the meantime.
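To make the task concrete, the following toy Python sketch illustrates the kind of state such a system must maintain: a dynamically growing set of speaker hypotheses updated from both face detections and sound localizations, where an audio observation may create a speaker who has never been seen. This is purely illustrative and is not the method developed in the thesis; all names (SceneModel, Speaker, observe_face, observe_voice), the azimuth-only representation, and the nearest-neighbor association rule are assumptions made for this sketch.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Speaker:
    ident: int
    azimuth: float        # estimated horizontal direction in degrees
    is_speaking: bool = False

class SceneModel:
    """Toy audiovisual scene model: grows a set of speaker hypotheses."""

    def __init__(self, match_threshold_deg: float = 15.0):
        self.speakers: List[Speaker] = []
        self.match_threshold = match_threshold_deg
        self._next_id = 0

    def _associate(self, azimuth: float) -> Optional[Speaker]:
        # Nearest-neighbor association: reuse an existing hypothesis
        # if the observation falls close enough to it.
        best = min(self.speakers,
                   key=lambda s: abs(s.azimuth - azimuth),
                   default=None)
        if best is not None and abs(best.azimuth - azimuth) <= self.match_threshold:
            return best
        return None

    def observe_face(self, azimuth: float) -> Speaker:
        # A face was detected at the given direction.
        speaker = self._associate(azimuth)
        if speaker is None:               # newly appearing person
            speaker = Speaker(self._next_id, azimuth)
            self._next_id += 1
            self.speakers.append(speaker)
        speaker.azimuth = azimuth         # the person may have moved
        return speaker

    def observe_voice(self, azimuth: float) -> Speaker:
        # A sound was localized at the given direction; it may belong
        # to a person who has never been seen by the camera.
        speaker = self.observe_face(azimuth)
        for s in self.speakers:
            s.is_speaking = (s is speaker)
        return speaker

scene = SceneModel()
scene.observe_face(-30.0)           # a person appears on the left
talker = scene.observe_voice(32.0)  # an unseen person speaks on the right
print(len(scene.speakers), talker.ident, talker.is_speaking)  # -> 2 1 True

The sketch deliberately uses the same association logic for faces and voices, so a localized sound either reinforces an existing speaker hypothesis or spawns a new one outside the camera's view, mirroring the scenario the abstract describes.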