Computational Model of Listener Behavior for Embodied Conversational Agents
|Institution:||Université Paris 8|
|Advisor(s):||Catherine Pelachaud, Isabella Poggi|
|Degree:||Ph.D. in Computer Science|
During a conversation, listeners do not passively assimilate all of the speaker's words; they actively participate in the interaction, providing information about how they feel and what they think of the speaker's speech. The speaker relies on signals emitted by the listener to know whether the listener is paying attention, understanding, agreeing, and so on. This informs the speaker of the success or failure of the communication and helps him to decide how to carry on with the interaction. In this thesis, to refer to signals provided by listeners, we adopt the term backchannel proposed by Yngve. We define backchannels as acoustic and non-verbal signals provided during the speaker's turn to exchange information about the communicative functions of contact, perception, understanding, and attitude. Backchannels are emitted in a non-intrusive way, that is, without interrupting the speaker's speech.
Two fundamental characteristics of backchannels are: (i) they can be emitted at different levels of intentionality; (ii) they can be reactive (deriving from an initial process of perception of the speaker's speech) or responsive (deriving from a more conscious evaluation).
A particular form of backchannel is the mimicry of the speaker's behavior. By mimicry, we mean the behavior displayed by an individual who imitates what another person does. This type of behavior has been shown to play an important role during conversations.
Given the importance of the listener's behavior, in this thesis we propose a model that generates this type of behavior for an Embodied Conversational Agent while it interacts with a user, with the aim of improving human-machine interaction.
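To make the idea concrete, the following is a minimal illustrative sketch (not the thesis's actual model) of how a backchannel generator for a listening agent might be organized. All names, thresholds, and probabilities here are hypothetical assumptions: a reactive trigger fires on cues such as a pause in the speaker's speech combined with the speaker's gaze, the emission is probabilistic so the listener stays non-intrusive, and with some probability the agent mimics the speaker's last gesture instead of producing a generic signal.

```python
import random

# Hypothetical repertoire of generic backchannel signals -- illustrative only.
BACKCHANNELS = ["head_nod", "mm-hm", "smile"]

def should_backchannel(pause_ms, speaker_gazing, rng):
    """Reactive trigger: a sufficiently long pause while the speaker
    gazes at the listener is treated as an opportunity point.
    The 200 ms threshold and 0.6 probability are assumed values."""
    if pause_ms < 200 or not speaker_gazing:
        return False
    # Probabilistic emission keeps the agent from reacting to every cue.
    return rng.random() < 0.6

def pick_signal(speaker_gesture, rng, mimicry_prob=0.3):
    """With probability mimicry_prob, mimic the speaker's last gesture
    (a special form of backchannel); otherwise emit a generic signal."""
    if speaker_gesture is not None and rng.random() < mimicry_prob:
        return speaker_gesture  # mimicry of the speaker's behavior
    return rng.choice(BACKCHANNELS)

# Usage: stream of (pause in ms, speaker gazing?, speaker's last gesture).
rng = random.Random(7)
events = [(350, True, "head_nod"), (100, True, None), (500, False, "smile")]
for pause, gaze, gesture in events:
    if should_backchannel(pause, gaze, rng):
        print(pick_signal(gesture, rng))
```

In a real system the trigger would draw on richer prosodic and visual features, and the choice of signal would depend on the communicative function (contact, perception, understanding, attitude) the agent intends to convey.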