Edited by Yorick Wilks
[Natural Language Processing 8] 2010
pp. 143–156
In this chapter we present our work toward building a conversational Companion. Conversing with a partner means being able to express one's mental and emotional state, whether as speaker or as listener; one must also adapt to one's partner's reactions to what one is saying. We have developed an interactive ECA platform, Greta (Pelachaud, 2005). It is a 3D virtual agent capable of producing expressive verbal and nonverbal behaviors as well as of listening. It can use its gaze, facial expressions, and gestures to convey a meaning, an attitude, or an emotion. The multimodal behaviors are tightly tied to each other: a synchronization scheme allows the agent to display a raised eyebrow or a beat gesture on a given word. Depending on its emotional or mental state, the agent may vary the quality of its behaviors: it may use a more or less extended gesture, and the arms can move at different speeds and with different accelerations (Mancini & Pelachaud, 2008). The agent can also display listener behavior (Bevacqua et al., 2008), interacting actively with users and/or other agents by providing appropriately timed backchannels. Interaction also requires that the interactants adapt to each other's behaviors, so the dynamic coupling between them needs to be considered (Prepin & Revel, 2007).
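The idea of varying behavior quality with emotional state can be illustrated with a small sketch. The parameter names and scaling factors below are hypothetical, loosely inspired by the expressivity dimensions reported for Greta (Mancini & Pelachaud, 2008); the actual system's animation model is more elaborate.

```python
from dataclasses import dataclass

@dataclass
class Expressivity:
    # Hypothetical expressivity parameters in [-1, 1], 0 = neutral,
    # loosely following the dimensions described for Greta.
    spatial_extent: float = 0.0   # how wide/extended the gesture is
    temporal_extent: float = 0.0  # how fast the arms move

def modulate_keyframes(keyframes, expr):
    """Scale one gesture phase by the agent's expressivity.

    keyframes: list of (time_s, amplitude) pairs.
    Amplitude is widened by spatial_extent; timing is compressed
    by temporal_extent (faster movement = shorter duration).
    The 0.5 / 0.3 gains are illustrative, not Greta's values.
    """
    amp_scale = 1.0 + 0.5 * expr.spatial_extent
    time_scale = 1.0 - 0.3 * expr.temporal_extent
    return [(round(t * time_scale, 3), round(a * amp_scale, 3))
            for t, a in keyframes]

# An aroused agent gestures wider and faster than a neutral one.
stroke = [(0.0, 0.0), (0.4, 1.0), (0.8, 0.0)]
neutral = modulate_keyframes(stroke, Expressivity())
excited = modulate_keyframes(stroke, Expressivity(spatial_extent=0.8,
                                                  temporal_extent=0.6))
```

Here `neutral` reproduces the input keyframes, while `excited` reaches a larger amplitude in less time, mimicking the kind of speed and extent variation the text describes.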