TY - JOUR
T1 - Visuo-auditory Multimodal Emotional Structure to Improve Human-Robot-Interaction
AU - Prado, J. Augusto
AU - Simplício, Carlos
AU - Lori, Nicolás F.
AU - Dias, Jorge
N1 - Funding Information:
The authors gratefully acknowledge support from Institute of Systems and Robotics at University of Coimbra (ISR-UC), Portuguese Foundation for Science and Technology (FCT) [SFRH/BD/60954/2009, Ciencia2007, PTDC/SAU-BEB/100147/2008], and Polytechnical Institute of Leiria (IPL).
PY - 2012/1
Y1 - 2012/1
N2 - We propose an approach to analyze and synthesize a set of human facial and vocal expressions, and then use the classified expressions to decide the robot's response in a human-robot interaction. During a human-to-human conversation, a person senses the interlocutor's face and voice, perceives her/his emotional expressions, and processes this information to decide which response to give. Moreover, observed emotions are taken into account, and the response may be aggressive, funny (henceforth meaning humorous), or simply neutral, depending not only on the observed emotions but also on the personality of the person. The purpose of our proposed structure is to endow robots with the capability to model human emotions; to that end, several subproblems need to be solved: feature extraction, classification, decision, and synthesis. In the proposed approach we integrate two classifiers for emotion recognition from audio and video, and then apply a new method to fuse their outputs with the social behavior profile. To keep the person engaged in the interaction, after each iteration of analysis the robot synthesizes a human voice with both lip synchronization and facial expressions. The social behavior profile governs the personality of the robot. The structure and workflow of the synthesis and decision stages are addressed, and the Bayesian networks are discussed. We also study how to analyze and synthesize emotion from facial and vocal expressions. A new probabilistic structure that enables a higher level of interaction between a human and a robot is proposed.
KW - Auditory perception
KW - Bayesian networks
KW - Emotion recognition
KW - Multimodal interaction
KW - Social behavior profile
KW - Visual perception
UR - http://www.scopus.com/inward/record.url?scp=84856799639&partnerID=8YFLogxK
U2 - 10.1007/s12369-011-0134-7
DO - 10.1007/s12369-011-0134-7
M3 - Article
AN - SCOPUS:84856799639
SN - 1875-4791
VL - 4
SP - 29
EP - 51
JO - International Journal of Social Robotics
JF - International Journal of Social Robotics
IS - 1
ER -