Robot emotional state through Bayesian visuo-auditory perception

José Augusto Prado, Carlos Simplício, Jorge Dias

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper we focus on auditory analysis as the sensory stimulus and on vocalization synthesis as the output signal. Our scenario is one robot interacting with one human through the vocalization channel. Note that vocalization goes far beyond speech: while speech analysis tells us what was said, vocalization analysis tells us how it was said. A social robot should be able to perform actions in different manners according to its emotional state. We therefore propose a novel Bayesian approach to determine the emotional state the robot should assume according to how the interlocutor is talking to it. Results show that classification behaves as expected, converging to the correct decision after two iterations.
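The abstract does not detail the model itself; as a rough illustration of the kind of recursive Bayesian classification it describes, the sketch below updates a belief over a small set of candidate emotional states from coarse vocalization cues. The emotion labels, cue names, and likelihood values are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a recursive Bayesian update over discrete emotional
# states given vocal (prosodic) observations. The state set, cues, and
# likelihood values are illustrative assumptions, not the paper's model.

EMOTIONS = ["happy", "angry", "sad", "neutral"]

# Assumed likelihood table P(observation | emotion) for two coarse
# vocalization cues describing how the interlocutor is talking.
LIKELIHOOD = {
    "high_pitch_fast": {"happy": 0.5, "angry": 0.3, "sad": 0.05, "neutral": 0.15},
    "low_pitch_slow":  {"happy": 0.1, "angry": 0.1, "sad": 0.6,  "neutral": 0.2},
}

def bayes_update(prior, observation):
    """One Bayesian iteration: posterior(e) is proportional to P(obs | e) * prior(e)."""
    unnormalized = {e: LIKELIHOOD[observation][e] * prior[e] for e in EMOTIONS}
    total = sum(unnormalized.values())
    return {e: p / total for e, p in unnormalized.items()}

if __name__ == "__main__":
    # Uniform prior over the robot's candidate emotional states.
    belief = {e: 1.0 / len(EMOTIONS) for e in EMOTIONS}
    # Two consecutive observations of the interlocutor's vocalization.
    for obs in ["high_pitch_fast", "high_pitch_fast"]:
        belief = bayes_update(belief, obs)
        print(obs, {e: round(p, 3) for e, p in belief.items()})
    # After a couple of iterations the belief concentrates on one state,
    # which the robot would then adopt as its emotional state.
    print("decision:", max(belief, key=belief.get))
```

In this toy setting the posterior concentrates on a single state after two observations, mirroring the abstract's claim of convergence to the correct decision after two iterations.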

Original language: British English
Title of host publication: Technological Innovation for Sustainability - Second IFIP WG 5.5/SOCOLNET Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2011, Proceedings
Pages: 165-172
Number of pages: 8
DOIs
State: Published - 2011

Publication series

Name: IFIP Advances in Information and Communication Technology
Volume: 349 AICT
ISSN (Print): 1868-4238

Keywords

  • Auditory Perception
  • Bayesian Approach
  • Robot Emotional State
  • Vocalization
