TY - JOUR
T1 - K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations
AU - Park, Cheul Young
AU - Cha, Narae
AU - Kang, Soowon
AU - Kim, Auk
AU - Khandoker, Ahsan Habib
AU - Hadjileontiadis, Leontios
AU - Oh, Alice
AU - Jeong, Yong
AU - Lee, Uichin
N1 - Funding Information:
This research was supported by the 2019 KK-JRC Smart Project and by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2017M3C4A7065960). The authors cordially thank all participants for contributing their data for the development of the dataset.
Publisher Copyright:
© 2020, The Author(s).
PY - 2020/12/1
Y1 - 2020/12/1
AB - With the popularization of low-cost mobile sensors, recognizing emotions during social interactions has many potential applications, but the lack of naturalistic affective interaction data remains a challenge. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild, as they were collected in constrained environments. Studying emotions in the context of social interactions therefore requires a novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute-long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset accommodating the multiperspective assessment of emotions during social interactions.
UR - http://www.scopus.com/inward/record.url?scp=85091266376&partnerID=8YFLogxK
U2 - 10.1038/s41597-020-00630-y
DO - 10.1038/s41597-020-00630-y
M3 - Article
C2 - 32901038
AN - SCOPUS:85091266376
SN - 2052-4463
VL - 7
JO - Scientific Data
JF - Scientific Data
IS - 1
M1 - 293
ER -