TY - GEN
T1 - Multimodal Analysis of Physiological Signals for Wearable-Based Emotion Recognition Using Machine Learning
AU - Alskafi, Feryal Amjad
AU - Khandoker, Ahsan H.
AU - Lee, Uichin
AU - Park, Cheul Young
AU - Jelinek, Herbert F.
N1 - Publisher Copyright:
© 2022 Creative Commons.
PY - 2022
Y1 - 2022
N2 - Recent advancements in wearable technology and machine learning have led to increased research interest in using peripheral physiological signals to recognize emotion granularity. In healthcare, an algorithm that classifies emotional content can aid in the development of treatment protocols for psychopathology and chronic disease. Although peripheral physiological signals are acquired non-invasively, they are usually of low quality due to low sampling rates; as a result, emotion recognition based on a single physiological signal shows low performance. In this research, we explore multimodal wearable-based emotion recognition using the K-EmoCon dataset. Physiological signals, together with self-reported arousal and valence records, were analyzed with a battery of data mining algorithms including decision trees, support vector machines, k-nearest neighbors, and ensembles. Performance was evaluated using accuracy, true positive rate, and area under the receiver operating characteristic curve. Results support the multimodal approach, with 83% average accuracy for an ensemble bagged tree algorithm compared with 56.1% accuracy for emotion recognition from heart rate alone. Emotion granularity can be identified by wearables with multimodal signal recording capabilities, which may improve diagnostics and possibly treatment efficacy.
AB - Recent advancements in wearable technology and machine learning have led to increased research interest in using peripheral physiological signals to recognize emotion granularity. In healthcare, an algorithm that classifies emotional content can aid in the development of treatment protocols for psychopathology and chronic disease. Although peripheral physiological signals are acquired non-invasively, they are usually of low quality due to low sampling rates; as a result, emotion recognition based on a single physiological signal shows low performance. In this research, we explore multimodal wearable-based emotion recognition using the K-EmoCon dataset. Physiological signals, together with self-reported arousal and valence records, were analyzed with a battery of data mining algorithms including decision trees, support vector machines, k-nearest neighbors, and ensembles. Performance was evaluated using accuracy, true positive rate, and area under the receiver operating characteristic curve. Results support the multimodal approach, with 83% average accuracy for an ensemble bagged tree algorithm compared with 56.1% accuracy for emotion recognition from heart rate alone. Emotion granularity can be identified by wearables with multimodal signal recording capabilities, which may improve diagnostics and possibly treatment efficacy.
UR - https://www.scopus.com/pages/publications/85152910217
U2 - 10.22489/CinC.2022.328
DO - 10.22489/CinC.2022.328
M3 - Conference contribution
AN - SCOPUS:85152910217
T3 - Computing in Cardiology
BT - 2022 Computing in Cardiology, CinC 2022
PB - IEEE Computer Society
T2 - 2022 Computing in Cardiology, CinC 2022
Y2 - 4 September 2022 through 7 September 2022
ER -