http://dx.doi.org/10.3837/tiis.2022.03.018

Emotion Recognition using Short-Term Multi-Physiological Signals  

Kang, Tae-Koo (Department of Human Intelligence and Robot Engineering, Sangmyung University)
Publication Information
KSII Transactions on Internet and Information Systems (TIIS), vol. 16, no. 3, 2022, pp. 1076-1094
Abstract
Emotion recognition technology is an essential part of human personality analysis. Existing approaches to characterizing personality have relied on surveys, yet in many cases communication cannot succeed without taking emotions into account. Emotion recognition is therefore an essential element of communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically through facial expressions, speech, and physiological responses, so emotions can be recognized from images, voice signals, or physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions; this study employed two sensor types. Whereas existing techniques classify arousal and valence as binary High/Low values, the proposed model subdivides each axis into four levels to classify emotions in more detail. Signal features were extracted with a one-dimensional Convolutional Neural Network (CNN), which classified the resulting sixteen emotion classes; although CNNs are typically used to learn from 2-D images, 1-D sensor data is used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
Keywords
Convolutional Neural Network (CNN); Emotion Recognition; Physiological Signal
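To make the pipeline described in the abstract concrete, the sketch below feeds a short-term window of multi-channel sensor data to a 1-D CNN that predicts one of 4 × 4 = 16 arousal-valence classes. This is a minimal PyTorch illustration, not the authors' implementation: the channel count, window length, layer sizes, and the `emotion_class` helper are assumptions introduced here.

```python
import torch
import torch.nn as nn

# Illustrative assumptions -- the paper's exact settings are not given here.
NUM_CHANNELS = 2      # e.g., two physiological sensor types, as in the abstract
WINDOW_SAMPLES = 512  # samples per short-term analysis window (assumed)
NUM_LEVELS = 4        # four levels per arousal/valence axis
NUM_CLASSES = NUM_LEVELS * NUM_LEVELS  # 4 x 4 = 16 emotion classes


def emotion_class(arousal_level: int, valence_level: int) -> int:
    """Flatten an (arousal, valence) pair, each in 0..3, into one of 16 labels."""
    assert 0 <= arousal_level < NUM_LEVELS and 0 <= valence_level < NUM_LEVELS
    return arousal_level * NUM_LEVELS + valence_level


class Emotion1DCNN(nn.Module):
    """1-D CNN: convolutions slide along the time axis of the sensor signal."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis -> (batch, 128, 1)
        )
        self.classifier = nn.Linear(128, NUM_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, NUM_CHANNELS, WINDOW_SAMPLES)
        return self.classifier(self.features(x).squeeze(-1))


if __name__ == "__main__":
    model = Emotion1DCNN()
    batch = torch.randn(8, NUM_CHANNELS, WINDOW_SAMPLES)  # fake sensor windows
    logits = model(batch)                                 # shape: (8, 16)
    print(logits.shape, emotion_class(3, 0))
```

Flattening the two four-level axes into a single label (arousal_level * 4 + valence_level) keeps the classifier a standard 16-way softmax, rather than two separate High/Low decisions as in the binary arousal-valence scheme.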