Weighted Class Loss for Single-Staged Facial Emotion Recognition

  • Jo Vianto (Dept. of AI Convergence, Chonnam National University) ;
  • Hyung-Jeong Yang (Dept. of AI Convergence, Chonnam National University) ;
  • Seung-won Kim (Dept. of AI Convergence, Chonnam National University) ;
  • Ji-eun Shin (Dept. of Psychology, Chonnam National University) ;
  • Soo-Hyung Kim (Dept. of AI Convergence, Chonnam National University)
  • Published: 2024.10.31

Abstract

Facial emotion recognition (FER) is becoming crucial in fields like human-computer interaction and surveillance. Traditional FER systems rely on two-stage pipelines with a face-alignment preprocessing step, which increases complexity and inference time. In this research, we propose a single-stage approach based on YOLOv6 combined with a weighted class loss to address these inefficiencies. Our method improves computational efficiency while enhancing the detection of minority classes in imbalanced emotion datasets. The experiments demonstrate that although the weighted loss function improves minority-class detection, it slightly reduces overall accuracy. Nevertheless, the model shows promise for real-time FER applications, balancing accuracy and speed. This work not only introduces a more efficient approach but also highlights the potential of single-stage models in advancing emotion recognition tasks.
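The abstract does not give the exact weighting scheme, so the sketch below illustrates one common choice for a weighted class loss: inverse-frequency weights, where each class c with n_c samples out of N total (K classes) gets weight w_c = N / (K · n_c), so rare emotion classes contribute more to the loss. The emotion labels and probabilities are hypothetical toy values, not data from the paper.

```python
import math
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: w_c = N / (K * n_c).

    Rare classes get weights > 1, frequent classes < 1, so the
    loss gradient is rebalanced toward minority emotions.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

def weighted_ce(p_true, label, weights):
    """Weighted cross-entropy for a single sample.

    p_true is the predicted probability of the true class;
    the standard -log(p) term is scaled by the class weight.
    """
    return weights[label] * -math.log(p_true)

# Toy imbalanced emotion dataset (hypothetical counts).
labels = ["happy"] * 70 + ["sad"] * 20 + ["fear"] * 10
w = class_weights(labels)

# At the same predicted probability, a misclassified minority
# sample ("fear") is penalized more than a majority one ("happy").
loss_fear = weighted_ce(0.5, "fear", w)
loss_happy = weighted_ce(0.5, "happy", w)
```

In a detector such as YOLOv6 the same idea applies to the classification branch of the loss; the localization terms are left unweighted, which is consistent with the paper's observation that overall accuracy can dip slightly while minority-class recall improves.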

Acknowledgements

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Artificial Intelligence Convergence Innovation Human Resources Development grant (IITP-2023-RS-2023-00256629) funded by the Korea government (MSIT). This work was also supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-00219107). This research was further supported by the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2023-RS-2022-00156287) supervised by the IITP.
