
Study for Classification of Facial Expression using Distance Features of Facial Landmarks

  • Bae, Jin Hee (Dept. of Computer Engineering, Gachon University) ;
  • Wang, Bo Hyeon (Dept. of Computer Engineering, Gachon University) ;
  • Lim, Joon S. (Dept. of Computer Engineering, Gachon University)
  • Received : 2021.11.08
  • Accepted : 2021.12.15
  • Published : 2021.12.31

Abstract

Facial expression recognition has long been a subject of continuous research in various fields. In this paper, the distances between facial landmarks in an image are computed and used as features to analyze the relationships among the landmarks and to classify five facial expressions. Data and label reliability were increased through a labeling process carried out by multiple observers. Faces were detected in the original data, landmark coordinates were extracted, and the pairwise distances were used as features; a genetic algorithm was then used to select the features that contribute relatively more to classification. Facial expression classification and analysis performed with the proposed method showed improved performance over classification with a CNN, demonstrating the validity and effectiveness of the method.
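The pipeline described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes dlib's publicly available 68-point shape predictor (the file name below is a hypothetical local path), uses pairwise Euclidean distances between landmarks as features, and shows only the fitness evaluation a genetic algorithm could use to score a binary feature-selection mask; the classifier choice is an assumption, and the GA loop itself (population, crossover, mutation) is omitted.

```python
# Minimal sketch of the described pipeline, not the authors' code.
# Assumes dlib's public 68-point shape predictor and scikit-learn for scoring.
from itertools import combinations

import dlib
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

detector = dlib.get_frontal_face_detector()
# Hypothetical local path to the public 68-point landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")


def landmark_distance_features(image):
    """Return the 68*67/2 = 2278 pairwise landmark distances for the first detected face."""
    faces = detector(image, 1)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=float)
    # Euclidean distance between every pair of landmarks.
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])


def ga_fitness(mask, X, y):
    """Fitness of a binary feature-selection mask: cross-validated accuracy
    of a classifier trained only on the selected distance features."""
    if mask.sum() == 0:
        return 0.0
    clf = SVC(kernel="rbf")  # classifier choice is an assumption, not from the paper
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()
```

A genetic algorithm would then evolve a population of such masks through selection, crossover, and mutation, favoring masks whose classifiers score higher; the surviving mask defines the reduced set of distance features used for the five-expression classification.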

Acknowledgement

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1I1A1A01066599). This research was also supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2017-0-01630) supervised by the IITP (Institute for Information & communications Technology Promotion).
