A Study on Facial Skin Disease Recognition Using Multi-Label Classification

  • 임채현 (Department of Software, Soongsil University) ;
  • 손민지 (Department of Software Convergence, Soongsil University) ;
  • 김명호 (Department of Software, Soongsil University)
  • Received: 2021.03.29
  • Accepted: 2021.07.14
  • Published: 2021.12.31

Abstract

Recently, as people's interest in facial skin beauty has grown, research has been conducted on skin disease recognition for facial skin beauty using deep learning. These studies recognize a variety of skin diseases, including acne. However, existing studies recognize only a single skin disease at a time, while the skin diseases that occur on the face can be more diverse and can occur in combination. Therefore, in this paper, combinations of skin conditions, namely acne, blackheads, freckles, age spots, normal skin, and whiteheads, are recognized with the Inception-ResNet V2 model using multi-label classification. Among the evaluation metrics used, accuracy reached 98.8% and Hamming loss 0.003, and per-class precision, recall, and F1-score all reached 96.6% or higher.
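
The abstract names the backbone network, the six target classes, and the evaluation metrics, but this page carries no implementation details. The following is a minimal sketch of how such a multi-label classifier could be built and scored, assuming a TensorFlow/Keras implementation, an ImageNet-pretrained Inception-ResNet V2 backbone, a 299x299 input size, and a 0.5 decision threshold; all of these specifics are assumptions, not details taken from the paper.

    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import classification_report, hamming_loss

    # Six conditions from the abstract: acne, blackheads, freckles,
    # age spots, normal skin, whiteheads.
    NUM_CLASSES = 6

    # Assumed backbone: ImageNet-pretrained Inception-ResNet V2 with
    # global average pooling (the 299x299 input size is an assumption).
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=(299, 299, 3), pooling="avg")

    # Multi-label head: one independent sigmoid per class, so a single
    # face image can be assigned several skin conditions at once.
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, outputs)

    # Binary cross-entropy treats each label as its own yes/no decision.
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Dummy batch for illustration; real images should be scaled with
    # tf.keras.applications.inception_resnet_v2.preprocess_input.
    x = np.random.rand(2, 299, 299, 3).astype("float32")
    y_true = np.array([[1, 0, 0, 0, 0, 1],   # e.g. acne + whiteheads
                       [0, 0, 1, 0, 0, 0]])  # e.g. freckles only

    y_pred = (model.predict(x) >= 0.5).astype(int)  # 0.5 threshold is an assumption

    # Hamming loss = fraction of all label slots predicted wrongly;
    # the abstract reports 0.003 on the authors' test set.
    print("Hamming loss:", hamming_loss(y_true, y_pred))
    print(classification_report(y_true, y_pred, zero_division=0))

A sigmoid head is what distinguishes this setup from the single-disease classifiers the abstract contrasts against: a softmax output would force the six conditions to be mutually exclusive, while independent sigmoids let diseases co-occur on one face. The per-class precision, recall, and F1-score quoted above can be read off the same thresholded predictions, e.g. via sklearn's classification_report.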
