http://dx.doi.org/10.23087/jkicsp.2021.22.4.009

CNN-based Sign Language Translation Program for the Deaf  

Hong, Kyeong-Chan (Department of Computer Information and Engineering, Sangji University)
Kim, Hyung-Su (Department of Computer Information and Engineering, Sangji University)
Han, Young-Hwan (Department of Information Communication Software Engineering, Sangji University)
Publication Information
Journal of the Institute of Convergence Signal Processing / v.22, no.4, 2021, pp. 206-212
Abstract
Society continues to develop, and communication methods are advancing in many ways. However, these advances largely serve the non-disabled and offer little to the deaf. Therefore, in this paper, a CNN-based sign language translation program is designed and implemented to help deaf people communicate. The program translates sign language images captured through a webcam into their meanings based on trained data. It is trained on 24,000 self-produced Korean consonant and vowel sign images and applies U-Net segmentation to build an effective classification model. In the implemented program, 'ㅋ' showed the best performance among all sign language data, with 97% accuracy and a 99% F1-score, while 'ㅣ' showed the highest performance among the vowel data, with 94% accuracy and a 95.5% F1-score.
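The abstract describes a two-stage pipeline: webcam frames are first segmented with U-Net to isolate the hand region, and the segmented result is then classified by a CNN (the keywords suggest an AlexNet-style network). As a minimal structural sketch of that flow, the snippet below uses deliberately simplified stand-ins: brightness thresholding in place of U-Net, and nearest-template matching in place of the CNN classifier. The tiny 3x3 "templates" and the function names are hypothetical illustrations, not the paper's actual models or data.

```python
# Structural sketch of the paper's two-stage pipeline:
#   (1) segment the hand region from a webcam frame,
#   (2) classify the segmented mask into a Korean fingerspelling letter.
# The paper uses U-Net for step (1) and an AlexNet-style CNN for step (2);
# the thresholding and template matching below are placeholder stand-ins.

def segment_hand(frame, threshold=128):
    """Binary mask of pixels brighter than the threshold (U-Net stand-in)."""
    return [[1 if px > threshold else 0 for px in row] for row in frame]

def classify_mask(mask, templates):
    """Return the label whose template overlaps the mask most
    (CNN-classifier stand-in)."""
    def overlap(a, b):
        return sum(x == y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return max(templates, key=lambda label: overlap(mask, templates[label]))

# Hypothetical 3x3 "sign templates" for two letters from the abstract.
templates = {
    "ㅋ": [[1, 1, 1], [1, 0, 0], [1, 1, 1]],
    "ㅣ": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
}

frame = [[10, 200, 30], [20, 210, 40], [15, 220, 25]]  # bright vertical bar
mask = segment_hand(frame)
print(classify_mask(mask, templates))  # → ㅣ
```

Keeping segmentation and classification as separate stages, as the paper does, lets the classifier train on clean hand masks rather than cluttered webcam backgrounds.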
Keywords
Sign Language; Deaf; AlexNet; U-Net; Communication;
Citations & Related Records
  • Reference
1 K. P. Jeon.(2020, Mar.). A Study of Communication Experience in the Job Adaptation Process of People with Hearing Impairment. Journal of Korean Society of Vocational Rehabilitation. 30(2), pp. 97-125.   DOI
2 H. S. Lee. et al.(2013, Aug.). Development of Sign Language Translation System using Motion Recognition of Kinect. Journal of Korea Institute of Convergence Signal Processing. 14(4), pp. 235-242.
3 J. Long. et al.(2015, Oct.). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431-3440.
4 O. Ronneberger. et al.(2015, Oct.). U-Net: Convolutional networks for biomedical image segmentation. Lecture Notes in Computer Science, pp. 234-241.
5 H. Huang. et al.(2015, Mar.). Maximum F1-Score Discriminative Training Criterion for Automatic Mispronunciation Detection. IEEE/ACM Transactions on Audio, Speech, and Language Processing. 23(4), pp. 787-797.   DOI
6 I. H. Kim. et al.(2021, Apr.). A Study on Korea Sign Language Motion Recognition Using OpenPose Based on Deep Learning. Journal of Digital Contents Society. 22(4), pp. 681-687.   DOI
7 S. E. Han. et al.(2017, Feb.). E-book to sign-language translation program based on morpheme analysis. Journal of the Korea Institute of Information and Communication Engineering. 21(2), pp. 461-467.   DOI
8 M. O. Kim. et al.(2013, Jun.). A Phenomenological Study on the Communication Experiences of the Deaf. Journal of Korean Academy of Social Welfare. 49(4), pp. 1-26.
9 P. S. Jung. et al.(2015, Sep.). Design and Implementation of Finger Language Translation System using Raspberry Pi and Leap Motion. Journal of the Korea Institute of Information and Communication Engineering. 19(9), pp. 2006-2013.   DOI
10 J. R. Cho. et al.(2021, Apr.). Application of Artificial Neural Network For Sign Language Translation. Journal of Korea Society of Computer Information. 24(2), pp. 185-192.
11 A. Krizhevsky. et al.(2012, Jul.). ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems, pp. 1097-1105.
12 S. H. Park. et al.(2012, Mar.). Receiver operating characteristic (ROC) curve: practical review for radiologists. Korean Journal of Radiology. 5(1), pp. 11-18.   DOI