Efficient Sign Language Recognition and Classification Using an African Buffalo Optimization-Based Support Vector Machine System

  • Karthikeyan M. P. (School of CS & IT, JAIN (Deemed-to-be University)) ;
  • Vu Cao Lam (Faculty of Information Technology, Haiphong University) ;
  • Dac-Nhuong Le (Faculty of Information Technology, Haiphong University)
  • Received : 2024.06.05
  • Published : 2024.06.30

Abstract

Communication with the deaf community has always been crucial. Sign language has become a universal language and a highly effective tool through which deaf and hard-of-hearing people can express their thoughts and opinions to teachers, improving their education and simplifying interaction between them and their teachers. Sign language involves various body movements, including those of the arms, legs, and face. Nonverbal physical communication such as pure expressiveness, proximity, and shared interests is distinct from gestures that convey a particular message, and the meanings of gestures are highly individual and vary with social and cultural background. Sign language recognition is a very active research area in which the support vector machine (SVM) has proven valuable; the settings in which SVMs struggle have motivated numerous extensions, such as SVMs for very large data sets, multi-class classification, and imbalanced data sets. Without accurate identification of the signs, the right responses cannot be applied when they are needed, and image processing is one of the methods most frequently used for identifying and classifying sign languages. In this work, a classification technique combining African Buffalo Optimization with a Support Vector Machine (ABO+SVM) is used to identify and categorize people's sign language. K-means clustering is first applied to segment the sign region, after which color and texture features are extracted. The accuracy, sensitivity, precision, specificity, and F1-score of the proposed ABO+SVM system are validated against the existing classifiers SVM, CNN, and PSO+ANN.
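To make the pipeline concrete, the sketch below outlines the processing chain described above: K-means segmentation of the sign region, extraction of color and texture features, and an SVM classifier whose hyperparameters are tuned by a search loop. It is a minimal illustration and not the authors' implementation: the brightest-cluster heuristic, the HSV-histogram and GLCM feature set, and the random search standing in for African Buffalo Optimization are all assumptions made for this example.

```python
# Minimal sketch of the pipeline described in the abstract, not the authors' code.
# Assumptions: the brightest K-means cluster is taken as the sign region, features
# are an HSV colour histogram plus GLCM texture statistics, and a random search
# over (C, gamma) stands in for the African Buffalo Optimization metaheuristic.
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score


def segment_sign_region(image_bgr, k=3):
    """Cluster pixels with K-means and keep the brightest cluster as the sign region."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    target = int(np.argmax(centers.sum(axis=1)))  # heuristic: brightest cluster = sign region
    mask = (labels.reshape(image_bgr.shape[:2]) == target).astype(np.uint8)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)


def extract_features(region_bgr):
    """Concatenate an HSV colour histogram with GLCM texture properties."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    color_hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256]).flatten()
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([color_hist, np.asarray(texture)])


def tune_svm(X_train, y_train, X_val, y_val, n_candidates=20, seed=0):
    """Search (C, gamma) for an RBF-kernel SVM; a simple stand-in for ABO."""
    rng = np.random.default_rng(seed)
    best_model, best_acc = None, -1.0
    for _ in range(n_candidates):
        C = 10 ** rng.uniform(-1, 3)
        gamma = 10 ** rng.uniform(-4, 0)
        model = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_train, y_train)
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, best_acc
```

In the full ABO+SVM system, the random draws above would presumably be replaced by the herd-update rules of African Buffalo Optimization exploring the same (C, gamma) space, and the best classifier would then be scored on accuracy, sensitivity, precision, specificity, and F1-score.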
