• Title/Summary/Keyword: Sign Language Translation


Application of Artificial Neural Network For Sign Language Translation

  • Cho, Jeong-Ran; Kim, Hyung-Hoon
    • Journal of the Korea Society of Computer and Information / v.24 no.2 / pp.185-192 / 2019
  • When a hearing-impaired person uses sign language, communicating with a normal person who does not understand it involves many difficulties. A sign language translation system enables communication between the two in this situation. Previous studies on such systems are classified into two types: those using video images and those using shape-input devices. However, existing sign language translation systems do not resolve these difficulties, because they fail to recognize the varied sign language expressions of individual signers and require special devices. Therefore, in this paper, a sign language translation system using an artificial neural network is devised to overcome these problems.
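
The abstract does not specify the network architecture, so the recognition idea can only be illustrated with a minimal sketch: a one-hidden-layer feedforward network that maps a hand-shape feature vector to sign-class probabilities. All dimensions, weights, and class counts below are made up for illustration.

```python
import math
import random

def forward(x, w1, b1, w2, b2):
    """One-hidden-layer network: ReLU hidden layer, softmax output."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    z = [sum(wi * hi for wi, hi in zip(row, h)) + b
         for row, b in zip(w2, b2)]
    m = max(z)                      # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

random.seed(0)
n_in, n_hid, n_out = 8, 4, 3        # e.g. 8 hand-shape features, 3 sign classes
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

probs = forward([0.2] * n_in, w1, b1, w2, b2)
print(round(sum(probs), 6))  # softmax probabilities sum to 1
```

In the paper's setting, the input features would come from sign language images and the weights would be learned from labeled examples; here they are random placeholders.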

A Study on the Forms and Characteristics of Korean Sign Language Translation According to Historical Changes (역사적 변천에 따른 한국수어 번역의 형태와 특성 연구)

  • Lee, Jun-Woo
    • The Journal of the Korea Contents Association / v.21 no.5 / pp.508-524 / 2021
  • Innovative translation environments fostered by scientific and technological advances have increased the feasibility and reach of sign language translation, and Korean sign language translation now faces a new challenge and opportunity. This study raises the following questions and searches for answers. First, when and how did Korean sign language translation appear in the course of the historical changes in Korean sign language? Second, what are the forms and characteristics of the translations it has produced? Third, what are the present condition and prospects of Korean sign language translation? Accordingly, this study examined, through an integrative literature review, how Korean sign language translation was formed historically and what forms and characteristics it has taken. As a result, first, the forms and characteristics of Korean sign language translation were classified by historical phase into a latent phase, a formation phase, and an expansion phase. Second, the forms and characteristics of Korean sign language translation arising from the Korean sign language corpus project and from machine translation were derived. The study also assesses the present condition of Korean sign language translation and proposes its future prospects.

Sign Language Image Recognition System Using Artificial Neural Network

  • Kim, Hyung-Hoon; Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.24 no.2 / pp.193-200 / 2019
  • Hearing-impaired people live in a voice-centered culture, but because communicating in sign language with normal people is difficult, many experience discomfort and various disadvantages in daily and social life, contrary to their wishes. Therefore, in this paper, we study a sign language translation system for communication between a normal person and a hearing-impaired person who uses sign language, and implement a prototype system. Previous studies on such systems are classified into two types: those using video images and those using shape-input devices. However, existing sign language translation systems fail to recognize the varied sign language expressions of individual signers and require special devices. In this paper, we use an artificial neural network, a machine learning method, to recognize varied sign language expressions. By using common smartphones and various video equipment for sign language image recognition, we aim to improve the usability of the sign language translation system.

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae; Kim, Jungeun; Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • In sign language, the same gesture can have a completely different meaning depending on the direction of the hand or a change in facial expression, so it is crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement; the detailed information of each movement (facial expressions, gestures, etc.) that is important for translation is not emphasized. Accordingly, in this paper, we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information by running a Bi-GRU over the extracted keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the development (DEV) and test (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
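
The Temporal Keypoints Embedding step described above runs a Bi-GRU over per-frame keypoint features. The sketch below implements a standard GRU cell and a bidirectional pass in pure Python; the real model uses 121 keypoints per image and learned weights, whereas the dimensions and weights here are tiny random placeholders.

```python
import math
import random

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update; p holds the gate weight matrices."""
    z = [sigmoid(a + b) for a, b in zip(matvec(p["Wz"], x), matvec(p["Uz"], h))]
    r = [sigmoid(a + b) for a, b in zip(matvec(p["Wr"], x), matvec(p["Ur"], h))]
    rh = [ri * hi for ri, hi in zip(r, h)]
    n = [math.tanh(a + b) for a, b in zip(matvec(p["Wn"], x), matvec(p["Un"], rh))]
    return [(1 - zi) * hi + zi * ni for zi, hi, ni in zip(z, h, n)]

def bi_gru(seq, p, hidden):
    """Bidirectional encoding: concatenate final forward and backward states."""
    hf = [0.0] * hidden
    for x in seq:
        hf = gru_step(x, hf, p)
    hb = [0.0] * hidden
    for x in reversed(seq):
        hb = gru_step(x, hb, p)
    return hf + hb

# Toy setup: 5 frames, each a (hypothetical) keypoint feature vector of size 3.
random.seed(1)
dim, hidden = 3, 4
p = {k: [[random.uniform(-0.5, 0.5) for _ in range(dim if k[0] == "W" else hidden)]
         for _ in range(hidden)]
     for k in ("Wz", "Uz", "Wr", "Ur", "Wn", "Un")}
frames = [[random.random() for _ in range(dim)] for _ in range(5)]
emb = bi_gru(frames, p, hidden)
print(len(emb))  # 8 = forward + backward hidden states
```

Running the sequence in both directions lets the embedding reflect what comes before and after each frame, which is the motivation for choosing a Bi-GRU over a unidirectional one.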

Design and Implementation of a Korean Text to Sign Language Translation System (한국어-수화 번역 시스템 설계)

  • Gwon, Gyeong-Hyeok; U, Yo-Seop; Min, Hong-Gi
    • The Transactions of the Korea Information Processing Society / v.7 no.3 / pp.756-765 / 2000
  • In this paper, a Korean text to sign language translation system is designed and implemented to help hearing-impaired people learn letters and converse with normal people. We adopt the direct method of machine translation, which uses morphological analysis and dictionary search, and we define the necessary sign language dictionaries. Based on this process, the system translates Korean sentences into sign language moving pictures. The proposed dictionaries comprise a basic sign language dictionary, a compound sign language dictionary, and a resemble (similar-word) sign language dictionary. The basic dictionary includes the basic symbols and moving pictures of Korean sign language; the compound dictionary is composed of keywords of basic signs; and the resemble dictionary offers similar words. The moving pictures of the retrieved sign symbols are displayed on screen as GIFs, either as a continuous sequence of sign motions or as finger spelling based on Korean code analysis. The proposed system provides quick sign lookup and, through the various dictionaries characteristic of Korean sign language, compensates for signs missing during translation. In addition, representing the signs as GIFs saves storage space in the sign language dictionary.
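
The dictionary-lookup pipeline described above (basic, compound, and resemble dictionaries, with finger spelling as a fallback) can be sketched as follows. All dictionary entries and file names are hypothetical, and morphological analysis is assumed to have already split the sentence into morphemes.

```python
# Hypothetical mini-dictionaries; the real system stores GIF clips per sign.
basic = {"학교": "school.gif", "가다": "go.gif"}   # basic sign dictionary
compound = {"대학교": ["big.gif", "school.gif"]}   # compound: sequence of basic signs
resemble = {"갔다": "가다"}                         # similar-word fallback to a basic sign

def jamo_fingerspell(word):
    """Last resort: spell the word out letter by letter (placeholder clips)."""
    return [f"finger_{ch}.gif" for ch in word]

def translate(morphemes):
    clips = []
    for m in morphemes:
        if m in basic:
            clips.append(basic[m])
        elif m in compound:
            clips.extend(compound[m])
        elif m in resemble:
            clips.append(basic[resemble[m]])
        else:
            clips.extend(jamo_fingerspell(m))
    return clips

print(translate(["대학교", "가다"]))  # ['big.gif', 'school.gif', 'go.gif']
```

The lookup order (basic, then compound, then resemble, then finger spelling) mirrors how the paper's dictionaries compensate for signs missing from the basic vocabulary.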


Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang; Gwang-hyun Yu; Ji-yong Kim; Young-hwan Park; Chil-woo Lee; Jin-Young Kim
    • Smart Media Journal / v.12 no.1 / pp.32-46 / 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. Bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. In contrast, few studies have focused on translating between spoken and sign languages, especially for non-English languages. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, comprising 3828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, in which the pretext task is text-to-text translation from a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps overcome the shortage of sign language data. In our experiments, the proposed model outperforms a baseline BERT model by 6.22%.
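
The pretext task above pairs each Korean sentence with its back-translated versions for text-to-text pretraining. The sketch below only shows the data-construction step; the `ROUND_TRIP` table is a stand-in for a real machine translation round trip (e.g. Korean to English and back), which the abstract does not specify in detail.

```python
# Hypothetical back-translation table standing in for an MT round trip.
ROUND_TRIP = {
    "박물관은 9시에 문을 엽니다.": ["박물관은 아홉 시에 개관합니다."],
}

def build_pretext_pairs(sentences):
    """Text-to-text pretraining pairs: (source sentence, back-translation)."""
    pairs = []
    for s in sentences:
        for bt in ROUND_TRIP.get(s, []):
            pairs.append((s, bt))
    return pairs

pairs = build_pretext_pairs(["박물관은 9시에 문을 엽니다."])
print(len(pairs))  # one (sentence, back-translation) pair
```

After pretraining on such pairs, the network would be fine-tuned on the MCKSG sentence-gloss pairs, as the abstract describes.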

Addressing Low-Resource Problems in Statistical Machine Translation of Manual Signals in Sign Language (말뭉치 자원 희소성에 따른 통계적 수지 신호 번역 문제의 해결)

  • Park, Hancheol; Kim, Jung-Ho; Park, Jong C.
    • Journal of KIISE / v.44 no.2 / pp.163-170 / 2017
  • Despite the rise of studies in spoken-to-sign-language translation, the low-resource problems of sign language corpora have rarely been addressed. As a first step, we addressed the problems arising from resource scarcity when translating spoken language to manual signals using statistical machine translation techniques. More specifically, we proposed three preprocessing methods: 1) paraphrase generation, which increases the size of the corpora; 2) lemmatization, which increases the frequency of each word in the corpora and the translatability of new input words in spoken language; and 3) elimination of function words that are not glossed into manual signals, which better matches the corresponding constituents of the bilingual sentence pairs. In our experiments, we used different types of English-American Sign Language parallel corpora. The results showed that each method, and their combination, improved the quality of manual signal translation regardless of the type of corpus.
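
Two of the preprocessing methods above, lemmatization and function-word elimination, can be sketched directly. The word lists and the lemma table below are illustrative stand-ins; the paper would use a real lemmatizer and a function-word list derived from its corpora.

```python
# Toy English-to-ASL-gloss preprocessing: lemmatize words and drop
# function words that have no manual-sign gloss.
FUNCTION_WORDS = {"a", "an", "the", "is", "are", "of", "to"}  # illustrative list
LEMMAS = {"likes": "like", "books": "book", "went": "go"}      # stand-in lemmatizer

def preprocess(sentence):
    tokens = sentence.lower().split()
    kept = [t for t in tokens if t not in FUNCTION_WORDS]  # method 3
    return [LEMMAS.get(t, t) for t in kept]                # method 2

print(preprocess("She likes the books"))  # ['she', 'like', 'book']
```

Both steps shrink the effective vocabulary, which raises per-word frequencies in a small corpus, exactly the low-resource benefit the abstract claims.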

CNN-based Sign Language Translation Program for the Deaf (CNN기반의 청각장애인을 위한 수화번역 프로그램)

  • Hong, Kyeong-Chan; Kim, Hyung-Su; Han, Young-Hwan
    • Journal of the Institute of Convergence Signal Processing / v.22 no.4 / pp.206-212 / 2021
  • Society is developing steadily, and communication methods are developing in many directions. However, these developments serve the non-disabled and offer little to the deaf. Therefore, in this paper, a CNN-based sign language translation program is designed and implemented to help deaf people communicate. The program translates sign language images captured through a webcam according to their meaning, based on data. It uses 24,000 items of self-produced Korean consonant and vowel data and applies U-Net segmentation to train effective classification models. In the implemented program, 'ㅋ' showed the best performance among all sign language data with 97% accuracy and a 99% F1-score, while 'ㅣ' showed the highest performance among the vowel data with 94% accuracy and a 95.5% F1-score.
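
The per-class accuracy and F1-scores reported above combine precision and recall; for reference, F1 is computed from a class's true-positive, false-positive, and false-negative counts. The counts in the example are illustrative, not taken from the paper.

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy confusion counts for a single jamo class.
print(round(f1_score(tp=95, fp=3, fn=5), 4))
```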

Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun; Sung, Horyeol; Lee, Seungjae; Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.325-330 / 2022
  • This paper describes the collection of, and experiments on, a dataset for deep learning research on Korean sign language, covering tasks such as sign language recognition, translation, and segmentation. Deep learning research on sign language faces difficulties. First, sign languages are hard to recognize because they contain multiple modalities, including hand movements, hand directions, and facial expressions. Second, training data are scarce: currently, the KETI dataset is the only known Korean sign language dataset for deep learning. Sign language datasets for deep learning are classified into two categories, isolated and continuous sign language, and although several foreign sign language datasets have been collected over time, they too are insufficient for deep learning research. Therefore, we collected a large-scale Korean sign language dataset and evaluated it using a baseline model, TSPNet, which has state-of-the-art (SOTA) performance in sign language translation. The collected dataset consists of a total of 11,402 image-text pairs. Our experiment with the baseline model on this dataset yields a BLEU-4 score of 3.63, which can serve as the baseline performance for the Korean sign language dataset. We hope that our experience in collecting a Korean sign language dataset helps facilitate further research on Korean sign language.
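
BLEU-4, the metric used for the 3.63 baseline score above, is the geometric mean of 1- to 4-gram precisions with a brevity penalty. The sketch below is a simplified single-reference, sentence-level version with no smoothing; corpus-level BLEU as used in translation papers aggregates counts over all sentences.

```python
import math
from collections import Counter

def bleu4(candidate, reference):
    """Simplified sentence-level BLEU-4 (single reference, no smoothing)."""
    c, r = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        c_ngrams = Counter(tuple(c[i:i + n]) for i in range(len(c) - n + 1))
        r_ngrams = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
        overlap = sum((c_ngrams & r_ngrams).values())   # clipped matches
        total = max(1, sum(c_ngrams.values()))
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / len(c))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

print(bleu4("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
```

A score of 1.0 means a perfect match; the 3.63 reported above is on the conventional 0-100 scale, i.e. 0.0363 on this one, which illustrates how hard continuous sign language translation currently is.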

E-book to sign-language translation program based on morpheme analysis (형태소 분석 기반 전자책 수화 번역 프로그램)

  • Han, Sol-Ee; Kim, Se-A; Hwang, Gyung-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.2 / pp.461-467 / 2017
  • As the number of smart devices increases, e-book contents and services are proliferating. However, text-based e-books are difficult for hearing-impaired people to understand. In this paper, we developed an Android application in which the user can choose an e-book text file; each sentence is then translated into sign language elements, which are shown in videos retrieved from a sign language contents server. We used a Korean sentence to sign language translation algorithm based on morpheme analysis. The proposed algorithm consists of three stages. First, some elements of the sentence are removed, following typical sign language usage. Second, the tense of the sentence and altered expressions are handled. Finally, honorific forms are considered and word positions in the sentence are revised. We also proposed a new method to evaluate the performance of the translation algorithm and demonstrated the algorithm's superiority through the translation results of 100 reference sentences.
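
The three stages above can be sketched as a small pipeline over a pre-analyzed morpheme list. The particle list, tense mapping, and reordering rule below are all hypothetical simplifications; the paper's actual rules are not given in the abstract.

```python
# Stage rules (all illustrative): particles dropped in signing, a tense
# morpheme mapped to a sign gloss, and an honorific morpheme mapping.
DROPPED = {"이", "가", "을", "를", "은", "는"}   # particles typically not signed
TENSE = {"었": "PAST"}                           # tense morpheme -> sign gloss
HONORIFIC = {"시": "HONORIFIC"}

def to_sign_elements(morphemes):
    # Stage 1: remove elements not used in sign language.
    stage1 = [m for m in morphemes if m not in DROPPED]
    # Stage 2: map tense/expression morphemes to sign glosses.
    stage2 = [TENSE.get(m, m) for m in stage1]
    # Stage 3: handle honorific forms and revise word order (here: a
    # simplified rule that moves tense markers to the end).
    stage3 = [HONORIFIC.get(m, m) for m in stage2]
    return ([m for m in stage3 if m != "PAST"]
            + [m for m in stage3 if m == "PAST"])

print(to_sign_elements(["동생", "이", "밥", "을", "먹", "었"]))
```

Each gloss in the output would then be looked up on the contents server and played back as a video, per the application flow described above.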