• Title/Summary/Keyword: Korean Sign Languages

Search Results: 23

Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang;Gwang-hyun Yu;Ji-yong Kim;Young-hwan Park;Chil-woo Lee;Jin-Young Kim
    • Smart Media Journal
    • /
    • v.12 no.1
    • /
    • pp.32-46
    • /
    • 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. In particular, bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. In contrast, few studies have focused on translating between spoken and sign languages, especially non-English ones. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, comprising 3,828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, in which the pretext task is text-to-text translation from a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Self-supervised learning helps overcome the shortage of sign language data. Experimental results show that our proposed model outperforms a baseline BERT model by 6.22%.
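The pretext-task construction the abstract describes — pairing each Korean sentence with back-translated paraphrases so a text-to-text model can be pre-trained before gloss fine-tuning — can be sketched roughly as follows. All names and the toy back-translation stand-in are hypothetical; the paper's actual pipeline is not specified here.

```python
# Sketch of the self-supervised pretext task: pre-train a text-to-text model
# on (back-translated paraphrase, original sentence) pairs, then fine-tune on
# Korean-sentence/sign-gloss pairs. All names here are hypothetical.

def build_pretext_pairs(sentences, back_translate):
    """Pair each back-translated paraphrase with its original sentence."""
    pairs = []
    for sent in sentences:
        for paraphrase in back_translate(sent):
            # The model is trained to recover the original from the paraphrase.
            pairs.append((paraphrase, sent))
    return pairs

def toy_back_translate(sentence):
    """Stand-in for a real back-translation system (e.g. pivoting via English)."""
    return [sentence.replace("나는", "저는")]

pairs = build_pretext_pairs(["나는 박물관에 간다"], toy_back_translate)
print(pairs)  # [('저는 박물관에 간다', '나는 박물관에 간다')]
```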

Vision-Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.6
    • /
    • pp.768-775
    • /
    • 2005
  • Since sign languages are the main communication means among hearing-impaired people, communication difficulties arise between speech-oriented people and sign-language-oriented people. Automated sign language recognition may resolve these problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There have been research activities on gesture and posture recognition using glove-based devices. However, these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling. The use of colored patches or gloves also causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, captured hand region images were separated from the background using a skin detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures were then recognized using a two-dimensional grid analysis method. Our recognition system is not sensitive to the size or rotation of the input posture images. By optimizing the weights of the posture features with a genetic algorithm, our system achieved accuracy matching that of systems using devices or colored gloves. We applied our posture recognition system to Korean Sign Language finger spelling, achieving better than 93% accuracy.
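The two main steps of the pipeline above, skin-color segmentation followed by a two-dimensional grid analysis, can be sketched as below. The RGB thresholds and the grid size are illustrative assumptions, not the paper's values.

```python
import numpy as np

def skin_mask(rgb):
    """Crude RGB skin-color rule (illustrative; the paper's rule is not given)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def grid_features(mask, n=4):
    """Fraction of skin pixels in each cell of an n x n grid over the image."""
    h, w = mask.shape
    feats = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            cell = mask[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            feats[i, j] = cell.mean()
    return feats

# An 8x8 image whose left half is skin-colored and right half is black:
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, :4] = (200, 120, 90)  # skin-like pixels
feats = grid_features(skin_mask(img), n=2)
print(feats)  # left-column cells 1.0, right-column cells 0.0
```

Because each feature is a per-cell occupancy ratio, the representation is largely insensitive to image size, which matches the scale-invariance claim in the abstract.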


Enhanced Sign Language Transcription System via Hand Tracking and Pose Estimation

  • Kim, Jung-Ho;Kim, Najoung;Park, Hancheol;Park, Jong C.
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.3
    • /
    • pp.95-101
    • /
    • 2016
  • In this study, we propose a new system for constructing parallel corpora for sign languages, which are generally under-resourced in comparison to spoken languages. In order to achieve scalability and accessibility regarding data collection and corpus construction, our system utilizes deep learning-based techniques and predicts depth information to perform pose estimation on hand information obtainable from video recordings by a single RGB camera. These estimated poses are then transcribed into expressions in SignWriting. We evaluate the accuracy of hand tracking and hand pose estimation modules of our system quantitatively, using the American Sign Language Image Dataset and the American Sign Language Lexicon Video Dataset. The evaluation results show that our transcription system has a high potential to be successfully employed in constructing a sizable sign language corpus using various types of video resources.

A Study on the Korea Folktale of Sign Language Place Names (전국 수어(手語)지명의 유래에 관한 연구)

  • Park, Moon-Hee;Jeong, Wook-Chan
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.11
    • /
    • pp.664-675
    • /
    • 2019
  • This study examined the linguistic and etymological forms of the origins of Korean sign language place names. Previous research has shown that, with the exceptions of Seoul and Imsil, place-name signs are generally derived from Chinese characters. The forms and origins of the signs currently in use were then analyzed through interviews and publications of the Korean association of the deaf, to examine their characteristics. The analysis found that the majority are indigenous sign-language place names created and used by deaf people, rather than loan signs based on Chinese characters, Hangul, or loanwords. Considering that place names are a precious cultural heritage, representing the history, culture, and identity of a region, the rich iconicity of these signed place names is worth preserving and transmitting. Indigenous Korean sign-language place names are likewise valuable to deaf culture and to Korean Sign Language. Even where the geographical character of a region has changed or a local product has disappeared, a sign's origin, rooted in a local specialty or geographical feature, has continued across generations. These signs can be regarded as a visual form of Korean Sign Language and will be a valuable heritage for the preservation of deaf culture.

3D Model for Korean-Japanese Sign Language Image Communication (한-일 수화 영상통신을 위한 3차원 모델)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.929-932
    • /
    • 1998
  • In this paper, we propose a method of representing emotional expressions and lip shapes for sign language communication using a 3-dimensional model. First, we employ the action units (AUs) of the Facial Action Coding System (FACS) to display facial expressions. Then, to synthesize lip shapes more precisely for Korean characters, we define 11 basic lip shapes and the sounding times of each component in a syllable. Experimental results show that the proposed method could be used efficiently for sign language image communication between different languages.


Design and Implementation of a Korean Text to Sign Language Translation System (한국어-수화 번역 시스템 설계)

  • Gwon, Gyeong-Hyeok;U, Yo-Seop;Min, Hong-Gi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.3
    • /
    • pp.756-765
    • /
    • 2000
  • In this paper, a Korean text to sign language translation system is designed and implemented to help hearing-impaired people learn letters and converse with hearing people. We adopt the direct method of machine translation, which uses morphological analysis and dictionary search, and we define the necessary sign language dictionaries. Based on these processes, the system translates Korean sentences into sign language moving pictures. The proposed dictionaries consist of the basic sign language dictionary, the compound sign language dictionary, and the resemble (similar-sign) sign language dictionary. The basic sign language dictionary includes the basic symbols and moving pictures of Korean sign language. The compound sign language dictionary is composed of keywords of basic signs. In addition, similar letters are offered in the resemble sign language dictionary. The moving pictures of the retrieved sign symbols are displayed on screen as GIFs, either as a continuous motion of sign symbols or as finger spelling based on Korean code analysis. The proposed system provides quick sign language search and compensates for signs missing during translation by using the various dictionaries tailored to Korean sign language. In addition, representing the signs as GIFs saves storage space in the sign language dictionary.
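The lookup order implied by the dictionaries above — basic signs first, then compound entries assembled from basic signs, then finger spelling for out-of-vocabulary words — can be sketched as follows. All entries and clip file names are hypothetical.

```python
# Sketch of the staged dictionary lookup described above. Entries and file
# names are hypothetical; a real system would hold GIF clips per sign.

BASIC = {"학교": "school.gif", "가다": "go.gif"}   # word -> sign clip
COMPOUND = {"등교": ["학교", "가다"]}              # word -> sequence of basic signs

def to_sign_clips(word):
    if word in BASIC:
        return [BASIC[word]]
    if word in COMPOUND:
        return [BASIC[part] for part in COMPOUND[word]]
    # Fall back to finger spelling, one handshape sequence per syllable block.
    return [f"fingerspell:{ch}" for ch in word]

print(to_sign_clips("등교"))  # ['school.gif', 'go.gif']
print(to_sign_clips("김"))    # ['fingerspell:김']
```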


Sign Language Generation with Animation by Adverbial Phrase Analysis (부사어를 활용한 수화 애니메이션 생성)

  • Kim, Sang-Ha;Park, Jong-C.
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.27-32
    • /
    • 2008
  • Sign languages, commonly used in aurally challenged communities, are a kind of visual language expressing sign words with motion. Spatiality and motility of a sign language are conveyed mainly via sign words as predicates. A predicate is modified by an adverbial phrase with an accompanying change in its semantics so that the adverbial phrase can also affect the overall spatiality and motility of expressions of a sign language. In this paper, we analyze the semantic features of adverbial phrases which may affect the motion-related semantics of a predicate in converting expressions in Korean into those in a sign language and propose a system that generates corresponding animation by utilizing these features.


A Gesture-Emotion Keyframe Editor for sign-Language Communication between Avatars of Korean and Japanese on the Internet

  • Kim, Sang-Woon;Lee, Yung-Who;Lee, Jong-Woo;Aoki, Yoshinao
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.831-834
    • /
    • 2000
  • Sign language can be used as an auxiliary communication means between avatars of different languages. An intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper, we design a gesture-emotion keyframe editor that provides a means to obtain these parameter values easily. To calculate the joint angles of the arms and hands and to generate the keyframes realistically, a transformation matrix of inverse kinematics and several kinds of constraints are applied. Also, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. Experimental results show that the editor could be used for intelligent sign language image communication between different languages.


Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun;Sung, Horyeol;Lee, Seungjae;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.8
    • /
    • pp.325-330
    • /
    • 2022
  • This paper describes the collection and evaluation of a dataset for deep learning research on Korean sign language, covering tasks such as sign language recognition, translation, and segmentation. Deep learning research on sign language faces two difficulties. First, sign languages are hard to recognize because they involve multiple modalities, including hand movements, hand directions, and facial expressions. Second, training data are scarce: currently, the KETI dataset is the only known Korean sign language dataset for deep learning. Sign language datasets for deep learning research fall into two categories, isolated sign language and continuous sign language. Although several foreign sign language datasets have been collected over time, they are also insufficient for deep learning research on sign language. Therefore, we collected a large-scale Korean sign language dataset and evaluated it using TSPNet, a baseline model with state-of-the-art performance in sign language translation. The collected dataset consists of a total of 11,402 image-text pairs. Our experiment with the baseline model yields a BLEU-4 score of 3.63, which can serve as the reference performance of a baseline model on this Korean sign language dataset. We hope that our experience in collecting a Korean sign language dataset helps facilitate further research on Korean sign language.
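For reference, the BLEU-4 metric reported above can be computed at sentence level roughly as follows. This is a minimal re-implementation without smoothing; real evaluations typically use a library scorer, and the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 with brevity penalty; no smoothing."""
    precisions = []
    for n in range(1, 5):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clipped n-gram matches divided by candidate n-gram count.
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:  # any empty n-gram overlap zeroes the score
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

hyp = "오늘 확진자 수가 감소 했다".split()
ref = "오늘 확진자 수가 감소 했다".split()
print(round(bleu4(hyp, ref), 2))  # 1.0 for an exact match
```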

Modularity and Modality in ‘Second’ Language Learning: The Case of a Polyglot Savant

  • Smith, Neil
    • Korean Journal of English Language and Linguistics
    • /
    • v.3 no.3
    • /
    • pp.411-426
    • /
    • 2003
  • I report on the case of a polyglot ‘savant’ (C), who is mildly autistic, severely apraxic, and of limited intellectual ability; yet who can read, write, speak and understand about twenty languages. I outline his abilities, both verbal and non-verbal, noting the asymmetry between his linguistic ability and his general intellectual inability and, within the former, between his unlimited morphological and lexical prowess as opposed to his limited syntax. I then spell out the implications of these findings for modularity. C's unique profile suggested a further project in which we taught him British Sign Language. I report on this work, paying particular attention to the learning and use of classifiers, and discuss its relevance to the issue of modality: whether the human language faculty is preferentially tied to the oral domain, or is ‘modality-neutral’ as between the spoken and the visual modes.
