• Title/Summary/Keyword: Language as Sign

Research on Development of VR Realistic Sign Language Education Content Using Hand Tracking and Conversational AI (Hand Tracking과 대화형 AI를 활용한 VR 실감형 수어 교육 콘텐츠 개발 연구)

  • Jae-Sung Chun;Il-Young Moon
    • Journal of Advanced Navigation Technology / v.28 no.3 / pp.369-374 / 2024
  • This study aims to improve the accessibility and efficiency of sign language education for both hearing-impaired and hearing people. To this end, we developed immersive VR sign language education content that integrates hand tracking technology and conversational AI. Through this content, users can learn sign language in real time and experience direct communication in a virtual environment. The study confirmed that this integrated approach significantly improves immersion in sign language learning and helps lower the barriers to learning by giving learners a deeper understanding. This presents a new paradigm for sign language education and shows how technology can change the accessibility and effectiveness of education.

Sign Language Generation with Animation by Adverbial Phrase Analysis (부사어를 활용한 수화 애니메이션 생성)

  • Kim, Sang-Ha;Park, Jong-C.
    • Korean HCI Society Conference Proceedings (한국HCI학회 학술대회논문집) / 2008.02a / pp.27-32 / 2008
  • Sign languages, commonly used in aurally challenged communities, are a kind of visual language expressing sign words with motion. Spatiality and motility of a sign language are conveyed mainly via sign words as predicates. A predicate is modified by an adverbial phrase with an accompanying change in its semantics so that the adverbial phrase can also affect the overall spatiality and motility of expressions of a sign language. In this paper, we analyze the semantic features of adverbial phrases which may affect the motion-related semantics of a predicate in converting expressions in Korean into those in a sign language and propose a system that generates corresponding animation by utilizing these features.
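The idea of an adverbial phrase scaling a predicate's motion semantics can be sketched as a parameter lookup. This is an illustrative sketch only, not the paper's system; the adverb features and motion parameters are assumptions.

```python
# Hypothetical sketch: adverbial semantic features scaling the base
# motion of a sign predicate (feature values are illustrative).

# Each adverb carries features that modify the predicate's motion.
ADVERB_FEATURES = {
    "quickly":  {"speed": 1.8, "amplitude": 1.0},
    "slowly":   {"speed": 0.5, "amplitude": 1.0},
    "strongly": {"speed": 1.2, "amplitude": 1.5},
}

def apply_adverb(base_motion, adverb):
    """Return the predicate's motion parameters modified by an adverb."""
    feats = ADVERB_FEATURES.get(adverb, {"speed": 1.0, "amplitude": 1.0})
    return {
        "speed": base_motion["speed"] * feats["speed"],
        "amplitude": base_motion["amplitude"] * feats["amplitude"],
    }

go = {"speed": 1.0, "amplitude": 1.0}
print(apply_adverb(go, "quickly"))  # speed scaled up for a fast sign
```

An animation generator would then feed the scaled parameters into the avatar's motion player; unknown adverbs fall back to the unmodified predicate.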

Continuous Korean Sign Language Recognition using Automata-based Gesture Segmentation and Hidden Markov Model

  • Kim, Jung-Bae;Park, Kwang-Hyun;Bang, Won-Chul;Z.Zenn Bien;Kim, Jong-Sung
    • Institute of Control, Robotics and Systems Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2001.10a / pp.105.2-105 / 2001
  • This paper studies continuous Korean Sign Language (KSL) recognition using color vision. In recognizing gesture words such as signs, it is very difficult to segment a continuous sign into individual sign words because the patterns are complicated and diverse. To solve this problem, we decompose KSL into 18 hand motion classes according to their patterns and represent sign words as combinations of these hand motions. By observing the speed and the change of speed of hand motion and using state automata, we reject unintentional gestures such as preparatory motion and meaningless movement between sign words. To recognize the 18 hand motion classes we adopt a Hidden Markov Model (HMM). Using these methods, we recognize 5 KSL sentences and obtain a 94% recognition ratio.
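The speed-based rejection step can be sketched as a small state automaton over a stream of per-frame hand speeds. This is an illustrative sketch under assumed units and thresholds, not the authors' implementation; the HMM classification stage is omitted.

```python
# Illustrative sketch: segmenting a continuous gesture stream by hand
# speed with a simple two-state automaton. Frames faster than SPEED_MIN
# are treated as intentional sign motion; slower frames are treated as
# transitional movement between sign words and rejected.

SPEED_MIN = 0.2  # assumed threshold, in normalized units per frame

def segment_by_speed(speeds):
    """Return (start, end) frame index pairs of candidate sign segments."""
    segments, start = [], None
    for i, v in enumerate(speeds):
        moving = v >= SPEED_MIN
        if moving and start is None:
            start = i                      # enter the "signing" state
        elif not moving and start is not None:
            segments.append((start, i))    # leave the "signing" state
            start = None
    if start is not None:
        segments.append((start, len(speeds)))
    return segments

speeds = [0.0, 0.05, 0.4, 0.5, 0.45, 0.1, 0.0, 0.3, 0.35, 0.05]
print(segment_by_speed(speeds))  # [(2, 5), (7, 9)]
```

Each emitted segment would then be passed to a per-class HMM for recognition; the change of speed could be added as a second automaton input to also reject preparatory motion.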

A Study on the Korea Folktale of Sign Language Place Names (전국 수어(手語)지명의 유래에 관한 연구)

  • Park, Moon-Hee;Jeong, Wook-Chan
    • The Journal of the Korea Contents Association / v.19 no.11 / pp.664-675 / 2019
  • This study examined the linguistic forms and etymological origins of Korean Sign Language place names. Previous research has shown that the general place names all derive from Chinese characters, except Seoul and Imsil. The forms and origins of the sign language place names currently in use were then analyzed through interviews and publications of the Korean Association of the Deaf. The analysis found that the majority are indigenous signs created and used by deaf people, rather than loan forms based on Chinese characters, Hangul, or loanwords. Given that place names are a precious cultural heritage, representing the history, culture, and identity of their area, the rich iconicity of these sign names makes them worth preserving and transmitting. The indigenous sign language place names are likewise valuable as Deaf culture and as Korean Sign Language: even where the geographical character of an area has changed or a local product has disappeared, the origin of the sign has been handed down across generations to the present. They can be regarded as a visual form of Korean Sign Language and a very valuable heritage for the preservation of Deaf culture.

Development of Sign Language Translation System using Motion Recognition of Kinect (키넥트의 모션 인식 기능을 이용한 수화번역 시스템 개발)

  • Lee, Hyun-Suk;Kim, Seung-Pil;Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.235-242 / 2013
  • In this paper, a system that translates sign language through motion recognition with the Kinect camera is developed for communication between hearing-impaired people or people with language disabilities and hearing people. The proposed translation algorithm is built on the core functions of Kinect, and two normalization methods, length normalization and elbow normalization, are introduced to improve translation accuracy across various sign language users. The sign language data are then compared in charts to show how effective these normalizations are. The accuracy of the program is demonstrated by entering 10 databases and translating sign languages ranging from simple to complex signs. In addition, the reliability of the translation is improved by applying the program to people with various body shapes and correcting the measurement errors caused by body shape.
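The length-normalization idea, scaling skeleton coordinates so users of different body sizes produce comparable trajectories, can be sketched as follows. The joint names and the choice of shoulder-to-hand distance as the reference length are assumptions for illustration, not the paper's exact method.

```python
# Minimal sketch of "length normalization": translate joints to a
# shoulder-centered frame and scale by arm length so that differently
# sized users produce comparable sign trajectories.
import math

def distance(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def normalize_by_arm_length(joints):
    """Scale all joints so the shoulder-to-hand distance becomes 1.0."""
    arm_len = distance(joints["shoulder"], joints["hand"])
    origin = joints["shoulder"]
    return {
        name: tuple((c - o) / arm_len for c, o in zip(pos, origin))
        for name, pos in joints.items()
    }

joints = {"shoulder": (0.0, 0.0, 0.0),
          "elbow": (0.3, -0.2, 0.0),
          "hand": (0.6, -0.8, 0.0)}
norm = normalize_by_arm_length(joints)
print(norm["hand"])  # unit-length vector from shoulder to hand
```

After normalization, gesture templates recorded from one user can be matched against another user's input without a per-user calibration step.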

A Low-Cost Speech to Sign Language Converter

  • Le, Minh;Le, Thanh Minh;Bui, Vu Duc;Truong, Son Ngoc
    • International Journal of Computer Science & Network Security / v.21 no.3 / pp.37-40 / 2021
  • This paper presents the design of a speech-to-sign-language converter for deaf and hard-of-hearing people. The device is low-cost, consumes little power, and can work entirely offline. Speech recognition is implemented using an open-source API, the Pocketsphinx library. In this work, we propose a context-oriented language model, which measures the similarity between the recognized speech and predefined speech to decide the output: the output is selected from the recommended speech stored in the database that best matches the recognized speech. The proposed context-oriented language model improves the speech recognition rate by 21% while working entirely offline. A decision module that measures the similarity between the two texts using Levenshtein distance selects the output sign language, which is generated as a set of sequential images corresponding to the recognized speech. The converter is deployed on a Raspberry Pi Zero board as a low-cost deaf-assistive device.
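The Levenshtein-based decision module described above can be sketched directly: compute the edit distance from the recognized text to each predefined sentence and pick the closest. The candidate list here is illustrative, not the paper's database.

```python
# Sketch of a Levenshtein-distance decision module: pick the predefined
# sentence closest to the (possibly misrecognized) speech transcript.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(recognized, candidates):
    """Return the candidate with the smallest edit distance."""
    return min(candidates, key=lambda c: levenshtein(recognized, c))

candidates = ["good morning", "thank you", "how are you"]
print(best_match("tank yu", candidates))  # "thank you"
```

Restricting the output to a fixed candidate set is what makes the offline recognizer usable: even a noisy transcript usually lands nearest its intended sentence.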

Soft Sign Language Expression Method of 3D Avatar (3D 아바타의 자연스러운 수화 동작 표현 방법)

  • Oh, Young-Joon;Jang, Hyo-Young;Jung, Jin-Woo;Park, Kwang-Hyun;Kim, Dae-Jin;Bien, Zeung-Nam
    • The KIPS Transactions:PartB / v.14B no.2 / pp.107-118 / 2007
  • This paper proposes a 3D avatar that expresses sign language naturally, using lips, facial expression, complexion, pupil motion, and body motion as well as hand shape, hand posture, and hand motion, to overcome the limitations of conventional sign language avatars from a deaf person's viewpoint. To describe the motion data of the hands and other body components structurally and to enhance database performance, we introduce the concept of a hyper sign sentence. We show the superiority of the developed system with a usability test based on a questionnaire survey.

Human-like sign-language learning method using deep learning

  • Ji, Yangho;Kim, Sunmok;Kim, Young-Joo;Lee, Ki-Baek
    • ETRI Journal / v.40 no.4 / pp.435-445 / 2018
  • This paper proposes a human-like sign-language learning method that uses a deep-learning technique. Inspired by the fact that humans can learn sign language from just a set of pictures in a book, in the proposed method, the input data are pre-processed into an image. In addition, the network is partially pre-trained to imitate the preliminarily obtained knowledge of humans. The learning process is implemented with a well-known network, that is, a convolutional neural network. Twelve sign actions are learned in 10 situations, and can be recognized with an accuracy of 99% in scenarios with low-cost equipment and limited data. The results show that the system is highly practical, as well as accurate and robust.
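The "pre-process the input data into an image" step can be sketched as packing a variable-length sequence of per-frame features into a fixed-size 2-D grid that an image classifier can consume. The frame format and the nearest-frame resampling are assumptions for illustration, not the paper's exact pipeline.

```python
# Hedged sketch: stack per-frame joint coordinates row by row into a
# fixed-height 2-D array, so one whole sign becomes a single "picture"
# suitable for a convolutional neural network.

def frames_to_image(frames, height=8):
    """Pack a variable-length frame sequence into a fixed-height grid.

    Each frame is a flat list of joint coordinates; the sequence is
    resampled (nearest frame) to exactly `height` rows.
    """
    rows = []
    for r in range(height):
        src = min(int(r * len(frames) / height), len(frames) - 1)
        rows.append(list(frames[src]))
    return rows

# Three frames of four coordinates each, stretched to eight rows.
frames = [[0.1, 0.2, 0.3, 0.4],
          [0.5, 0.6, 0.7, 0.8],
          [0.9, 1.0, 1.1, 1.2]]
image = frames_to_image(frames)
print(len(image), len(image[0]))  # 8 4
```

Because every sign is mapped to the same image shape regardless of its duration, a standard CNN with a fixed input layer can be trained on the result without sequence models.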

Effects of Mothers' Nurturing Attitude and Mothers' Sign Language Level on the Depression of Hearing Impairment Children (청각장애 아동의 우울에 대한 어머니의 양육태도와 수화수준의 영향)

  • Choi, Young-Hee;Cho, Moon-Kyo
    • The Korean Journal of Community Living Science / v.23 no.1 / pp.41-50 / 2012
  • This study was performed to understand the depression of children with hearing impairment in relation to their mothers' nurturing attitude and sign language level. The subjects were 131 hearing-impaired children aged 9 to 16 years and their mothers, who had no hearing impairments. The children's depression was assessed with the CDI (Kovacs 1983) as adapted by Cho and Lee (1990), and the maternal attitude was measured with the instrument developed by Oh and Lee (1982) and revised by Lim (1987). The results were as follows. First, girls' depression was higher than boys', and children in a dormitory-type school showed higher depression than those in a general-type school. Second, children's depression did not differ by mother-child communication method but did differ by the mother's sign language level: children whose mothers had a high level of sign language showed the highest depression, and those whose mothers had a beginner level showed the lowest. Mothers' affective, goal-achieving, and rational attitudes were negatively related to children's depression. Third, the depression of hearing-impaired children was influenced mainly by the maternal affective attitude, followed by the type of school the children attended.

SEMANTIC FEATURE DETECTION FOR REAL-TIME IMAGE TRANSMISSION OF SIGN LANGUAGE AND FINGER SPELLING

  • Hou, Jin;Aoki, Yoshinao
    • Proceedings of the IEEK Conference / 2002.07c / pp.1662-1665 / 2002
  • This paper proposes a novel semantic feature detection (SFD) method for real-time image transmission of sign language and finger spelling. We extract semantic information as an interlingua from input text by natural language processing, and then transmit the semantic features, which are in fact a parameterized action representation, to 3-D articulated humanoid models prepared on each client in remote locations. Once the SFD is received, the virtual human is animated from the synthesized SFD. Experimental results on Japanese Sign Language and Chinese Sign Language demonstrate that the algorithm is effective for real-time delivery of sign language and finger spelling images.
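The transmission idea above, sending a compact parameterized action representation instead of video and animating an avatar on the client, can be sketched as a small serialized message. The field names are hypothetical, not the paper's actual SFD format.

```python
# Sketch: encode one sign's semantic features as a small JSON message,
# so remote clients can animate a local humanoid model instead of
# receiving video frames. Field names are hypothetical.
import json

def encode_sfd(word, hand_shape, location, movement):
    """Serialize one sign's semantic features for transmission."""
    return json.dumps({
        "word": word,
        "hand_shape": hand_shape,   # handshape class label
        "location": location,       # position in signing space
        "movement": movement,       # motion primitive name
    })

msg = encode_sfd("hello", "flat", "chin", "arc-outward")
decoded = json.loads(msg)         # client side: parameters for the avatar
print(decoded["movement"])        # arc-outward
```

A message like this is a few dozen bytes per sign, which is what makes real-time delivery feasible on low-bandwidth links compared with streaming rendered video.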
