• Title/Summary/Keyword: Korean sign language


CNN-based Sign Language Translation Program for the Deaf (CNN기반의 청각장애인을 위한 수화번역 프로그램)

  • Hong, Kyeong-Chan;Kim, Hyung-Su;Han, Young-Hwan
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.22 no.4
    • /
    • pp.206-212
    • /
    • 2021
  • As society develops, communication methods continue to diversify, but most of these advances serve hearing people and offer little benefit to the deaf. This paper therefore designs and implements a CNN-based sign language translation program to help deaf people communicate. The program translates sign language images captured through a webcam into their meanings based on trained data. It uses 24,000 self-produced images of Korean consonant and vowel fingerspelling and applies U-Net segmentation to train effective classification models. In the implemented program, 'ㅋ' showed the best performance among all sign language classes, with 97% accuracy and a 99% F1-score, while 'ㅣ' showed the highest performance among the vowels, with 94% accuracy and a 95.5% F1-score.
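
As a rough illustration of the classification stage described above (not the authors' actual architecture, which the abstract does not detail), the sketch below trains a small Keras CNN on segmented hand images; the class count, image size, and stand-in data are assumptions.

```python
# Minimal sketch, assuming a U-Net stage has already segmented the hand from
# the webcam frame. The layer sizes, class count and image size below are
# placeholders, not the paper's reported architecture.
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 24            # assumed number of consonant/vowel classes
IMG_SHAPE = (128, 128, 1)   # assumed size of the segmented hand mask

def build_classifier():
    """Plain CNN classifier applied to segmented hand images."""
    return models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The paper trains on its 24,000 segmented images; random arrays stand in here.
x_fake = np.random.rand(8, *IMG_SHAPE).astype("float32")
y_fake = np.random.randint(0, NUM_CLASSES, size=(8,))
model.fit(x_fake, y_fake, epochs=1, verbose=0)
```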

Design and Implementation of Data Acquisition and Storage Systems for Multi-view Points Sign Language (다시점 수어 데이터 획득 및 저장 시스템 설계 및 구현)

  • Kim, Geunmo;Kim, Bongjae
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.3
    • /
    • pp.63-68
    • /
    • 2022
  • According to the 2021 Disability Statistics Annual Report by the Korea Institute for the Development of Disabled Persons, there are 395,789 people with hearing impairment in Korea. These people experience considerable inconvenience because of their hearing impairment, and many studies on the recognition and translation of Korean sign language are being conducted to address this problem. In sign language recognition and translation research, collecting sign language data is difficult because few people use sign language professionally. In addition, most existing data consists of sign language recorded from the front of the signer. To address this problem, this paper designs and implements a storage system that can collect sign language data from multiple viewpoints in real time, rather than from a single viewpoint, and store and manage it with high usability.
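
The abstract does not specify the cameras, synchronization scheme, or storage backend, so the sketch below only illustrates the general idea of capturing frames from several viewpoints in parallel and storing them under a shared timestamp; the camera indices and output layout are assumptions.

```python
# Sketch of parallel multi-view capture; camera indices, directory layout and
# the millisecond-timestamp "sync" are assumptions, not the paper's design.
import os
import time
import threading

import cv2

VIEW_IDS = [0, 1, 2]            # assumed camera indices, one per viewpoint
OUT_DIR = "multiview_capture"   # assumed storage root

def capture_view(cam_id: int, stop: threading.Event) -> None:
    cap = cv2.VideoCapture(cam_id)
    view_dir = os.path.join(OUT_DIR, f"view{cam_id}")
    os.makedirs(view_dir, exist_ok=True)
    while not stop.is_set() and cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        ts = int(time.time() * 1000)          # shared wall-clock key across views
        cv2.imwrite(os.path.join(view_dir, f"{ts}.jpg"), frame)
    cap.release()

stop = threading.Event()
threads = [threading.Thread(target=capture_view, args=(i, stop)) for i in VIEW_IDS]
for t in threads:
    t.start()
time.sleep(5)   # record for five seconds
stop.set()
for t in threads:
    t.join()
```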

Enhanced Sign Language Transcription System via Hand Tracking and Pose Estimation

  • Kim, Jung-Ho;Kim, Najoung;Park, Hancheol;Park, Jong C.
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.3
    • /
    • pp.95-101
    • /
    • 2016
  • In this study, we propose a new system for constructing parallel corpora for sign languages, which are generally under-resourced in comparison to spoken languages. In order to achieve scalability and accessibility regarding data collection and corpus construction, our system utilizes deep learning-based techniques and predicts depth information to perform pose estimation on hand information obtainable from video recordings by a single RGB camera. These estimated poses are then transcribed into expressions in SignWriting. We evaluate the accuracy of hand tracking and hand pose estimation modules of our system quantitatively, using the American Sign Language Image Dataset and the American Sign Language Lexicon Video Dataset. The evaluation results show that our transcription system has a high potential to be successfully employed in constructing a sizable sign language corpus using various types of video resources.
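
As a hedged illustration of the "video in, symbolic transcription out" pipeline, the sketch below uses MediaPipe Hands as a stand-in for the paper's own hand-tracking and pose-estimation modules and reduces the SignWriting mapping to a placeholder; the file name and symbol labels are assumptions.

```python
# Sketch only: MediaPipe Hands stands in for the paper's hand-tracking and
# pose-estimation modules, and the SignWriting step is a placeholder.
import cv2
import mediapipe as mp

def landmarks_to_symbol(hand_landmarks):
    """Placeholder for the transcription step (hypothetical labels)."""
    mean_x = sum(p.x for p in hand_landmarks.landmark) / len(hand_landmarks.landmark)
    return "HAND_LEFT_SIDE" if mean_x < 0.5 else "HAND_RIGHT_SIDE"

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)
cap = cv2.VideoCapture("sign_clip.mp4")   # any single-RGB-camera recording (assumed path)
transcript = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        transcript.append([landmarks_to_symbol(h) for h in result.multi_hand_landmarks])
cap.release()
print(transcript[:10])
```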

Real-time Recognition of Continuous KSL & KMA using Automata and Fuzzy Techniques (한글 수화 및 지화의 실시간 인식 시스템 구현)

  • Lee, Chan-Su;Kim, Jong-Sung;Park, Gyu-Tae;Bien, Zeung-Nam;Jang, Won;Kim, Sung-Kwon
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1996.10a
    • /
    • pp.333-336
    • /
    • 1996
  • Sign language is a method of communication for deaf people, and in sign communication, sign words and the manual alphabet are used continuously. This paper proposes a system that continuously recognizes Korean Sign Language (KSL) and the Korean Manual Alphabet (KMA). For recognizing KSL and KMA, basic elements of sign language, namely 14 hand directions, 23 hand postures, and 14 hand orientations, are used. The system first recognizes the current motion state from the speed and change of speed of the motion using state automata. Depending on the state, basic-element classifiers based on a Fuzzy Min-Max Neural Network and fuzzy rules are executed, and the meaning of the signed gesture is selected from the recognized basic elements.

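A minimal sketch of the speed-based state-automaton idea from the abstract above: frames of a hand trajectory are labelled as holds, sign motions, or transitions by thresholding speed and its change. The thresholds are illustrative, and the Fuzzy Min-Max Neural Network classifiers used in the paper are not shown.

```python
# Illustrative speed-based state automaton; thresholds are made up, and the
# Fuzzy Min-Max Neural Network classifiers of the paper are not included.
import numpy as np

SPEED_LOW, SPEED_HIGH = 0.02, 0.10   # assumed thresholds (normalized units)

def segment_motion(positions: np.ndarray) -> list:
    """positions: (T, 3) hand trajectory; returns one state label per step."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    accel = np.abs(np.diff(speed, prepend=speed[0]))
    states = []
    for v, a in zip(speed, accel):
        if v < SPEED_LOW:
            states.append("HOLD")         # static posture, candidate KMA letter
        elif v > SPEED_HIGH and a > SPEED_LOW:
            states.append("TRANSITION")   # movement between signs, ignored
        else:
            states.append("SIGN_MOTION")  # candidate KSL motion segment
    return states

trajectory = np.cumsum(np.random.randn(50, 3) * 0.02, axis=0)  # stand-in data
print(segment_motion(trajectory)[:10])
```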

Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.631-653
    • /
    • 2020
  • Sign language is a natural, visually oriented and non-verbal communication channel between people that facilitates communication through facial and bodily expressions, postures and a set of gestures. It is primarily used for communication with people who are deaf or hard of hearing. To understand such communication quickly and accurately, this paper considers the design of a successful sign language translation system. The proposed system includes object detection and classification stages. First, the Single Shot MultiBox Detector (SSD) architecture is used for hand detection; then a deep learning structure based on Inception v3 plus a Support Vector Machine (SVM), which combines the feature extraction and classification stages, is proposed to translate the detected hand gestures. A sign language fingerspelling dataset is used to design the proposed model. The obtained results and comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation.
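
The sketch below illustrates the "deep features plus SVM" half of the proposed pipeline, using Keras' InceptionV3 as a fixed feature extractor and scikit-learn's SVC as the classifier; the SSD hand-detection stage is assumed to have already produced the crops, and the data here is random stand-in material.

```python
# Sketch of feature extraction with InceptionV3 followed by an SVM; the SSD
# detector is assumed to have produced the hand crops, and random arrays stand
# in for the fingerspelling dataset.
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from sklearn.svm import SVC

extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def features(crops: np.ndarray) -> np.ndarray:
    """crops: (N, 299, 299, 3) hand regions cropped by the detection stage."""
    return extractor.predict(preprocess_input(crops.astype("float32")), verbose=0)

x_train = np.random.randint(0, 255, size=(16, 299, 299, 3))   # stand-in crops
y_train = np.random.randint(0, 24, size=(16,))                # stand-in labels

clf = SVC(kernel="linear")
clf.fit(features(x_train), y_train)
print(clf.predict(features(x_train[:2])))
```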

Continuous Korean Sign Language Recognition using Automata-based Gesture Segmentation and Hidden Markov Model

  • Kim, Jung-Bae;Park, Kwang-Hyun;Bang, Won-Chul;Bien, Z. Zenn;Kim, Jong-Sung
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.105.2-105
    • /
    • 2001
  • This paper studies continuous Korean Sign Language (KSL) recognition using color vision. In recognizing gesture words such as sign language, it is very difficult to segment a continuous sign into individual sign words because the patterns are complicated and diverse. To solve this problem, we decompose KSL into 18 hand-motion classes according to their patterns and represent sign words as combinations of these hand motions. By observing the speed and the change of speed of the hand motion and using state automata, we reject unintentional motions such as preparatory movements and meaningless movements between sign words. To recognize the 18 hand-motion classes we adopt a Hidden Markov Model (HMM). Using these methods, we recognize 5 KSL sentences and obtain a 94% recognition rate.

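As a rough sketch of the HMM component (with hmmlearn standing in for whatever toolkit the authors used), the code below trains one Gaussian HMM per hand-motion class and recognizes a segment by maximum log-likelihood; the feature dimension, class count, and training data are stand-ins.

```python
# Sketch: one Gaussian HMM per hand-motion class (hmmlearn as a stand-in
# toolkit), recognition by maximum log-likelihood over a motion segment.
import numpy as np
from hmmlearn import hmm

N_CLASSES, N_FEATURES = 3, 4     # the paper defines 18 motion classes

def train_class_hmm(sequences):
    """sequences: list of (T_i, N_FEATURES) arrays belonging to one class."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

# Stand-in training data: a few random sequences per class.
models = [train_class_hmm([np.random.randn(30, N_FEATURES) + c for _ in range(5)])
          for c in range(N_CLASSES)]

def recognize(segment: np.ndarray) -> int:
    """Return the class whose HMM gives the segment the highest likelihood."""
    return int(np.argmax([m.score(segment) for m in models]))

print(recognize(np.random.randn(30, N_FEATURES) + 1))
```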

Implementation of Real-time Recognition System for Korean Sign Language (한글 수화의 실시간 인식 시스템의 구현)

  • Han, Young-Hwan
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.4
    • /
    • pp.85-93
    • /
    • 2005
  • In this paper, we propose a recognition system that tracks the unmarked hand of a person performing sign language against a complex background. First, we measure the entropy of the difference image between consecutive frames. Using color information similar to skin color in the candidate regions with high entropy, we extract only the hand region from the background image. From the extracted hand region we detect a contour and recognize the sign by applying an improved centroidal profile method. In experiments on 6 kinds of sign language movements, unlike existing methods, the system stably recognizes signs against complex backgrounds and under illumination changes without markers. It achieves a recognition rate of more than 95% per person and 90~100% per movement at 15 frames/second.

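A simplified sketch of the front end described above: motion is located from the difference of consecutive frames and intersected with a skin-colour mask to isolate the hand. The entropy measure and the improved centroidal-profile recognition step are not reproduced, and the YCrCb skin range and thresholds are common defaults rather than the paper's values.

```python
# Simplified front end: frame differencing intersected with a skin-colour mask.
# The YCrCb range and threshold are common defaults, not the paper's values,
# and the entropy and centroidal-profile steps are omitted.
import cv2
import numpy as np

SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)     # assumed YCrCb skin range
SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)

def hand_mask(prev_bgr: np.ndarray, cur_bgr: np.ndarray) -> np.ndarray:
    motion = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY))
    _, motion = cv2.threshold(motion, 15, 255, cv2.THRESH_BINARY)
    skin = cv2.inRange(cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2YCrCb), SKIN_LOW, SKIN_HIGH)
    return cv2.bitwise_and(motion, skin)               # moving skin-coloured pixels

def largest_contour(mask: np.ndarray):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

prev = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)   # stand-in frames
cur = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
mask = hand_mask(prev, cur)
print(mask.shape, largest_contour(mask) is not None)
```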

A Study on the Korean Sign Language Platform Based on DirectX (DirectX 기반의 KSL 실행 플랫폼의 개발과 구현)

  • Ku, Ja-Hyo;Ryoo, Yun-Kyoo
    • Journal of the Korea society of information convergence
    • /
    • v.1 no.1
    • /
    • pp.25-32
    • /
    • 2008
  • Advances in digital and multimedia technology have increased the demand for realistic, intuitive information and for diversified forms of expression, and the use of animated characters in mass media continues to grow. With the development of graphics techniques, such characters can now be rendered realistically and smoothly; in general, even fine movements such as a character's hair can be expressed using diverse data input devices. Nevertheless, research on multimedia technologies for disabled persons remains insufficient. In this paper, we extract sign language motion data using DirectX and propose a method for creating a Korean sign language platform.

  • PDF

3D Model for Korean-Japanese Sign Language Image Communication (한-일 수화 영상통신을 위한 3차원 모델)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.929-932
    • /
    • 1998
  • In this paper we propose a method of representing emotional expressions and lip shapes for sign language communication using a 3-dimensional model. First, we employ the action units (AU) of the Facial Action Coding System (FACS) to display the facial expressions. We then define 11 basic lip shapes and the sounding time of each component in a syllable in order to synthesize lip shapes more precisely for Korean characters. Experimental results show that the proposed method can be used efficiently for sign language image communication between different languages.

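To make the syllable-decomposition idea concrete, the sketch below splits a Hangul syllable into initial/medial/final jamo by Unicode arithmetic and looks up a lip shape and sounding time for the vowel; the mapping table and timings are placeholders, not the paper's 11 actual shapes.

```python
# Sketch of syllable decomposition plus a hypothetical vowel-to-lip-shape table;
# the 11 shapes and timings in the paper are not reproduced here.
def decompose_hangul(ch: str):
    """Split a composed Hangul syllable into (initial, medial, final) indices."""
    code = ord(ch) - 0xAC00
    if not 0 <= code < 11172:
        raise ValueError("not a composed Hangul syllable")
    return code // 588, (code % 588) // 28, code % 28

# Hypothetical medial-vowel index -> (lip_shape_id, duration_ms) table.
VOWEL_LIP_SHAPE = {0: (1, 180),    # 'ㅏ' (assumed values)
                   13: (4, 170),   # 'ㅜ'
                   20: (5, 160)}   # 'ㅣ'

def lip_for_syllable(syllable: str):
    _, medial, _ = decompose_hangul(syllable)
    return VOWEL_LIP_SHAPE.get(medial, (0, 150))   # default shape if unmapped

print(decompose_hangul("수"), lip_for_syllable("수"))
```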

Addressing Low-Resource Problems in Statistical Machine Translation of Manual Signals in Sign Language (말뭉치 자원 희소성에 따른 통계적 수지 신호 번역 문제의 해결)

  • Park, Hancheol;Kim, Jung-Ho;Park, Jong C.
    • Journal of KIISE
    • /
    • v.44 no.2
    • /
    • pp.163-170
    • /
    • 2017
  • Despite the rise of studies on spoken-to-sign-language translation, the low-resource problem of sign language corpora has rarely been addressed. As a first step towards translating from spoken to sign language, we address the problems arising from resource scarcity when translating spoken language into manual signals using statistical machine translation techniques. More specifically, we propose three preprocessing methods: 1) paraphrase generation, which increases the size of the corpora; 2) lemmatization, which increases the frequency of each word in the corpora and the translatability of new input words in spoken language; and 3) elimination of function words that are not glossed into manual signals, which better aligns the corresponding constituents of the bilingual sentence pairs. In our experiments, we used different types of English-American Sign Language parallel corpora. The experimental results show that each method, and the combination of the methods, improves the quality of manual signal translation regardless of the type of corpus.
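
As an illustration of two of the three preprocessing steps (lemmatization and removal of function words with no manual-sign gloss), the sketch below uses NLTK as a stand-in for the paper's tools; paraphrase generation is omitted, and the English stopword list is only an approximation of the unglossed function words.

```python
# Sketch of lemmatization and function-word removal with NLTK as a stand-in;
# the English stopword list only approximates "function words with no gloss".
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)
nltk.download("stopwords", quiet=True)

lemmatizer = WordNetLemmatizer()
FUNCTION_WORDS = set(stopwords.words("english"))

def preprocess_spoken_side(sentence: str):
    tokens = [t for t in sentence.lower().split() if t.isalpha()]
    lemmas = [lemmatizer.lemmatize(t) for t in tokens]
    return [t for t in lemmas if t not in FUNCTION_WORDS]

print(preprocess_spoken_side("the children are playing in the parks"))
# -> ['child', 'playing', 'park']  (noun-default lemmatization)
```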