• Title/Summary/Keyword: sign recognition

Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun;Sung, Horyeol;Lee, Seungjae;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.325-330 / 2022
  • This paper presents the collection and evaluation of a dataset for deep learning research on Korean sign language, covering tasks such as sign language recognition, sign language translation, and sign language segmentation. Deep learning research on sign language faces two difficulties. First, sign languages are hard to recognize because they combine multiple modalities, including hand movements, hand orientations, and facial expressions. Second, training data is scarce: currently, the KETI dataset is the only known Korean sign language dataset for deep learning. Sign language datasets for deep learning are classified into two categories, isolated and continuous sign language, and although several foreign sign language datasets have been collected over time, they too are insufficient for deep learning research. We therefore collected a large-scale Korean sign language dataset and evaluated it with a baseline model, TSPNet, which achieves state-of-the-art performance in sign language translation. The collected dataset consists of a total of 11,402 image-text pairs. Our experiment with the baseline model on this dataset yields a BLEU-4 score of 3.63, which can serve as the baseline performance for the Korean sign language dataset. We hope that our experience in collecting a Korean sign language dataset helps facilitate further research on Korean sign language.
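The BLEU-4 metric reported above scores a candidate translation by the geometric mean of its clipped 1- to 4-gram precisions against a reference, times a brevity penalty. A minimal single-sentence sketch (no smoothing; real evaluations typically use a corpus-level, smoothed implementation such as sacreBLEU):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Single-reference, single-sentence BLEU-4: geometric mean of the
    clipped 1- to 4-gram precisions times a brevity penalty (no smoothing)."""
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0.0:
        return 0.0  # without smoothing, any zero precision zeroes the score
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1.0 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4.0)
```

A score of 3.63 on the 0-100 scale corresponds to `bleu4` returning about 0.036, which is typical of early baselines on hard translation tasks.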

Hierarchical Hand Pose Model for Hand Expression Recognition (손 표현 인식을 위한 계층적 손 자세 모델)

  • Heo, Gyeongyong;Song, Bok Deuk;Kim, Ji-Hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.10 / pp.1323-1329 / 2021
  • For hand expression recognition, hand pose recognition based on the static shape of the hand and hand gesture recognition based on dynamic hand movement are used together. In this paper, we propose a hierarchical hand pose model based on finger position and shape for hand expression recognition. For hand pose recognition, a finger model representing the finger state and a hand pose model using the finger states are constructed hierarchically on top of the open-source MediaPipe framework. The finger model is itself hierarchical, built from the bending of a single finger and the touch between two fingers. The proposed model can be used in various applications that transmit information through the hands, and its usefulness was verified by applying it to number recognition in sign language. Beyond sign language recognition, the proposed model is expected to find various applications in computer user interfaces.
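The two-level hierarchy described above (per-finger states combined into a hand pose) can be sketched from MediaPipe-style 3D landmark coordinates. The angle threshold and the pose table below are illustrative assumptions, not the paper's actual model:

```python
import math

def angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c in 3D."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def finger_state(mcp, pip, tip, straight_above=120.0):
    """Level 1: a finger counts as 'extended' if the angle at its middle
    joint is close to straight (180 deg); 'bent' otherwise.
    The 120-degree cut-off is an assumed value."""
    return "extended" if angle(mcp, pip, tip) >= straight_above else "bent"

def hand_pose(finger_states):
    """Level 2: map the five finger states to a coarse pose label
    (an illustrative table only -- the paper's pose model is richer)."""
    extended = sum(s == "extended" for s in finger_states)
    if extended == 5:
        return "open_hand"
    if extended == 0:
        return "fist"
    return f"{extended}_fingers"
```

In a real pipeline the three points per finger would come from MediaPipe's 21 hand landmarks rather than being given directly.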

Sign Language Recognition Using ART2 Algorithm (ART2 알고리즘을 이용한 수화 인식)

  • Kim, Kwang-Baek;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.5 / pp.937-941 / 2008
  • People with hearing difficulties use sign language as their most important means of communication, and through it they can broaden their personal relations and manage everyday life without inconvenience. However, with the recent growth of internet communication and the increase in video chatting and video communication services, they suffer from the absence of interpretation between hearing people and people with hearing difficulties. In this paper, we propose a sign language recognition method to address this problem. In the proposed method, the regions of the two hands are extracted from a sign language image acquired by a video camera by tracking the hands using RGB, YUV, and HSI color information and removing noise from the segmented images. The extracted hand regions are then learned and recognized by the ART2 algorithm, which is robust to noise and damage. In experiments with the proposed method on images of finger numbers from 1 to 10, we verified that it recognizes the numbers efficiently.
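The first stage of such a pipeline, extracting hand regions by color, can be sketched as a per-pixel skin test. The HSV thresholds below are rough illustrative assumptions (the paper combines RGB, YUV, and HSI information, which is more robust than any single color space):

```python
import colorsys

def is_skin(r, g, b):
    """Very rough HSV skin test (assumed thresholds): skin tones tend to
    have a low hue and moderate saturation and brightness."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h <= 50.0 / 360.0 and 0.15 <= s <= 0.9 and v >= 0.35

def hand_mask(pixels):
    """Binary mask over an iterable of (r, g, b) pixels -- the step that
    precedes noise removal and ART2 clustering of the hand regions."""
    return [1 if is_skin(*p) else 0 for p in pixels]
```

On real images the mask would then be cleaned with morphological operations before the hand regions are passed to the classifier.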

Artificial Neural Network for Quantitative Posture Classification in Thai Sign Language Translation System

  • Wasanapongpan, Kumphol;Chotikakamthorn, Nopporn
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2004.08a / pp.1319-1323 / 2004
  • In this paper, the problem of Thai sign language recognition using a neural network is considered. The paper addresses the classification of signs conveying quantitative meaning, e.g., large or small. When signs corresponding to different quantities are treated as belonging to different classes, the recognition error rate of a standard multi-layer perceptron increases as the precision in recognizing different quantities is increased. This is due to the fact that raising the quantitative recognition precision requires increasing the number of (increasingly similar) classes, which leads to more false classifications caused by misinterpreting the amount of quantity a sign conveys. In this paper, instead of treating signs conveying quantitative attributes of the same quantity type (such as 'size' or 'amount') as belonging to different classes, they are considered instances of a single class. Signs of the same quantity type are then further divided into subclasses according to the quantity level each sign is associated with. With this two-level classification, false classification among the main gesture classes becomes independent of the level of precision needed to distinguish quantitative levels. Moreover, the precision of quantitative level classification can be made higher in the recognition phase than in the training phase. A standard multi-layer perceptron with backpropagation learning was adapted to implement this two-level classification of quantitative gesture signs. Experimental results obtained using electronic-glove measurements of hand postures are included.
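The two-level idea can be sketched as follows: the first level picks the quantity-type class (e.g. from MLP output scores), and the second quantizes a continuous quantity estimate into a level within that class, so refining the levels no longer multiplies the number of main classes. The class names and level boundaries below are illustrative assumptions:

```python
def two_level_classify(main_scores, level_value, level_edges):
    """Level 1: choose the quantity-type class with the highest score
    (as an MLP softmax layer would).  Level 2: quantize a continuous
    quantity estimate into a level within that class.  Errors between
    main classes are independent of how finely the levels are cut."""
    main = max(main_scores, key=main_scores.get)
    # Count how many level boundaries the estimate exceeds.
    level = sum(level_value >= edge for edge in level_edges[main])
    return main, level
```

Because `level_edges` is only consulted at recognition time, the level granularity can be made finer than whatever was used during training, as the abstract notes.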

A Recognition of Traffic Safety Signs Using Japanese Puzzle (Japanese Puzzle을 이용한 교통안전 표지판 인식)

  • Sohn, Young-Sun
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.3 / pp.416-421 / 2008
  • This paper realizes a system that recognizes traffic safety signs by applying the principle of a puzzle game in reverse. The game used in this paper is one in which the solver, given numerical clues on the (x, y) coordinates, reproduces on a grid the shape intended by the puzzle's maker. After separating the traffic safety sign image from the input image, the system recognizes the shapes and colors constituting the sign using the puzzle principle above and outputs the content of the sign as text. Our system has a faster processing time and a better recognition rate than the existing system based on black-and-white image processing and recognition, without any penciling process.

Traffic Sign Area Detection System Based on Color Processing Mechanism of Human (인간의 색상처리방식에 기반한 교통 표지판 영역 추출 시스템)

  • Cheoi, Kyung-Joo;Park, Min-Chul
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.63-72 / 2007
  • A traffic sign on the road should be easy to distinguish even from afar and should be recognized in a short time. As a traffic sign is a very important object that provides information to drivers to enhance safety, it has to attract the driver's attention over any other object on the road. This paper proposes a new method of detecting the traffic sign area using an attention module, on the assumption that when driving we direct our gaze to the traffic sign first, before other objects. We analyze previous psychophysical and physiological studies to determine which features are used in human object recognition, especially in color processing, and detect the traffic sign area based on these results. Various kinds of traffic sign images were tested, and the results were good (97.8% average success rate).

Design and Implementation of Data Acquisition and Storage Systems for Multi-view Points Sign Language (다시점 수어 데이터 획득 및 저장 시스템 설계 및 구현)

  • Kim, Geunmo;Kim, Bongjae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.3 / pp.63-68 / 2022
  • According to the 2021 Disability Statistics Annual Report by the Korea Institute for the Development of Disabled Persons, there are 395,789 people with hearing impairment in Korea. These people experience considerable inconvenience due to hearing impairment, and many studies on the recognition and translation of Korean sign language are being conducted to address this problem. In sign language recognition and translation research, collecting sign language data is difficult because few people use sign language professionally. In addition, most existing data consists of sign language recorded from the front of the signer. To solve this problem, in this paper we designed and implemented a system that can collect sign language data from multiple viewpoints in real time, rather than from a single viewpoint, and store and manage it with high usability.

On-line dynamic hand gesture recognition system for the korean sign language (KSL) (한글 수화용 동적 손 제스처의 실시간 인식 시스템의 구현에 관한 연구)

  • Kim, Jong-Sung;Lee, Chan-Su;Jang, Won;Bien, Zeungnam
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.2 / pp.61-70 / 1997
  • Human hand gestures have long been used as a means of communication among people, interpreted as streams of tokens of a language. Signed language is a method of communication for hearing-impaired people, commonly using articulated gestures and postures of the hands and fingers. This paper presents a system that recognizes Korean Sign Language (KSL) and translates the recognition results into normal Korean text and sound. A pair of data gloves is used as the sensing device for detecting the motions of the hands and fingers. In this paper, we propose a dynamic gesture recognition method that employs fuzzy feature analysis for efficient classification of hand motions and applies a fuzzy min-max neural network for on-line pattern recognition.
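The fuzzy min-max network mentioned above classifies patterns with hyperboxes: a point inside a class's hyperbox has full membership, and membership decays with distance outside it. A simplified membership function in the style of Simpson's fuzzy min-max networks (the decay shape and the sensitivity parameter `gamma` are assumptions of this sketch):

```python
def hyperbox_membership(x, vmin, vmax, gamma=4.0):
    """Membership of point x in the hyperbox [vmin, vmax]: 1.0 inside,
    decreasing linearly with the largest per-dimension distance outside,
    at a rate controlled by the sensitivity parameter gamma."""
    m = 1.0
    for xi, lo, hi in zip(x, vmin, vmax):
        below = max(0.0, lo - xi)   # how far x falls below the box
        above = max(0.0, xi - hi)   # how far x exceeds the box
        m = min(m, max(0.0, 1.0 - gamma * max(below, above)))
    return m
```

At recognition time the input is assigned to the class whose hyperbox yields the highest membership; training expands or contracts hyperboxes on-line, which is what makes the network suitable for on-line gesture recognition.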

An Illumination Invariant Traffic Sign Recognition in the Driving Environment for Intelligent Vehicles (지능형 자동차를 위한 조명 변화에 강인한 도로표지판 검출 및 인식)

  • Lee, Taewoo;Lim, Kwangyong;Bae, Guntae;Byun, Hyeran;Choi, Yeongwoo
    • Journal of KIISE / v.42 no.2 / pp.203-212 / 2015
  • This paper proposes a traffic sign recognition method for real road environments. Video streams captured while driving have two characteristics that distinguish them from general object video streams. First, the number of traffic sign types is limited and their shapes are mostly simple. Second, the camera cannot take clear pictures of road scenes, since illumination and weather conditions change continuously. In this paper, we improve the modified census transform (MCT) to extract features effectively from road scenes with many illumination changes. The extracted features are collected into histograms and transformed by dense descriptors into very high-dimensional vectors, which are then encoded into a low-dimensional feature vector by Fisher-vector coding with a Gaussian mixture model. The proposed method achieves illumination-invariant detection and recognition, and its performance is sufficient to detect and recognize traffic signs in real time with high accuracy.
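The census transform underlying the features above compares each pixel of a neighborhood to a reference value; in the modified variant the reference is the neighborhood mean rather than the centre pixel, giving a 9-bit code per 3x3 patch that is unchanged by adding an offset to, or positively scaling, all pixels. A minimal sketch (the paper's further improvements to MCT are not reproduced here):

```python
def mct_3x3(patch):
    """Modified census transform of a 3x3 patch (list of three rows of
    three intensities): each pixel is compared to the patch mean and the
    nine comparison bits are packed, row-major, into one integer."""
    flat = [p for row in patch for p in row]
    mean = sum(flat) / 9.0
    code = 0
    for p in flat:
        code = (code << 1) | (1 if p > mean else 0)
    return code
```

Because both an intensity offset and a positive gain move every pixel and the mean together, the comparison bits, and hence the code, are illumination invariant, which is the property the paper exploits.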

Two-Stage Neural Networks for Sign Language Pattern Recognition (수화 패턴 인식을 위한 2단계 신경망 모델)

  • Kim, Ho-Joon
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.3 / pp.319-327 / 2012
  • In this paper, we present a sign language recognition model that does not use any wearable device for object tracking. System design and implementation issues such as data representation, feature extraction, and pattern classification methods are discussed. The proposed data representation for sign language patterns is robust to spatio-temporal variations of feature points. We present a feature extraction technique that improves computation speed by reducing the amount of feature data. A neural network model capable of incremental learning is described, and its behavior and learning algorithm are introduced. We also define a measure that reflects the relevance between feature values and pattern classes; this measure makes it possible to select more effective features without any degradation in performance. The proposed model is evaluated empirically through experiments on six types of sign language patterns.