• Title/Summary/Keyword: Sign Recognition

Search Results: 258

Sign Language Recognition Using Sequential RAM-based Cumulative Neural Networks (순차 램 기반 누적 신경망을 이용한 수화 인식)

  • Lee, Dong-Hyung;Kang, Man-Mo;Kim, Young-Kee;Lee, Soo-Dong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.5 / pp.205-211 / 2009
  • A weightless neural network (WNN) offers faster processing and lower computational cost than a weighted neural network, which must repeatedly readjust its weights. Behavioral information such as sequential gestures, however, contains strong serial correlation, so recognizing it demands considerable computation and processing time. Many earlier approaches reduced this cost by adding preprocessing stages or hardware interface devices. In this paper, we propose a RAM-based Sequential Cumulative Neural Network (SCNN) model for sign language recognition that requires neither preprocessing nor a hardware interface. We experimented with compound words of continuous Korean sign language, using edge-detected binary images captured from a camera as input. Without any preprocessing, the system achieved a 93% recognition rate.

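The paper's implementation details are not given in the abstract; as a rough illustration of how a RAM-based weightless classifier works, the sketch below implements a WiSARD-style discriminator over a binary (edge-detected) image. The class name, tuple size, and random pixel mapping are illustrative choices, not the paper's SCNN.

```python
import numpy as np

# Minimal sketch of a RAM-based (weightless) discriminator in the WiSARD style.
# Learning writes binary input patterns into lookup tables instead of adjusting
# weights; this is an illustration only, not the paper's SCNN model.
class RamDiscriminator:
    def __init__(self, input_bits, tuple_size=8, seed=0):
        rng = np.random.default_rng(seed)
        self.mapping = rng.permutation(input_bits)   # random pixel-to-tuple mapping
        self.tuple_size = tuple_size
        self.rams = [dict() for _ in range(input_bits // tuple_size)]

    def _addresses(self, binary_image):
        bits = np.asarray(binary_image).ravel()[self.mapping]
        for i in range(len(self.rams)):
            chunk = bits[i * self.tuple_size:(i + 1) * self.tuple_size]
            yield i, int("".join("1" if b else "0" for b in chunk), 2)

    def train(self, binary_image):
        # Writing a 1 into each addressed RAM cell is the whole "learning" step.
        for i, addr in self._addresses(binary_image):
            self.rams[i][addr] = 1

    def score(self, binary_image):
        # Response = fraction of RAM nodes that recognize their address.
        hits = sum(self.rams[i].get(addr, 0) for i, addr in self._addresses(binary_image))
        return hits / len(self.rams)

# Toy usage: one discriminator per sign class; the highest response wins.
signs = {"hello": np.random.default_rng(1).integers(0, 2, (16, 16)),
         "thanks": np.random.default_rng(2).integers(0, 2, (16, 16))}
models = {}
for name, img in signs.items():
    d = RamDiscriminator(input_bits=256)
    d.train(img)
    models[name] = d
print(max(models, key=lambda n: models[n].score(signs["hello"])))
```

The paper's cumulative, sequential extension for continuous gestures is not reproduced here; the sketch only shows the RAM-lookup principle that makes such networks cheap to train and evaluate.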

Development of Sign Language Translation System using Motion Recognition of Kinect (키넥트의 모션 인식 기능을 이용한 수화번역 시스템 개발)

  • Lee, Hyun-Suk;Kim, Seung-Pil;Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.235-242 / 2013
  • In this paper, a system that translates sign language through the motion recognition of a Kinect camera is developed to support communication between people with hearing or speech impairments and hearing people. The translation algorithm is built on the core functions of Kinect, and two normalization methods, length normalization and elbow normalization, are introduced to improve translation accuracy across different signers. The normalized sign language data are then compared in charts to show how effective each normalization method is. The accuracy of the program is demonstrated by entering ten databases and translating signs ranging from simple to complex. In addition, the reliability of the translation is improved by applying the program to people with various body shapes and correcting the resulting measurement errors.
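
The abstract does not give the normalization formulas; the sketch below only illustrates the general idea of length normalization, rescaling each arm segment of the Kinect skeleton to unit length so that signers with different arm lengths produce comparable hand positions. The joint names and the reconstruction step are hypothetical.

```python
import numpy as np

# Hypothetical illustration of length normalization for Kinect skeleton joints:
# each arm segment is rescaled to unit length so that different body sizes
# yield comparable hand positions relative to the shoulder.
def length_normalize(shoulder, elbow, hand):
    upper = elbow - shoulder
    lower = hand - elbow
    upper_n = upper / np.linalg.norm(upper)
    lower_n = lower / np.linalg.norm(lower)
    # Hand position in a body-size-independent arm frame.
    return shoulder + upper_n + lower_n

shoulder = np.array([0.0, 0.0, 2.0])
elbow = np.array([0.3, -0.2, 2.0])
hand = np.array([0.6, 0.1, 1.8])
print(length_normalize(shoulder, elbow, hand))
```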

Fast Convergence GRU Model for Sign Language Recognition

  • Subramanian, Barathi;Olimov, Bekhzod;Kim, Jeonghong
    • Journal of Korea Multimedia Society / v.25 no.9 / pp.1257-1265 / 2022
  • Sign language recognition is challenging due to hand occlusion, the required accuracy of hand gestures, and high computational cost. In recent years, deep learning techniques have made significant advances in this field, but these models are large and complex, struggle to manage long-term sequential data, and lack the ability to capture useful information through efficient processing with fast convergence. To overcome these challenges, we propose a word-level sign language recognition (SLR) system that combines a real-time human pose detection library with a minimized version of the gated recurrent unit (GRU) model. Each gate unit is optimized by discarding the depth-weighted reset gate in the GRU cells and considering only the current input. Furthermore, we use the sigmoid rather than the hyperbolic tangent activation of the standard GRU, owing to the performance loss associated with the latter in deeper networks. Experimental results demonstrate that our pose-based optimized GRU (Pose-OGRU) outperforms the standard GRU model in prediction accuracy, convergence, and information processing capability.
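
A minimal sketch of the reduced GRU cell described above follows: the reset gate is dropped, the candidate state sees only the current input, and a sigmoid replaces the tanh activation. Dimensions and weights are placeholders; the exact Pose-OGRU equations may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sketch of a reduced GRU cell following the abstract's description: no reset
# gate, candidate state built from the current input only, sigmoid in place of
# tanh. Weights are random placeholders, not trained parameters.
class ReducedGRUCell:
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.Wz = rng.normal(0, 0.1, (hidden_size, input_size + hidden_size))
        self.Wh = rng.normal(0, 0.1, (hidden_size, input_size))  # candidate sees only x

    def step(self, x, h_prev):
        z = sigmoid(self.Wz @ np.concatenate([x, h_prev]))  # update gate
        h_cand = sigmoid(self.Wh @ x)                        # no reset gate
        return (1.0 - z) * h_prev + z * h_cand

cell = ReducedGRUCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(5, 4)):  # a toy 5-step pose sequence
    h = cell.step(x, h)
print(h.shape)
```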

Research on Methods to Increase Recognition Rate of Korean Sign Language using Deep Learning

  • So-Young Kwon;Yong-Hwan Lee
    • Journal of Platform Technology / v.12 no.1 / pp.3-11 / 2024
  • Deaf people who use sign language as their first language sometimes have difficulty communicating because they do not know spoken Korean. Since deaf people are also members of society, we must help create a society in which everyone can live together. In this paper, we present a method to increase the recognition rate of Korean sign language using a CNN model. When the original image was used as input to the CNN model the accuracy was 0.96, whereas when only the skin region extracted in the YCbCr color space was used the accuracy was 0.72, confirming that feeding the original image itself leads to better results. In other studies, a combined Conv1D and LSTM model and an AlexNet model each achieved an accuracy of 0.92. The CNN model proposed in this paper reaches 0.96 and is shown to be helpful in recognizing Korean sign language.

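The skin-region input compared above is typically obtained by thresholding in YCbCr space; the OpenCV sketch below shows one common way to do this. The Cb/Cr thresholds are textbook values, not necessarily those used in the paper.

```python
import cv2
import numpy as np

# Sketch of extracting a skin-region image in YCbCr space, as compared against
# the raw image in the abstract. OpenCV uses the YCrCb channel order.
def skin_region(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

# frame = cv2.imread("sign.jpg")
# cv2.imwrite("sign_skin.jpg", skin_region(frame))
```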

Real-time Sign Object Detection in Subway station using Rotation-invariant Zernike Moment (회전 불변 제르니케 모멘트를 이용한 실시간 지하철 기호 객체 검출)

  • Weon, Sun-Hee;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of Digital Contents Society / v.12 no.3 / pp.279-289 / 2011
  • A real-time walking assistance system for visually impaired people combines the latest hardware and software techniques to provide safe walking guidance and convenient services. The system consists of obstacle detection and perception, place recognition, and sign recognition so that pedestrians can walk safely and reach their destination. In this paper we present a sign object detection system for subway stations, sign recognition being one of the important components of a walking assistance system. We propose an adaptive feature map that robustly extracts sign object regions from complex environments with varying lighting and noise, and we recognize signs using fast Zernike moment features, which are invariant to the translation, rotation, and scale changes that occur while walking. We consider three types of signs, namely arrows, restroom signs, and exit numbers, and perform the training and recognition steps with an AdaBoost classifier. Experimental results on a database of 5,000 sign images show an average detection rate of 87.16% at 20 frames/sec for the three sign types, demonstrating that the method is suitable and stable for a real-time system.
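
As a small illustration of why Zernike moment magnitudes suit signs seen at arbitrary orientation, the sketch below computes them for a binary patch and for its rotated copy using the mahotas library (assumed available); the paper's fast Zernike implementation and AdaBoost stage are not reproduced.

```python
import numpy as np
from mahotas.features import zernike_moments

# Zernike moment magnitudes are rotation-invariant descriptors of a shape.
# This sketch compares the descriptor of a toy binary patch with that of the
# same patch rotated by 90 degrees; the difference should be small.
def zernike_descriptor(binary_patch, degree=8):
    radius = min(binary_patch.shape) // 2
    return zernike_moments(binary_patch, radius, degree=degree)

patch = np.zeros((64, 64), dtype=np.uint8)
patch[16:48, 28:36] = 1              # a toy vertical bar standing in for a sign icon
rotated = np.rot90(patch)            # the same shape rotated by 90 degrees
diff = np.abs(zernike_descriptor(patch) - zernike_descriptor(rotated))
print(diff.max())                    # near zero up to discretization error
```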

Automatic Recognition of Direction Information in Road Sign Image Using OpenCV (OpenCV를 이용한 도로표지 영상에서의 방향정보 자동인식)

  • Kim, Gihong;Chong, Kyusoo;Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.4 / pp.293-300 / 2013
  • Road signs are important infrastructure for safe and smooth traffic, providing useful information to drivers, and a road sign database is needed to manage them systematically. Such a database can be built by detecting and recognizing signs manually from imagery, but this is time-consuming and costly. In this study, we propose algorithms for the automatic recognition of direction information in road sign images, implemented with the OpenCV library and applied to road sign imagery. The program that automatically detects and recognizes direction information is composed of several modules: image enhancement, image binarization, arrow region extraction, interest point extraction, and template image matching. The results confirm the feasibility of automatically recognizing direction information in road sign images.
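
A minimal OpenCV sketch of the binarization and template-matching modules listed above follows; the Otsu thresholding and the template set are illustrative choices, not the authors' exact pipeline.

```python
import cv2
import numpy as np

# Sketch of the binarization and template-matching steps for arrow recognition.
# The template images and threshold settings are illustrative only.
def match_direction(sign_gray, templates):
    _, binary = cv2.threshold(sign_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    best_name, best_score = None, -1.0
    for name, template in templates.items():
        result = cv2.matchTemplate(binary, template, cv2.TM_CCOEFF_NORMED)
        score = float(result.max())
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# templates = {"left": cv2.imread("arrow_left.png", 0),
#              "right": cv2.imread("arrow_right.png", 0),
#              "straight": cv2.imread("arrow_up.png", 0)}
# print(match_direction(cv2.imread("road_sign.png", 0), templates))
```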

A Pattern Recognition Receptor, SIGN-R1, Mediates ROS Generation against Polysaccharide Dextran, Resulting in Increase of Peroxiredoxin-1 and Its Interaction to SIGN-R1

  • Choi, Heong-Jwa;Choi, Woo-Sung;Park, Jin-Yeon;Kang, Kyeong-Hyeon;Prabagar, Miglena G.;Shin, Chan-Young;Kang, Young-Sun
    • Biomolecules & Therapeutics
    • /
    • v.18 no.3
    • /
    • pp.271-279
    • /
    • 2010
  • Streptococcus pneumoniae is a major pathogen that frequently causes serious infections in children, the elderly, and immunocompromised patients. S. pneumoniae is known to produce reactive oxygen species (ROS), and pneumococcus-produced ROS is considered to play a role in pneumococcal pathogenesis. SIGN-R1 is the principal receptor for the capsular polysaccharides (CPSs) of S. pneumoniae. However, little is known about the protective role of SIGN-R1 against S. pneumoniae-produced ROS in SIGN-R1+ macrophages. While investigating this protective role, we found that SIGN-R1 binds intimately to peroxiredoxin-1 (Prx-1), one of the small antioxidant proteins, in vitro and in vivo. This interaction increased with the ROS generated by stimulating SIGN-R1 with dextran, a polysaccharide ligand of SIGN-R1. Crosslinking SIGN-R1 with the 22D1 anti-SIGN-R1 antibody also increased Prx-1 in vitro or in vivo. These results suggest that stimulation of SIGN-R1 with the CPSs of S. pneumoniae increases the expression level of Prx-1 through ROS and its subsequent interaction with SIGN-R1, providing an important antioxidant role in host protection against S. pneumoniae.

Improvement of Korean Sign Language Recognition System by User Adaptation (사용자 적응을 통한 한국 수화 인식 시스템의 개선)

  • Jung, Seong-Hoon;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference / 2007.04a / pp.301-303 / 2007
  • This paper presents user adaptation methods to overcome the limitations of a user-independent model and a user-dependent model in a Korean sign language recognition system. To adapt model parameters for unobserved states in hidden Markov models, we introduce new methods based on motion similarity and on prediction from the adaptation history, achieving faster adaptation and higher recognition rates compared with previous methods.

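The abstract does not give the adaptation equations; the sketch below is only a loose illustration of the general idea of propagating an observed state's adaptation shift to unobserved states in proportion to a motion-similarity weight. All quantities are toy placeholders, not the paper's update rules.

```python
import numpy as np

# Illustrative sketch only: shift the Gaussian mean of an observed HMM state
# toward a new user's data, and propagate that shift to unobserved states
# weighted by a motion-similarity score between states.
def adapt_means(state_means, observed_state, user_samples, similarity):
    shift = user_samples.mean(axis=0) - state_means[observed_state]
    adapted = state_means.copy()
    for s in range(len(state_means)):
        w = 1.0 if s == observed_state else similarity[observed_state, s]
        adapted[s] = state_means[s] + w * shift
    return adapted

means = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])          # toy state means
sim = np.array([[1.0, 0.6, 0.1], [0.6, 1.0, 0.3], [0.1, 0.3, 1.0]])
user = np.array([[0.2, 0.1], [0.3, 0.0]])                        # new user's data for state 0
print(adapt_means(means, 0, user, sim))
```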

Improved Environment Recognition Algorithms for Autonomous Vehicle Control (자율주행 제어를 위한 향상된 주변환경 인식 알고리즘)

  • Bae, Inhwan;Kim, Yeounghoo;Kim, Taekyung;Oh, Minho;Ju, Hyunsu;Kim, Seulki;Shin, Gwanjun;Yoon, Sunjae;Lee, Chaejin;Lim, Yongseob;Choi, Gyeungho
    • Journal of Auto-vehicle Safety Association / v.11 no.2 / pp.35-43 / 2019
  • This paper describes improved environment recognition algorithms using sensors such as LiDAR and cameras, together with an integrated control algorithm for an autonomous vehicle. The integrated algorithm was implemented in a C++ environment and supports the stability of the overall driving control algorithms. For the improved vision algorithms, lane tracing and traffic sign recognition were operated with three cameras. Two algorithms were developed for lane tracing, Improved Lane Tracing (ILT) and Histogram Extension (HIX), and the two were combined into one algorithm, Enhanced Lane Tracing with Histogram Extension (ELIX). For traffic sign recognition, an integrated Mutual Validation Procedure (MVP) was developed using three algorithms: Cascade, Reinforced DSIFT SVM, and YOLO. Compared with the individual results, the precision of traffic sign recognition is substantially increased. With the LiDAR sensor, the work focused on static and dynamic obstacle detection and obstacle avoidance. The proposed environment recognition algorithms therefore achieve higher accuracy and faster processing than the previous ones. Moreover, by optimizing them together with the integrated control algorithm, the memory problem that caused irregular system shutdowns was prevented, and the maneuvering stability of the autonomous vehicle in severe environments was enhanced.
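
The abstract does not specify how the Mutual Validation Procedure combines the three detectors; the sketch below shows one plausible scheme in which a detection is accepted only when at least two detectors report overlapping boxes. The IoU threshold and vote count are assumptions, not the authors' MVP.

```python
# Illustrative cross-validation of traffic-sign detections from three detectors
# (e.g., Cascade, DSIFT+SVM, YOLO): a box is accepted when at least two of the
# detectors agree on an overlapping location. Boxes are (x1, y1, x2, y2).
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def mutually_validated(detections_per_detector, min_votes=2, iou_threshold=0.5):
    accepted = []
    for i, boxes in enumerate(detections_per_detector):
        for box in boxes:
            votes = 1 + sum(
                any(iou(box, other) >= iou_threshold for other in detections_per_detector[j])
                for j in range(len(detections_per_detector)) if j != i
            )
            if votes >= min_votes and box not in accepted:
                accepted.append(box)
    return accepted

cascade = [(10, 10, 50, 50)]
dsift_svm = [(12, 11, 52, 49)]
yolo = [(200, 80, 240, 120)]      # unconfirmed by the other detectors
print(mutually_validated([cascade, dsift_svm, yolo]))
```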

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung;Pan, Younghwan
    • Journal of the Ergonomics Society of Korea / v.34 no.5 / pp.411-426 / 2015
  • Objective: The goal of this study is to design an interaction structure and framework for a sign language recognition system. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. To interpret the meaning of an individual gesture correctly, an interaction structure and framework are therefore needed that can segment the individual gestures. Method: We analyzed 700 sign language words to structure the sign language gesture interaction. First, we analyzed the transformation patterns of the hand shape; second, the movement of those patterns; and third, the types of gestures other than hand gestures. Based on this analysis, we designed a framework for sign language interaction. Results: We derived 8 hand gesture patterns based on whether the gesture changes between its starting point and ending point, and then analyzed hand movement in terms of 3 elements: movement pattern, direction, and whether the movement repeats. We also defined 11 movements of non-hand gestures and classified 8 types of interaction. The framework designed on this basis applies to more than 700 individual sign language gestures and can isolate an individual gesture even within a sequence of continuous gestures. Conclusion: This study structured sign language interaction in 3 aspects: the transformation patterns of hand shape between starting and ending points, hand movement, and non-hand gestures. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when meaningful gestures arrive as part of a continuous sequence. Application: The interaction framework can be applied when developing a sign language recognition system, and the structured gestures can be used for building sign language databases, developing automatic recognition systems, and studying action gestures in other areas.
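
The structural elements enumerated above map naturally onto a small record type; the sketch below is one hypothetical encoding with placeholder labels, not the paper's actual taxonomy of 8 hand-shape patterns, 3 movement elements, and 11 non-hand movements.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical encoding of the structural elements described in the abstract:
# a hand-shape transformation pattern (8 in the paper), hand movement described
# by pattern/direction/repetition, and an optional non-hand gesture (11 in the
# paper). The concrete labels here are placeholders.
class HandShapeChange(Enum):
    NO_CHANGE = auto()
    CHANGE = auto()        # the paper distinguishes 8 patterns in total

@dataclass
class HandMovement:
    pattern: str           # e.g. "straight", "arc", "circle"
    direction: str         # e.g. "up", "down", "left", "right"
    repeating: bool

@dataclass
class SignGesture:
    shape_change: HandShapeChange
    movement: HandMovement
    non_hand_gesture: Optional[str] = None   # one of the 11 non-hand movements

example = SignGesture(HandShapeChange.CHANGE,
                      HandMovement(pattern="straight", direction="up", repeating=False))
print(example)
```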