• Title/Summary/Keyword: Sign Recognition


Design and Implementation of Data Acquisition and Storage Systems for Multi-view Points Sign Language (다시점 수어 데이터 획득 및 저장 시스템 설계 및 구현)

  • Kim, Geunmo;Kim, Bongjae
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.3
    • /
    • pp.63-68
    • /
    • 2022
  • According to the 2021 Disability Statistics Annual Report by the Korea Institute for the Development of Disabled Persons, there are 395,789 people with hearing impairment in Korea. They face considerable inconvenience in daily communication, and many studies on the recognition and translation of Korean sign language are being conducted to address this. Collecting sign language data for such research is difficult because few people use sign language professionally, and most existing data consists of sign language filmed from directly in front of the signer. To address this, this paper designs and implements a system that collects sign language data from multiple viewpoints in real time, rather than from a single viewpoint, and stores and manages it with high usability.
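
The abstract describes a capture-and-storage architecture rather than an algorithm. A minimal sketch of synchronized multi-camera acquisition with per-frame metadata storage might look like the following; the camera indices, file layout, and JSON metadata fields are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of multi-view sign-language capture and storage.
# Assumptions: three USB cameras at indices 0-2, frames stored as JPEG
# files plus a JSON index; none of this comes from the paper itself.
import cv2, json, time
from pathlib import Path

CAM_INDICES = [0, 1, 2]                      # hypothetical multi-view setup
OUT_DIR = Path("ksl_capture_session")
OUT_DIR.mkdir(exist_ok=True)

caps = [cv2.VideoCapture(i) for i in CAM_INDICES]
index = []                                   # per-frame metadata records

try:
    for frame_id in range(300):              # ~10 s at 30 fps, per camera
        ts = time.time()
        for view, cap in zip(CAM_INDICES, caps):
            ok, frame = cap.read()
            if not ok:
                continue
            fname = OUT_DIR / f"v{view}_f{frame_id:05d}.jpg"
            cv2.imwrite(str(fname), frame)
            index.append({"view": view, "frame": frame_id,
                          "timestamp": ts, "file": fname.name})
finally:
    for cap in caps:
        cap.release()
    (OUT_DIR / "index.json").write_text(json.dumps(index, indent=2))
```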

Efficient Sign Language Recognition and Classification Using African Buffalo Optimization Using Support Vector Machine System

  • Karthikeyan M. P.;Vu Cao Lam;Dac-Nhuong Le
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.8-16
    • /
    • 2024
  • Communication with the deaf has always been crucial. Through sign language, which has become a universal and highly effective tool, deaf and hard-of-hearing persons can express their thoughts and opinions to teachers, improving their education and simplifying the referral process between them and their teachers. Sign language uses various bodily movements, including those of the arms, legs, and face. Nonverbal physical communication such as pure expressiveness, proximity, and shared interest is distinct from gestures that convey a particular message, and the meanings of gestures vary with social and cultural background. Sign language recognition is a very active research area in which the SVM has proven valuable; its limitations have in turn motivated many extensions, such as SVMs for very large data sets, multi-class SVMs, and SVMs for unbalanced data sets. Without precise identification of the signs, the right measures cannot be applied when they are needed. Image processing is one of the methods frequently used to identify and categorize sign languages. In this work, African Buffalo Optimization with a Support Vector Machine (ABO+SVM) is used to identify and classify people's sign language. K-means clustering first segments the sign region, after which color and texture features are extracted. The accuracy, sensitivity, precision, specificity, and F1-score of the proposed ABO+SVM system are validated against the existing classifiers SVM, CNN, and PSO+ANN.
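
The pipeline sketched below (K-means segmentation, simple color/texture features, an SVM with tuned hyperparameters) follows the steps named in the abstract, but the feature choices are assumptions and the African Buffalo Optimization step is replaced by a plain random search purely as a labeled placeholder, not the authors' method.

```python
# Sketch of the segment -> feature -> tuned-SVM flow; the African Buffalo
# Optimization step is stood in for by a random search (placeholder only).
import numpy as np
import cv2
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sign_region(img_bgr, k=2):
    """Isolate the sign/hand region with K-means on pixel colors."""
    pixels = img_bgr.reshape(-1, 3).astype(np.float32)
    _, labels, centers = cv2.kmeans(
        pixels, k, None,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0),
        3, cv2.KMEANS_PP_CENTERS)
    # assume the brighter cluster is the hand (assumption, not from the paper)
    hand = int(np.argmax(centers.sum(axis=1)))
    return (labels.reshape(img_bgr.shape[:2]) == hand).astype(np.uint8)

def color_texture_features(img_bgr, mask):
    """Mean HSV color plus mean gradient magnitudes inside the sign region."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    color = [hsv[..., c][mask == 1].mean() for c in range(3)]
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    texture = [np.abs(gx)[mask == 1].mean(), np.abs(gy)[mask == 1].mean()]
    return np.array(color + texture)

def tune_svm(X, y, n_trials=30, rng=np.random.default_rng(0)):
    """Placeholder for ABO: random search over (C, gamma) by cross-validation."""
    best, best_score = None, -np.inf
    for _ in range(n_trials):
        C, gamma = 10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-4, 0)
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
        if score > best_score:
            best, best_score = SVC(C=C, gamma=gamma), score
    return best.fit(X, y)
```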

On-line dynamic hand gesture recognition system for the korean sign language (KSL) (한글 수화용 동적 손 제스처의 실시간 인식 시스템의 구현에 관한 연구)

  • Kim, Jong-Sung;Lee, Chan-Su;Jang, Won;Bien, Zeungnam
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.2
    • /
    • pp.61-70
    • /
    • 1997
  • Human hand gestures have long been used as a means of communication among people, interpreted as streams of tokens of a language. Signed language is a method of communication for hearing-impaired persons, in which articulated gestures and postures of the hands and fingers are commonly used. This paper presents a system that recognizes Korean Sign Language (KSL) and translates the recognition results into normal Korean text and sound. A pair of data gloves is used as the sensing device for detecting the motions of the hands and fingers. We propose a dynamic gesture recognition method that employs fuzzy feature analysis for efficient classification of hand motions and applies a fuzzy min-max neural network to on-line pattern recognition.

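A central piece of a fuzzy min-max classifier is the hyperbox membership function. The sketch below follows Simpson's commonly cited formulation as an illustration; the sensitivity parameter and the toy hyperboxes are assumptions, not values from the paper.

```python
# Sketch of a fuzzy min-max hyperbox membership (Simpson-style), as used in
# fuzzy min-max neural networks; gamma and the example hyperboxes are assumed.
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Degree to which point x (in [0,1]^n) belongs to the hyperbox [v, w]."""
    below = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - x)))
    above = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, x - w)))
    return ((below + above) / 2).mean()     # 1.0 inside the box, decaying outside

def classify(x, boxes):
    """boxes: list of (v, w, label); pick the label of the best-matching box."""
    scores = {}
    for v, w, label in boxes:
        m = hyperbox_membership(x, v, w)
        scores[label] = max(scores.get(label, 0.0), m)
    return max(scores, key=scores.get)

# toy usage with two hand-shape classes in a 2-D normalized feature space
boxes = [(np.array([0.1, 0.1]), np.array([0.3, 0.4]), "open_hand"),
         (np.array([0.6, 0.5]), np.array([0.9, 0.9]), "fist")]
print(classify(np.array([0.25, 0.35]), boxes))   # -> open_hand
```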

An Illumination Invariant Traffic Sign Recognition in the Driving Environment for Intelligence Vehicles (지능형 자동차를 위한 조명 변화에 강인한 도로표지판 검출 및 인식)

  • Lee, Taewoo;Lim, Kwangyong;Bae, Guntae;Byun, Hyeran;Choi, Yeongwoo
    • Journal of KIISE
    • /
    • v.42 no.2
    • /
    • pp.203-212
    • /
    • 2015
  • This paper proposes a traffic sign recognition method for real road environments. Video streams captured while driving differ from general object video streams in two ways. First, the number of traffic sign types is limited and their shapes are mostly simple. Second, the camera cannot capture clear images of road scenes because illumination and weather conditions change continuously. We improve the modified census transform (MCT) to extract features effectively from road scenes with many illumination changes. The extracted features are collected into histograms and transformed by dense descriptors into very high-dimensional vectors, which are then encoded into a low-dimensional feature vector by Fisher-vector coding with a Gaussian mixture model. The proposed method is invariant to illumination in both detection and recognition, and its performance is sufficient to detect and recognize traffic signs in real time with high accuracy.
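
The illumination-robust step named here, the modified census transform, compares each pixel of a 3×3 neighborhood against the neighborhood mean rather than the center pixel, yielding a 9-bit code per position. A compact sketch of that transform is given below; the later histogram and Fisher-vector/GMM encoding stages are omitted.

```python
# Sketch of the modified census transform (MCT): each pixel of a 3x3
# neighborhood is compared to the neighborhood mean, giving a 9-bit code.
import numpy as np

def modified_census_transform(gray):
    """gray: 2-D uint8/float array; returns an array of 9-bit MCT codes."""
    g = gray.astype(np.float32)
    h, w = g.shape
    # collect the nine shifted views of the 3x3 neighborhood
    patches = [g[dy:h - 2 + dy, dx:w - 2 + dx]
               for dy in range(3) for dx in range(3)]
    mean = sum(patches) / 9.0
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, p in enumerate(patches):
        codes += (p > mean).astype(np.int32) << bit
    return codes   # values in [0, 511]; histograms of these feed the descriptor
```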

Two-Stage Neural Networks for Sign Language Pattern Recognition (수화 패턴 인식을 위한 2단계 신경망 모델)

  • Kim, Ho-Joon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.3
    • /
    • pp.319-327
    • /
    • 2012
  • In this paper, we present a sign language recognition model that does not use any wearable devices for object tracking. System design and implementation issues such as data representation, feature extraction, and pattern classification methods are discussed. The proposed data representation for sign language patterns is robust to spatio-temporal variances of feature points. We present a feature extraction technique that improves computation speed by reducing the amount of feature data. A neural network model capable of incremental learning is described, and its behavior and learning algorithm are introduced. We define a measure that reflects the relevance between feature values and pattern classes, which makes it possible to select more effective features without any degradation of performance. Through experiments on six types of sign language patterns, the proposed model is evaluated empirically.
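
The abstract does not spell out the relevance measure it defines, so the sketch below uses a Fisher-score-style ratio of between-class to within-class variance purely as an illustrative stand-in for ranking features before training the classifier.

```python
# Illustrative stand-in for a feature-class relevance measure: a per-feature
# Fisher score (not necessarily the measure defined in the paper).
import numpy as np

def fisher_scores(X, y):
    """X: (n_samples, n_features), y: class labels; higher = more relevant."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between, within = np.zeros(X.shape[1]), np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

def select_top_k(X, y, k):
    """Keep only the k most relevant features before the two-stage classifier."""
    idx = np.argsort(fisher_scores(X, y))[::-1][:k]
    return X[:, idx], idx
```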

A Speech Representation and Recognition Method using Sign Patterns (부호패턴에 의한 음성표현과 인식방법)

  • Kim Young Hwa;Kim Un Il;Lee Hee Jeong;Park Byung Chul
    • The Journal of the Acoustical Society of Korea
    • /
    • v.8 no.5
    • /
    • pp.86-94
    • /
    • 1989
  • This paper proposes a method that uses the sign pattern (+, -) of mel-cepstrum coefficients as a new speech representation. Relatively stable patterns can be obtained for speech signals with strong stationarity, such as vowels and nasals, and differences due to speaker individuality can be absorbed without affecting the characteristics of the phoneme. We show that representing Korean phonemes with such sign patterns simplifies both the phoneme recognition procedure and the training of phoneme models.

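The representation described, the sign (+/-) of each mel-cepstral coefficient per frame, can be sketched as follows. Librosa's MFCC is used here only as a convenient stand-in for the paper's mel-cepstrum computation, and the coefficient count is an assumption.

```python
# Sketch: reduce each speech frame to the signs (+/-) of its mel-cepstral
# coefficients; librosa's MFCC stands in for the paper's mel-cepstrum.
import numpy as np
import librosa

def sign_pattern(wav_path, n_coeffs=12):
    """Return a (n_coeffs, n_frames) array of 1 for '+' and 0 for '-'."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_coeffs + 1)[1:]  # drop c0
    return (mfcc > 0).astype(np.int8)

def pattern_distance(p1, p2):
    """Hamming-style distance between two sign patterns of equal frame count."""
    return np.mean(p1 != p2)
```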

Real Time Recognition of Finger-Language Using Color Information and Fuzzy Clustering Algorithm

  • Kim, Kwang-Baek;Song, Doo-Heon;Woo, Young-Woon
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.1
    • /
    • pp.19-22
    • /
    • 2010
  • Finger language, which helps hearing-impaired people communicate, is not familiar to most hearing people. In this paper, we propose a method for real-time sign language recognition from a vision system using color information and a fuzzy clustering algorithm. We use the YCbCr color model and a Canny mask to locate the hands and their boundary lines. After extracting the regions of the two hands with an 8-directional contour tracking algorithm and morphological information, the system uses FCM to classify the sign language signals. Experiments show that the proposed method is sufficiently efficient.
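
A rough sketch of the flow this abstract names, YCbCr-based skin thresholding followed by fuzzy c-means clustering, is given below; the skin-color thresholds and the use of a minimal NumPy FCM are assumptions rather than the paper's exact settings.

```python
# Sketch: YCbCr skin thresholding to find hand pixels, then a minimal fuzzy
# c-means over hand-shape features; thresholds are assumed, not the paper's.
import numpy as np
import cv2

def hand_mask(img_bgr):
    # OpenCV's constant uses the YCrCb channel order for the YCbCr space
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    # commonly used Cr/Cb skin range (an assumption, not the paper's values)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def fuzzy_c_means(X, c, m=2.0, iters=100, rng=np.random.default_rng(0)):
    """X: (n_samples, n_features). Returns memberships U (n_samples, c), centers."""
    U = rng.dirichlet(np.ones(c), size=len(X))          # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                  # closer -> higher membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```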

Numeric Sign Language Interpreting Algorithm Based on Hand Image Processing (영상처리 기반 숫자 수화표현 인식 알고리즘)

  • Gwon, Kyungpil;Yoo, Joonhyuk
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.14 no.3
    • /
    • pp.133-142
    • /
    • 2019
  • Existing communication aids for the hearing-impaired have the drawback of requiring additional, expensive sensing devices. This paper presents a hand-image-based algorithm for interpreting the sign language of the hearing-impaired. The proposed recognition system uses only hand images captured by a camera, without gloves or extra sensors. Based on hand image processing, the system classifies several numeric sign language representations. We propose a simple, lightweight classification algorithm that identifies the hand even against a complex background. Experimental results show that the proposed system interprets numeric sign language well, with an average accuracy of 95.6%.
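
The abstract does not detail the lightweight classifier itself; the sketch below counts extended fingers from the convexity defects of the largest skin-colored contour as one plausible camera-only approach to numeric signs. It is a stand-in for illustration, not the authors' algorithm, and its thresholds are assumptions.

```python
# Hedged stand-in: count extended fingers from convexity defects of the
# largest skin-colored contour; not the paper's actual classifier.
import numpy as np
import cv2

def count_fingers(img_bgr):
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # assumed skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # each deep defect corresponds to a gap between two extended fingers
    deep = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20)       # depth threshold in pixels
    return deep + 1 if deep > 0 else 0
```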

A Study of Effective Method to Update the Database for Road Traffic Facilities Using Digital Image Processing and Pattern Recognition (수치영상처리 및 패턴 인식에 의한 도로교통시설물 DB의 효율적 갱신방안 연구)

  • Choi, Joon-Seog;Kang, Joon-Mook
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.20 no.2
    • /
    • pp.31-37
    • /
    • 2012
  • Owing to continued road construction and expansion, the road traffic facilities database must be updated more every year, and as the numbers of drivers and vehicles grow, safety signs require continuous management and additional installation. To update the safety sign database promptly, we developed an automatic safety sign recognition function and analyzed coordinate accuracy. The purpose of this study is to propose efficient methods for updating road traffic facility data. To this end, an omni-directional camera was calibrated for the acquisition of 3-dimensional coordinates, an integrated GPS/IMU/DMI system was used, and image processing was applied. Based on the experiment, we propose an effective method to update the road traffic facilities database for the digital map.
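
As a small illustration of the georeferencing step implied by the GPS/IMU/DMI integration, the sketch below rotates a sign's offset measured in the vehicle frame by the IMU heading and adds it to the GPS-derived position in a local east-north frame. This is a simplified flat-earth model for illustration, not the paper's full calibration pipeline.

```python
# Simplified flat-earth georeferencing of a detected road sign from the
# vehicle frame into local east-north coordinates; not the paper's pipeline.
import numpy as np

def sign_position_en(gps_east, gps_north, heading_deg, sign_xy_vehicle):
    """sign_xy_vehicle: (forward, left) offset of the sign in metres."""
    forward, left = sign_xy_vehicle
    h = np.deg2rad(heading_deg)            # heading measured clockwise from north
    east  = gps_east  + forward * np.sin(h) - left * np.cos(h)
    north = gps_north + forward * np.cos(h) + left * np.sin(h)
    return east, north

# toy usage: vehicle heading due east, sign 12 m ahead and 3 m to the left
print(sign_position_en(200000.0, 450000.0, 90.0, (12.0, 3.0)))
```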

A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.4
    • /
    • pp.674-679
    • /
    • 2006
  • This paper describes a gesture recognition system that uses 3D feature data. The system relies on a novel 3D sensor that generates a dense range image of the scene. Its main novelty, compared with other 3D gesture recognition techniques, is robust recognition of complex hand postures such as those found in sign language alphabets, achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information and guarantees robust segmentation of the hand under various illumination conditions and scene contents. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving recognition of sign-language postures.
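
A minimal sketch of the depth-driven part of such a pipeline, segmenting the hand as the nearest connected blob in a range image and computing a few simple 3-D shape features, is shown below. The depth slab, camera intrinsics, and feature choices are assumptions, not the paper's values.

```python
# Sketch: segment the hand as the nearest blob in a range (depth) image and
# compute simple 3-D shape features; thresholds and intrinsics are assumed.
import numpy as np
import cv2

def segment_hand(depth_mm, slab=120):
    """Keep pixels within `slab` mm of the closest valid depth, largest blob only."""
    near = depth_mm[depth_mm > 0].min()
    mask = ((depth_mm > 0) & (depth_mm < near + slab)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return mask
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == biggest).astype(np.uint8)

def hand_features_3d(depth_mm, mask, fx=575.0, fy=575.0):
    """Back-project masked pixels to 3-D and summarize shape by principal extents."""
    v, u = np.nonzero(mask)
    z = depth_mm[v, u].astype(np.float32)
    x = (u - depth_mm.shape[1] / 2) * z / fx
    y = (v - depth_mm.shape[0] / 2) * z / fy
    pts = np.stack([x, y, z], axis=1)
    pts -= pts.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(pts.T))   # principal extents of the hand cloud
    return np.sqrt(np.maximum(eigvals, 0))[::-1]  # largest first
```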