• Title/Summary/Keyword: hand signal recognition

Search results: 55

Speech Activity Detection using Lip Movement Image Signals (입술 움직임 영상 신호를 이용한 음성 구간 검출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Convergence Signal Processing / v.11 no.4 / pp.289-297 / 2010
  • This paper presents a method for preventing external acoustic noise from being misrecognized as a speech recognition target during the speech activity detection stage of speech recognition. In addition to the acoustic energy, the lip movement image signal is checked. First, successive images are captured with a PC camera and the presence or absence of lip movement is discriminated. The lip movement image signal data are then stored in shared memory and shared with the speech recognition process. During speech activity detection, the preprocessing phase of speech recognition, whether the acoustic energy originates from the speaker's utterance is verified by checking the data stored in the shared memory. As an experimental result of linking the speech recognition process and the image process, the speech recognition result is output normally when the speaker faces the camera while speaking, and is not output when the speaker speaks without facing the camera. In addition, the initial feature values obtained off-line are replaced with values obtained on-line; likewise, the initial template image captured off-line is replaced with a template image captured on-line, which improves the discrimination of lip movement image tracking. An image processing test bed was implemented to visually confirm the lip movement tracking process and to analyze the related parameters in real time. When the speech and image processing systems were linked, the interworking rate was 99.3% under various illumination environments.
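As context for the gating rule this abstract describes, here is a minimal Python sketch: an audio frame is accepted as speech only when both the acoustic energy and a lip-movement flag written to shared memory by the image process are present. The energy threshold, frame size, and shared-memory name "lip_flag" are illustrative assumptions, not details from the paper.

```python
import numpy as np
from multiprocessing import shared_memory

# One-byte flag written by the (separate) image process:
# 1 = lip movement detected in the current frame, 0 = none.
# The name "lip_flag" is an illustrative assumption.
flag_shm = shared_memory.SharedMemory(create=True, size=1, name="lip_flag")
flag_shm.buf[0] = 0

def frame_energy(samples: np.ndarray) -> float:
    """Short-time acoustic energy of one audio frame."""
    return float(np.mean(samples.astype(np.float64) ** 2))

def speech_detected(samples: np.ndarray, energy_threshold: float = 1e-3) -> bool:
    """Accept a frame as speech only when the acoustic energy is high AND the
    image process reports lip movement; otherwise treat it as external noise."""
    lip_moving = flag_shm.buf[0] == 1
    return frame_energy(samples) > energy_threshold and lip_moving

# Example: acoustic energy alone does not trigger detection without lip movement.
noise = 0.1 * np.random.randn(160)
print(speech_detected(noise))   # False while lip_flag == 0

flag_shm.close()
flag_shm.unlink()
```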

Fault Diagnosis Method for Automatic Machine Using Artificial Neural Network Based on DWT Power Spectral Density (인공신경망을 이용한 DWT 전력스펙트럼 밀도 기반 자동화 기계 고장 진단 기법)

  • Kang, Kyung-Won
    • Journal of the Institute of Convergence Signal Processing / v.20 no.2 / pp.78-83 / 2019
  • Sound-based machine fault diagnosis covers studies that aim to automatically detect abnormal sounds in machines from their acoustic emissions. Conventional methods that use mathematical models have proven inaccurate because of the complexity of industrial machinery and the presence of nonlinear factors such as noise. Any fault diagnosis issue can therefore be treated as a pattern recognition problem. We propose an automatic fault diagnosis method for hand drills using the discrete wavelet transform (DWT) and pattern recognition techniques such as artificial neural networks (ANN). We first conduct a filtering analysis based on the DWT. The power spectral density (PSD) is then computed for each wavelet subband except the highest- and lowest-frequency subbands. The PSDs of the wavelet coefficients are extracted as features for an ANN-based classifier in the pattern recognition stage. The results show that the proposed method can be used effectively not only to detect defects but also in various sound-based automatic diagnosis systems.
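A minimal sketch of the DWT/PSD/ANN pipeline this abstract outlines, assuming PyWavelets, SciPy, and scikit-learn are available; the wavelet ('db4'), decomposition level, window length, and network size are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
import pywt                                   # PyWavelets
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def dwt_psd_features(x, wavelet="db4", level=4, fs=16000):
    """DWT the drill sound and summarize each retained subband by the mean of
    its Welch power spectral density (one log-PSD value per subband)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    feats = []
    for c in coeffs[1:-1]:                    # skip the lowest and highest subbands
        _, pxx = welch(c, fs=fs, nperseg=min(256, len(c)))
        feats.append(np.log(np.mean(pxx) + 1e-12))
    return np.array(feats)

# Toy stand-in data: X_raw would hold recorded drill sounds, y the fault labels.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((20, 4096))
y = rng.integers(0, 2, 20)
X = np.vstack([dwt_psd_features(x) for x in X_raw])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))
```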

Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee;Chotikakamthorn, Nopporn;Werapan, Worawit
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1106-1110 / 2004
  • A human sign language generally comprises both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and hand movement (for a dynamic gesture). One problem in automated sign language translation is segmenting a hand movement that is part of a transitional movement from one hand gesture to another. This transitional gesture conveys no meaning but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures appearing in the Thai sign language dictionary are quasi-periodic in nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic and cannot be distinguished from a transitional movement using signal quasi-periodicity. This paper proposes a hybrid method combining the periodicity-based gesture segmentation method with an HMM-based gesture classifier. The HMM classifier is used to detect dynamic signs of a non-periodic nature. Combined with the periodicity-based gesture segmentation method, this hybrid scheme can identify segments of a transitional movement. In addition, because the quasi-periodic nature of many dynamic sign gestures is exploited, the dimensionality of the HMM part of the proposed method is significantly reduced, resulting in computational savings compared with a standard HMM-based method. The method's recognition performance is reported through experiments with real measurements.
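The quasi-periodicity test at the core of this hybrid scheme can be sketched with a simple autocorrelation measure on a hand-trajectory signal; the scoring rule and threshold below are illustrative assumptions, and the HMM stage for non-periodic gestures is not reproduced.

```python
import numpy as np

def periodicity_score(traj: np.ndarray) -> float:
    """Quasi-periodicity of a 1-D hand-trajectory signal (e.g. the x coordinate
    of the hand centroid over a gesture window): the strongest normalized
    autocorrelation value found beyond the first zero crossing."""
    x = traj - traj.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    if ac[0] <= 0:
        return 0.0
    ac = ac / ac[0]
    below = np.where(ac < 0)[0]
    if len(below) == 0:                 # the signal never "repeats" in the window
        return 0.0
    return float(ac[below[0]:].max())

def is_periodic_gesture(traj: np.ndarray, threshold: float = 0.5) -> bool:
    """High scores indicate a repetitive (meaningful) dynamic gesture; low-score
    segments are passed on to the HMM classifier, which separates non-periodic
    gestures from transitional movements."""
    return periodicity_score(traj) > threshold

t = np.linspace(0, 8 * np.pi, 240)
print(is_periodic_gesture(np.sin(t)))                   # True: repetitive movement
print(is_periodic_gesture(np.linspace(0.0, 1.0, 240)))  # False: monotonic transition
```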


Combining Object Detection and Hand Gesture Recognition for Automatic Lighting System Control

  • Pham, Giao N.;Nguyen, Phong H.;Kwon, Ki-Ryong
    • Journal of Multimedia Information System / v.6 no.4 / pp.329-332 / 2019
  • Smart lighting systems combine sensors and lights: they turn lights on and off and adjust brightness based on the motion of objects and the brightness of the environment, and they are often deployed in buildings, rooms, garages, and parking lots. Such systems are typically controlled by lighting sensors and motion sensors that respond to illumination levels and motion detection. In this paper, we propose an automatic lighting control system for buildings, rooms, and garages that uses a single camera. The proposed system integrates the results of digital image processing, namely motion detection and hand gesture detection, to control and dim the lighting. The experimental results show that the proposed system works well and could be considered for automatic lighting applications.
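A rough sketch of the motion-detection half of such a single-camera system, using OpenCV frame differencing; the pixel and area thresholds are illustrative assumptions, and the hand-gesture classifier is only indicated by a comment.

```python
import cv2
import numpy as np

def motion_detected(prev_gray, gray, pixel_thresh=25, area_ratio=0.01):
    """Frame-differencing motion detector: report motion when enough pixels
    changed between consecutive grayscale frames."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    return np.count_nonzero(mask) > area_ratio * mask.size

cap = cv2.VideoCapture(0)                       # the single camera of the system
ok, frame = cap.read()
if not ok:
    raise RuntimeError("camera not available")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if motion_detected(prev_gray, gray):
        print("motion -> lights on")            # replace with the lighting command
    # A hand-gesture detector would run here to dim or adjust the brightness.
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```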

Emergency Traffic Hand Sign Recognition System for Autonomous Driving (자율주행 시대를 대비한 긴급 교통 수신호 인식 시스템)

  • Kwak, Young-Tae;Choi, Dae-Won;Song, Min-Ji
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.677-678 / 2020
  • This study aims to enable external control of vehicles in the era of autonomous driving. Traffic police hand signals are used to control an autonomous vehicle from the outside. Considering the specific conditions of traffic, a YOLO model capable of real-time object detection was adopted; hand-signal training data were secured using data augmentation, and the YOLO model was trained on these data. The trained YOLO model was then used to detect traffic controllers in real time in the traffic flow. From the detected objects, an object verification algorithm and a hand-signal interpretation algorithm determine the meaning of the hand signal and deliver it to the user. Such a system has the advantage that, when an unexpected situation occurs, an autonomous vehicle can restore the normal flow of traffic more accurately and quickly.
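A structural sketch of the pipeline this abstract describes (detection, object verification, hand-signal interpretation). Both functions below are hypothetical placeholders standing in for the trained YOLO detector and the interpretation algorithm; only the control flow is illustrated.

```python
import numpy as np

def detect_traffic_controller(frame):
    """Placeholder for the YOLO detector: returns bounding boxes (x, y, w, h)
    for people giving traffic hand signals. A real system would load trained
    YOLO weights here; the dummy box below is for illustration only."""
    return [(100, 80, 60, 120)]

def interpret_hand_signal(crop):
    """Placeholder for the hand-signal interpretation step that maps a detected
    controller crop to a command such as 'stop' or 'go'."""
    return "stop"

frame = np.zeros((480, 640, 3), dtype=np.uint8)    # stand-in camera frame
for (x, y, w, h) in detect_traffic_controller(frame):
    crop = frame[y:y + h, x:x + w]
    command = interpret_hand_signal(crop)
    print("hand signal ->", command)               # passed on to the vehicle
```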


Biosign Recognition based on the Soft Computing Techniques with application to a Rehab-type Robot

  • Lee, Ju-Jang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.29.2-29 / 2001
  • For the design of human-centered systems in which a human and a machine such as a robot form a human-in-the-loop system, human-friendly interaction and interfaces are essential. Human-friendly interaction is possible when the system can recognize human biosigns such as EMG signals, hand gestures, and facial expressions, so that human intention and/or emotion can be inferred and used as a proper feedback signal. In this talk, we report our experience applying soft computing techniques, including fuzzy logic, ANN, GA, and rough set theory, to recognize various biosigns efficiently and to perform effective inference. More specifically, we first observe the characteristics of various forms of biosigns and propose a new way of extracting feature sets for such signals. We then show a standardized procedure for inferring an intention or emotion from the signals. Finally, we present application examples for our rehabilitation robot.


Sensor-based Recognition of Human's Hand Motion for Control of a Robotic Hand (로봇 핸드 제어를 위한 센서 기반 손 동작 인식)

  • Hwang, Myun Joong
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.9 / pp.5440-5445 / 2014
  • Many studies have examined robot control using human bio-signals, but these require complicated signal processing and expensive hardware. This study proposes a method for recognizing human hand motion using a low-cost EMG sensor and a Flex sensor. Movements of the hand and fingers are classified from the change in output voltage measured through an MCU. The analog reference voltage was set to 3.3 V, determined experimentally to increase the resolution of movement identification. A robotic hand was designed to reproduce the identified movements; it has four fingers and a wrist, controlled by pneumatic cylinders and a DC servo motor, respectively. The results show that the proposed simple method can reproduce human hand motion in a remote environment using the fabricated robotic hand.
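A minimal sketch of the threshold-based recognition idea described here: each Flex-sensor voltage read by the MCU is compared against a per-finger threshold, gated by the EMG level. All voltage values below are illustrative assumptions, not the experimentally determined settings from the paper.

```python
# Thresholds are illustrative assumptions (the paper uses a 3.3 V analog reference).
FLEX_THRESHOLD_V = 1.8      # above this, a finger counts as flexed
EMG_THRESHOLD_V = 0.9       # above this, the hand is considered active

def hand_state(emg_v, flex_v):
    """Per-finger flexed/extended decision sent to the robotic hand: the EMG
    level gates activity, and each Flex-sensor voltage is thresholded."""
    if emg_v < EMG_THRESHOLD_V:
        return [False] * len(flex_v)            # hand relaxed: open all fingers
    return [v > FLEX_THRESHOLD_V for v in flex_v]

print(hand_state(1.2, [2.5, 2.4, 0.7, 0.6]))    # index and middle fingers flexed
```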

EF Sensor-Based Hand Motion Detection and Automatic Frame Extraction (EF 센서기반 손동작 신호 감지 및 자동 프레임 추출)

  • Lee, Hummin;Jung, Sunil;Kim, Youngchul
    • Smart Media Journal / v.9 no.4 / pp.102-108 / 2020
  • In this paper, we propose a real-time method for detecting hand motions and extracting the signal frame induced in EF (electric field) sensors. The signal induced by hand motion includes noise from various environmental sources and from the sensors' physical placement, as well as different initial offset conditions, so detecting the motion signal and extracting the motion frame automatically in real time has been considered a challenging problem. In this study, we remove the power-line noise (PLN) using an LPF with a 10 Hz cut-off and then apply a moving-average (MA) filter to obtain clean, smooth input motion signals. To sense a hand motion, we use two thresholds (positive and negative) with an offset value to detect both the starting and the ending moment of the motion. Using this approach, we achieve a correct motion detection rate of over 98%. Once the final motion frame is determined, the motion signals are normalized for use in a subsequent classification or recognition stage such as an LSTM deep neural network. Our experiments and analysis show that the proposed methods achieve better than 98% performance in both correct motion detection rate and frame-matching rate.
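A small sketch of the preprocessing and dual-threshold frame extraction described in this abstract, using SciPy; the sampling rate, filter order, moving-average length, and threshold values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(x, fs=250, cutoff=10.0, ma_len=5):
    """10 Hz low-pass to suppress power-line noise, then a moving-average filter
    for smoothing, as in the abstract; fs and ma_len are assumed values."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    y = filtfilt(b, a, x)
    return np.convolve(y, np.ones(ma_len) / ma_len, mode="same")

def extract_frame(y, pos_th=0.2, neg_th=-0.2):
    """Dual-threshold frame extraction: the motion frame starts at the first
    sample beyond either threshold and ends at the last such sample."""
    active = np.where((y > pos_th) | (y < neg_th))[0]
    if len(active) == 0:
        return None
    return active[0], active[-1]

fs = 250
t = np.arange(0, 2, 1 / fs)
sig = 0.05 * np.sin(2 * np.pi * 60 * t)         # power-line interference
sig[200:320] += np.hanning(120)                 # a hand-motion bump
start, end = extract_frame(preprocess(sig, fs))
print(start, end)                               # roughly 200 .. 320
```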

Speech Activity Decision with Lip Movement Image Signals (입술움직임 영상신호를 고려한 음성존재 검출)

  • Park, Jun;Lee, Young-Jik;Kim, Eung-Kyeu;Lee, Soo-Jong
    • The Journal of the Acoustical Society of Korea / v.26 no.1 / pp.25-31 / 2007
  • This paper describes an attempt to prevent external acoustic noise from being misrecognized as a speech recognition target. To this end, the speech activity detection process for speech recognition checks the lip movement image signal of the speaker in addition to the acoustic energy. First, successive images are obtained through a PC camera and the presence or absence of lip movement is discriminated. The lip movement image signal data are stored in shared memory and shared with the recognition process. Meanwhile, in the speech activity detection process, the preprocessing phase of speech recognition, whether the acoustic energy comes from the speaker's speech is verified by checking the data stored in the shared memory. The speech recognition process and the image process were connected and tested successfully: the speech recognition result is output normally when the speaker faces the camera while speaking, and is not output when the speaker speaks without facing the camera. That is, if the lip movement image is not identified even though acoustic energy is input, the input is regarded as acoustic noise.

A Wavelet-Based EMG Pattern Recognition with Nonlinear Feature Projection (비선형 특징투영 기법을 이용한 웨이블렛 기반 근전도 패턴인식)

  • Chu Jun-Uk;Moon Inhyuk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.2 s.302 / pp.39-48 / 2005
  • This paper proposes a novel approach to recognizing nine kinds of motion for a multifunction myoelectric hand, acquiring four-channel EMG signals from electrodes placed on the forearm. To analyze the EMG, which has the properties of a nonstationary signal, time-frequency features are extracted by the wavelet packet transform. For dimensionality reduction and nonlinear mapping of the features, we also propose a feature projection composed of PCA and SOFM. The dimensionality reduction by PCA simplifies the structure of the classifier and reduces the processing time for pattern recognition. The nonlinear mapping by SOFM transforms the PCA-reduced features into a new feature space with high class separability. Finally, a multilayer neural network is employed as the pattern classifier. Experimental results show that the proposed method enhances recognition accuracy and makes real-time pattern recognition possible.
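A sketch of the feature extraction and classification stages this abstract outlines, assuming PyWavelets and scikit-learn; the wavelet, packet level, window length, and network size are assumptions, and the SOFM projection stage is omitted here, so the PCA output feeds the MLP directly.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def wpt_features(x, wavelet="db4", level=3):
    """Wavelet packet transform of one EMG channel; each terminal node is
    summarized by its log energy (one time-frequency feature per subband)."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.log(np.sum(n.data ** 2) + 1e-12) for n in nodes])

# Toy data standing in for 4-channel EMG windows with 9 motion classes.
rng = np.random.default_rng(1)
X_raw = rng.standard_normal((90, 4, 512))
y = np.repeat(np.arange(9), 10)
X = np.vstack([np.concatenate([wpt_features(ch) for ch in trial])
               for trial in X_raw])

# PCA for dimensionality reduction; the paper then applies SOFM for a nonlinear
# projection, which this sketch omits, before the MLP classifier.
X_pca = PCA(n_components=8).fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_pca, y)
print(clf.score(X_pca, y))
```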