• Title/Summary/Keyword: Human motion detection


Integrated 3D Skin Color Model for Robust Skin Color Detection of Various Races (강건한 다인종 얼굴 검출을 위한 통합 3D 피부색 모델)

  • Park, Gyeong-Mi;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.5
    • /
    • pp.1-12
    • /
    • 2009
  • The correct detection of skin color is an important preliminary step in face detection and human motion analysis. It is generally performed in three steps: transforming the pixel color to a non-RGB color space, dropping the illuminance component of the skin color, and classifying the pixels with a skin color distribution model. Skin detection depends on various factors such as the color space, the presence of illumination, and the skin modeling method. In this paper we propose a 3D skin color model that can segment pixels of several ethnic skin colors from images with various illumination conditions and complicated backgrounds. The proposed skin color model is formed from the components (Y, Cb, Cr) obtained by transforming each pixel to the YCbCr color space. In order to segment the skin colors of several ethnic groups together, we first create a skin color model for each ethnic group and then merge the models using their skin color probabilities. Furthermore, the proposed model defines several levels of skin color areas, which helps classify the proper skin color areas using only a small amount of training data.
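The three-step pipeline the abstract describes (color-space transform, dropping the illuminance component, distribution-based classification) can be sketched as follows. This is a minimal illustration: the BT.601 RGB-to-YCbCr transform is standard, but the fixed Cb/Cr box is an assumed stand-in for the paper's merged multi-ethnic 3D model.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert an RGB pixel (0-255 per channel) to YCbCr (ITU-R BT.601)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its chrominance falls inside a fixed
    Cb/Cr box; Y is computed but not thresholded, mirroring the idea of
    dropping the illuminance component before classification."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

A learned model would replace the fixed box with per-ethnic-group distributions merged by their skin color probabilities, as the paper proposes.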

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.53-60
    • /
    • 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue in it. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that a facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and the Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationship among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
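The Kalman-filter step for a single tracked feature coordinate can be illustrated with a constant-velocity model. The noise parameters below are illustrative assumptions; the pupil constraint and Gabor-space matching are not reproduced here.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter for a
    single feature coordinate: state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # only the position is observed
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    # Predict the next state from the smooth-motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the measured feature position z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Each tracked feature would run one such filter per coordinate; the predicted position then seeds the local search for the feature in the next frame.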


Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식;정연숙;손동주;나상동;배철수
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.603-607
    • /
    • 2004


Moving Object Classification through Fusion of Shape and Motion Information (형상 정보와 모션 정보 융합을 통한 움직이는 물체 인식)

  • Kim Jung-Ho;Ko Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.5 s.311
    • /
    • pp.38-47
    • /
    • 2006
  • Conventional classification methods use a single classifier based on shape or motion features. However, this approach exhibits a weakness if naively used, since the classification performance is highly sensitive to the accuracy of the detected moving region, which in turn depends on the condition of the image background. In this paper, we propose to resolve this drawback and strengthen classification reliability by employing Bayesian decision fusion to optimally combine the decisions of three classifiers. The first classifier is based on shape information obtained from Fourier descriptors, the second on shape information obtained from image gradients, and the third on motion information. Our experimental results on classifying humans and vehicles with a static camera in various directions confirm a significant improvement and indicate the superiority of the proposed decision fusion method over the conventional majority voting and weighted average score approaches.
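One common way to realize Bayesian decision fusion of several classifiers is the naive (conditional-independence) product rule sketched below. The paper's exact fusion rule may differ, so treat this as an assumed illustration.

```python
def bayes_fuse(posteriors, priors=(0.5, 0.5)):
    """Fuse per-classifier class posteriors under a conditional-independence
    assumption: the fused score of each class is the prior times the product
    of the classifier posteriors (each divided once by the prior), then the
    scores are renormalized to sum to one."""
    fused = []
    for c, prior in enumerate(priors):
        score = prior
        for p in posteriors:
            score *= p[c] / prior
        fused.append(score)
    total = sum(fused)
    return [s / total for s in fused]
```

For example, three classifiers voting [0.6, 0.4], [0.7, 0.3], [0.4, 0.6] over (human, vehicle) fuse to a single posterior, letting a confident classifier outweigh a marginal one, unlike plain majority voting.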

Development of an IoB-Based HW/SW Platform for Human Motion Detection and Heart Rate Measurement (IoB 기반의 인체 모션 감지 및 심박수 측정을 위한 HW/SW 플랫폼 개발)

  • Cha, Eunyoung;Seol, Kwon;Lee, Jong Hyun;Kim, Gyeol;Ahn, Haesung;Kwon, Hyuk In;Kim, Hyeongseok;Kim, Jeongchang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.11a
    • /
    • pp.172-174
    • /
    • 2019
  • In this paper, we propose an Internet of Biometry (IoB)-based hardware/software (HW/SW) platform that allows users to monitor their own movement and heart-rate condition. The proposed system uses sensors that can collect the user's biometric information, such as a motion sensor or a heart rate sensor. A microprocessor converts the data collected from the sensors into the biometric information the user needs, and the converted information is delivered to the user's smartphone application via Bluetooth communication. The smartphone application displays the received biometric information so that the user can monitor his or her own condition. Using the proposed system, people engaged in activities such as marine leisure sports can check their own physical condition, and accidents can be prevented.
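As a small illustration of the microprocessor stage that turns raw sensor readings into user-facing biometric information, the sketch below converts beat timestamps from a hypothetical heart rate sensor into beats per minute before they would be sent over Bluetooth.

```python
def bpm_from_beats(beat_times_s):
    """Estimate heart rate in beats per minute from a list of detected beat
    timestamps (seconds). Returns None until at least two beats are seen,
    since one timestamp gives no interval to average."""
    if len(beat_times_s) < 2:
        return None
    intervals = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]
    avg_interval = sum(intervals) / len(intervals)
    return 60.0 / avg_interval
```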


Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions (화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발)

  • Jin, Yong-Kyu;You, Su-Jeong;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.3
    • /
    • pp.171-177
    • /
    • 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these benefits at a reasonable cost, this paper presents a telepresence robot system for video communication that can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and a speaker's eye gaze, known as one of the key non-verbal signals for interaction, can also be inferred from his or her head pose. To develop an efficient head tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with the Lucas-Kanade optical flow, which is known to be suitable for extracting the 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance over varying head directions while maintaining reasonable computational cost. The performance of the proposed head tracking algorithm is verified through experiments using BU's standard data sets. The design of the robot platform is also described, along with supporting systems such as the video transmission and robot control interfaces.
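The Harris corner detector mentioned above scores each pixel by how strongly intensity gradients vary in both directions inside a local window. A minimal pure-NumPy sketch, using simple central differences and a 3x3 box window rather than any particular library implementation:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response map for a 2D grayscale array: corners score
    positive, edges negative, flat regions near zero."""
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def smooth(a):
        # 3x3 box filter over the interior (border stays zero).
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out[1:-1, 1:-1] += a[1 + dy:a.shape[0] - 1 + dy,
                                     1 + dx:a.shape[1] - 1 + dx] / 9.0
        return out

    Sxx, Syy, Sxy = smooth(Ixx), smooth(Iyy), smooth(Ixy)
    det = Sxx * Syy - Sxy * Sxy          # determinant of structure tensor
    trace = Sxx + Syy
    return det - k * trace * trace
```

Corners found this way supply well-conditioned points for the Lucas-Kanade flow to track on the cylindrical head model.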

Design of Computer Vision Interface by Recognizing Hand Motion (손동작 인식에 의한 컴퓨터 비전 인터페이스 설계)

  • Yun, Jin-Hyun;Lee, Chong-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.3
    • /
    • pp.1-10
    • /
    • 2010
  • As various interfacing devices for computers are developed, we introduce a new HCI method that uses hand motion as input. This interface is a vision-based approach using a single camera to detect and track hand movements. Previous research used only skin color for detecting and tracking the hand location; in our design, skin color and shape information are considered together, which increases the hand detection ability. We propose a primary orientation edge descriptor for obtaining edge information. This method uses only one hand model, so no training time is needed. The system consists of a detection part and a tracking part for efficient processing, and the tracking part is quite robust to the orientation of the hand. The system is applied to recognizing hand-written numbers in script style using the DNAC algorithm. The proposed algorithm reaches an 82% recognition ratio in detecting the hand region and 90% in recognizing a number written in script style.
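The paper's primary orientation edge descriptor is not reproduced here, but the general idea of summarizing edge information by gradient orientation can be sketched as a normalized orientation histogram compared by histogram intersection, an assumed simplification:

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Quantize gradient orientations of a 2D grayscale array into a
    magnitude-weighted, normalized histogram: a crude stand-in for an
    edge-orientation descriptor."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # orientations in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    for b in range(bins):
        hist[b] = mag[idx == b].sum()
    s = hist.sum()
    return hist / s if s > 0 else hist

def histogram_match(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return float(np.minimum(h1, h2).sum())
```

With a single stored hand-model histogram, candidate skin-colored regions can be scored by `histogram_match` without any training phase, mirroring the one-model design above.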

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.290-293
    • /
    • 2021
  • In this paper, we implement a learning efficiency checking system that inspires learning motivation and helps improve concentration by detecting the situation of the user while studying. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face or body through a real-time camera. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was implemented for image recognition. After detecting the feature parts of the subject using the CNN, motion detection is performed. The captured image is shown in a GUI written in PyQt5, and data are collected by sending push messages whenever one of the monitored actions is interrupted. In addition, each function can be executed from the main screen of the GUI, including a statistical graph computed from the collected data, a to-do list, and white noise. Through the learning efficiency checking system, various functions including data collection and analysis were provided to users.
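The motion detection step that follows the CNN feature detection is not specified in detail in the abstract; a common baseline is simple frame differencing, sketched below with illustrative thresholds:

```python
def motion_score(prev_frame, curr_frame, threshold=25):
    """Fraction of pixels whose grayscale intensity changed by more than
    `threshold` between two frames (nested lists of 0-255 ints): a simple
    stand-in for the system's motion detection stage."""
    changed = total = 0
    for row_p, row_c in zip(prev_frame, curr_frame):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(c - p) > threshold:
                changed += 1
    return changed / total

def is_moving(score, min_motion=0.05):
    """Flag the frame pair as 'moving' when enough pixels changed; the
    5% cutoff is an assumed, tunable parameter."""
    return score >= min_motion
```

In a system like the one above, a sustained run of "moving" frames on the face or body region would trigger the push message that logs a break in concentration.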


Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.127-133
    • /
    • 2021
  • Recent studies of people physically falling have focused on analyzing fall motions using a recurrent neural network (RNN) and a deep learning approach that obtains good results in detecting 2D human poses from a single color image. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using the skeletal keypoint information extracted by PoseNet from images obtained with a low-cost 2D RGB camera, thereby increasing the accuracy of fall judgments. In particular, we propose a fall detection method based on the characteristics of post-fall posture within the fall motion-analysis framework. A public data set was used to extract human skeletal features, and in an experiment to find a feature extraction method that achieves high classification accuracy, the proposed method showed a 99.8% success rate in detecting falls, more effective than a conventional method using raw skeletal data.
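The core signal the method relies on, the acceleration of a keypoint's positional change, is just the second finite difference of its vertical image coordinate. A minimal sketch with an illustrative threshold (the paper's 99.8% result additionally uses post-fall posture features and a GRU classifier):

```python
def vertical_acceleration(ys, dt=1.0):
    """Second finite difference of a keypoint's vertical position over time.
    Image y grows downward, so a fall produces a large positive spike."""
    return [(ys[i + 1] - 2 * ys[i] + ys[i - 1]) / (dt * dt)
            for i in range(1, len(ys) - 1)]

def detect_fall(head_ys, accel_threshold=20.0):
    """Flag a fall when the head keypoint's downward acceleration exceeds
    an assumed threshold (units: pixels per frame squared)."""
    return any(a > accel_threshold for a in vertical_acceleration(head_ys))
```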

Recognizing Human Facial Expressions and Gesture from Image Sequence (연속 영상에서의 얼굴표정 및 제스처 인식)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research
    • /
    • v.20 no.4
    • /
    • pp.419-425
    • /
    • 1999
  • In this paper, we present a real-time facial expression and gesture recognition algorithm for gray-level image sequences. A combination of template matching and knowledge-based geometrical consideration of the face was adopted to locate the face area in the input image, and an optical flow method was applied to that area to recognize facial expressions. We also suggest a hand area detection algorithm that separates the hand from the background image by analyzing entropy in the image; with the modified hand area detection algorithm, it was possible to recognize hand gestures. The experiments showed that the suggested algorithm was good at recognizing facial expressions and hand gestures by detecting the dominant motion area in images, without being constrained by the background image.
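The entropy analysis used for hand area detection can be sketched as Shannon entropy computed per image block: uniform background blocks score near zero, while textured hand regions score high. The block size and threshold below are illustrative assumptions:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of pixel intensities."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def candidate_blocks(img, block=2, thresh=0.5):
    """Return (row, col) origins of blocks whose entropy exceeds `thresh`;
    these high-texture blocks are candidate hand areas in this sketch."""
    out = []
    for r in range(0, len(img) - block + 1, block):
        for c in range(0, len(img[0]) - block + 1, block):
            vals = [img[r + i][c + j]
                    for i in range(block) for j in range(block)]
            if entropy(vals) > thresh:
                out.append((r, c))
    return out
```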
