• Title/Summary/Keyword: Human Tracking


Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio (모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of Korea Multimedia Society / v.17 no.5 / pp.556-565 / 2014
  • This paper proposes a method to track objects in real-time video captured by a single web camera and to recognize human actions and risk situations. The method recognizes basic actions that people perform in daily life and detects risk situations such as fainting and falling down, distinguishing usual actions from risk situations. It models the background, computes the difference image between the input image and the modeled background, extracts the human object, tracks the object's motion, and recognizes the action. Tracking uses the moment information of the extracted object, and actions are characterized by the change in moments and the ratio of the object's size between frames. Four common daily actions are classified (walking, walking diagonally, sitting down, and standing up), while a sudden fall is classified as a risk situation. To test the method, we applied it to web-camera video of eight participants, classified their actions, and recognized risk situations. The method achieved a recognition rate above 97 percent for each action and 100 percent for risk situations.
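
A minimal sketch of the background-differencing and moment-tracking pipeline described above, assuming OpenCV; the threshold and loop structure are illustrative, not the authors' implementation:

```python
# Sketch of background subtraction plus moment-based tracking,
# assuming OpenCV; threshold values are illustrative only.
import cv2

cap = cv2.VideoCapture(0)                             # single web camera
ret, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # modeled background

prev_area = None
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)              # difference image
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    m = cv2.moments(mask)                             # moments of the extracted object
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid for tracking
        area = m["m00"]
        if prev_area:
            size_ratio = area / prev_area             # object-size ratio between frames
            # a sudden large change in the ratio could signal falling down
        prev_area = area
```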

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3136-3150 / 2015
  • Vision-based 3D tracking of the articulated human hand is a major issue in human-computer interaction and in understanding the control of robot hands. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea is to solve an optimization problem that minimizes the discrepancy in 3D shape between the actual hand observed by the Kinect and a hypothesized 3D hand model. Since the 3D hand pose has 23 degrees of freedom, tracking hand articulation by minimizing this shape discrepancy carries an excessive computational burden. To address this, we first created a 3D hand model that represents the hand with 17 different parts. Second, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model, and was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Experiments showed improved hand-part recognition rates and a performance of 20-30 fps, confirming that the method is practical for classifying the hand area and for tracking and recovering the 3D hand pose in real time.
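
The Haar-like feature extraction over depth patches and the Random Forest training step could look roughly like the following sketch, assuming scikit-learn and placeholder synthetic data; the feature windows, patch size, and labels are my assumptions:

```python
# Sketch of training a Random Forest on Haar-like depth features,
# assuming scikit-learn; the data here is a synthetic placeholder.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def haar_like_features(depth_patch, windows):
    """Difference of mean depth between two rectangles per feature."""
    feats = []
    for r1, r2 in windows:
        (y0, x0, y1, x1) = r1
        (y2, x2, y3, x3) = r2
        feats.append(depth_patch[y0:y1, x0:x1].mean()
                     - depth_patch[y2:y3, x2:x3].mean())
    return feats

# stand-in for depth patches rendered from an animated 3D hand model
rng = np.random.default_rng(0)
patches = rng.random((1000, 32, 32))
labels = rng.integers(0, 17, 1000)        # 17 hand parts
windows = [((0, 0, 16, 16), (16, 16, 32, 32)),
           ((0, 16, 16, 32), (16, 0, 32, 16))]

X = np.array([haar_like_features(p, windows) for p in patches])
forest = RandomForestClassifier(n_estimators=50).fit(X, labels)
```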

Real-Time Face Tracking Algorithm Robust to illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom;You, Bum-Jae;Lee, Seong-Whan;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control, and various algorithms have been developed over the years. In many cases, however, they have shown limited results in uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space without considering illumination changes. Our new algorithm instead constructs a 3D color model by analyzing many images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.
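
A 3D color model of this kind can be approximated as a normalized 3D histogram accumulated over training images; a minimal sketch assuming OpenCV and NumPy, with bin count and probability threshold chosen for illustration only:

```python
# Sketch of a 3D color model built from images under varied illumination,
# assuming OpenCV/NumPy; bins and threshold are illustrative assumptions.
import cv2
import numpy as np

BINS = 32

def build_model(face_images):
    """Accumulate a normalized 3D histogram over the B, G, R channels."""
    hist = np.zeros((BINS, BINS, BINS), np.float64)
    for img in face_images:
        hist += cv2.calcHist([img], [0, 1, 2], None,
                             [BINS, BINS, BINS], [0, 256] * 3)
    return hist / hist.sum()

def face_mask(img, model, thresh=1e-6):
    """Mark pixels whose color is likely under the 3D model."""
    idx = (img // (256 // BINS)).reshape(-1, 3)
    probs = model[idx[:, 0], idx[:, 1], idx[:, 2]]
    return (probs > thresh).reshape(img.shape[:2]).astype(np.uint8) * 255
```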


Design and Implementation of Driving System for Face Tracking Camera using Fuzzy Control (퍼지제어를 이용한 얼굴추적 카메라 구동 시스템의 설계 및 구현)

  • 이종배;임준홍
    • Journal of the Institute of Electronics Engineers of Korea SC / v.40 no.3 / pp.127-134 / 2003
  • In this paper, the speed control problem of a moving camera is investigated for tracking the movement of a human face. The camera system with a pan-tilt mechanism sends an image to a PC, and the PC sends tracking coordinates back to the camera, so that the camera tracks a human face in real time. The speed of the stepping motors that move the camera must be controlled so that the camera reaches the target region quickly and smoothly. A fuzzy logic controller is proposed for driving the step motors: by generating an acceleration and deceleration speed profile, the motor speed is controlled quickly and smoothly. Experiments show the effectiveness of the proposed method.
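
A fuzzy speed controller of this kind can be sketched with triangular membership functions mapping the tracking error to a motor speed; the membership ranges and rule outputs below are illustrative assumptions, not the paper's design:

```python
# Sketch of a fuzzy speed controller for the pan motor; the membership
# functions and rule table are illustrative, not the paper's values.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(error_px):
    """Map pixel error between face and image center to motor speed (steps/s)."""
    small = tri(abs(error_px), -1, 0, 60)
    medium = tri(abs(error_px), 30, 90, 150)
    large = tri(abs(error_px), 120, 240, 1e9)   # saturating upper set
    # rules: small error -> slow, medium -> moderate, large -> fast
    num = small * 50 + medium * 300 + large * 800
    den = small + medium + large
    return (num / den if den else 0.0) * (1 if error_px >= 0 else -1)

print(fuzzy_speed(100))   # moderate speed toward the positive direction
```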

Real-Time Eye Tracking Using IR Stereo Camera for Indoor and Outdoor Environments

  • Lim, Sungsoo;Lee, Daeho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.3965-3983 / 2017
  • We propose a novel eye tracking method that can estimate 3D world coordinates using an infrared (IR) stereo camera in both indoor and outdoor environments. The method first detects dark evidence such as eyes, eyebrows, and mouths by fast multi-level thresholding. From this evidence, eye-pair candidates are detected by evidential reasoning and geometric rules. For robust accuracy, two classifiers based on multilayer perceptrons (MLPs) using gradient local binary patterns (GLBPs) verify whether the detected candidates are real eye pairs. Finally, the 3D world coordinates of the detected eyes are calculated by region-based stereo matching. Compared with other eye detection methods, the proposed method can detect the eyes of people wearing sunglasses thanks to the use of the IR spectrum. In particular, when people are in dark environments, such as driving at night, driving in an indoor car park, or passing through a tunnel, human eyes can still be detected robustly because active IR illuminators are used. Experimental results show that the proposed method detects eye pairs with high performance in real time under variable illumination conditions. It can therefore contribute to human-computer interaction (HCI) and intelligent transportation system (ITS) applications such as gaze tracking, windshield head-up displays, and drowsiness detection.
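
The final triangulation step, recovering 3D coordinates from an eye matched in a rectified stereo pair, reduces to the standard disparity relation Z = f·B/d; a minimal sketch with illustrative camera parameters:

```python
# Sketch of recovering 3D eye coordinates from stereo disparity,
# assuming a rectified IR stereo pair; parameters are illustrative.
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.12     # camera baseline in meters

def eye_to_3d(u_left, v_left, u_right, cx=320.0, cy=240.0):
    """Triangulate one eye from its column positions in both images."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("eye must be matched in both images")
    z = FOCAL_PX * BASELINE_M / disparity      # depth
    x = (u_left - cx) * z / FOCAL_PX           # lateral offset
    y = (v_left - cy) * z / FOCAL_PX           # vertical offset
    return x, y, z

print(eye_to_3d(350.0, 230.0, 330.0))   # an eye roughly 4.2 m away
```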

Real-Time Tracking of Human Location and Motion using Cameras in a Ubiquitous Smart Home

  • Shin, Dong-Kyoo;Shin, Dong-Il;Nguyen, Quoc Cuong;Park, Se-Young
    • KSII Transactions on Internet and Information Systems (TIIS) / v.3 no.1 / pp.84-95 / 2009
  • The ubiquitous smart home is the home of the future: it exploits context information from both the human and the home environment to provide automatic home services. Human location and motion are the most important contexts in the ubiquitous smart home. In this paper, we present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras. The paper explains the architecture of the real-time human tracker and proposes an algorithm for predicting human location and motion. To detect human location, three kinds of images are used: IMAGE_1, the empty room; IMAGE_2, the room with furniture and home appliances; and IMAGE_3, IMAGE_2 plus the human. By analyzing the three images, the tracker decides which specific piece of furniture or home appliance the human is associated with, and predicts human motion using a support vector machine (SVM). Locating the human from the three images took an average of 0.037 seconds. The SVM feature for motion recognition is derived from the pixel counts along the array lines of the moving object. We evaluated each motion 1,000 times; the average accuracy over all types of motion was 86.5%.
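
The three-image comparison can be sketched as a difference between IMAGE_3 and IMAGE_2 followed by association with the nearest furniture or appliance region; a minimal illustration assuming OpenCV, with the threshold and region format as assumptions:

```python
# Sketch of the three-image comparison for locating the human,
# assuming OpenCV; the threshold value is illustrative.
import cv2

def human_mask(image2, image3, thresh=25):
    """Difference IMAGE_3 (scene + human) against IMAGE_2 (scene only)."""
    g2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
    g3 = cv2.cvtColor(image3, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g3, g2)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def nearest_appliance(mask, appliance_centers):
    """Associate the human blob with the closest (x, y) appliance center."""
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return min(appliance_centers,
               key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)
```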

Improving Vessel Traffic Control by Supplementing the Problems of AIS-based Control and Improving the Monitor Display (AIS 기반 관제의 문제점 보완 및 모니터 화면 개선을 통한 관제향상 방안)

  • Kim, Yeong-Sin;Ha, Yun-Ju;Im, Pyo-Taek;Kim, Yu-Sun
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2012.06a / pp.573-575 / 2012
  • When AIS is integrated into an existing RADAR-based VTS, AIS and RADAR data from the same vessel should stay correlated and target tracking should continue despite the unpredictable AIS data transmission rate; in practice, however, automatic switchover to RADAR tracking often fails when the AIS signal is lost. In addition, because the three VTS monitors display different scales and different control areas, the areas near the monitor edges become blind spots where the operator's attention inevitably drops. These problems hinder the operator's construction of a traffic image and situational awareness, and raise the likelihood of accidents. This study proposes ways to improve VTS operations by reorganizing the VTS monitor displays to support the operator's situational awareness, by supplementing the AIS-RADAR tracking algorithm to secure stable target tracking, and by training operators to fully understand AIS characteristics and error behavior.
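
One plausible shape for the supplemented AIS-RADAR tracking logic is a fused track that falls back to the correlated radar plot once AIS has been silent for too long; the sketch below is an illustrative assumption, not the system described in the paper:

```python
# Sketch of an AIS-loss fallback that keeps a fused target track;
# the timeout and structure are illustrative assumptions.
import time

AIS_TIMEOUT_S = 30.0   # assumed maximum silent interval before fallback

class FusedTrack:
    def __init__(self, mmsi):
        self.mmsi = mmsi
        self.source = "AIS"
        self.last_ais = time.monotonic()
        self.position = None

    def on_ais(self, position):
        self.last_ais = time.monotonic()
        self.source = "AIS"
        self.position = position

    def on_radar(self, position):
        # keep the correlated radar plot; take over when AIS goes silent
        if time.monotonic() - self.last_ais > AIS_TIMEOUT_S:
            self.source = "RADAR"   # automatic switchover, no track loss
        if self.source == "RADAR":
            self.position = position
```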


Real-time face tracking for high-resolution intelligent surveillance system (고해상도 지능형 감시시스템을 위한 실시간 얼굴영역 추적)

  • 권오현;김상진;김영욱;백준기
    • Proceedings of the IEEK Conference / 2003.11a / pp.317-320 / 2003
  • In this paper, we present a real-time, accurate face-region detection and tracking technique for an intelligent surveillance system. It is very important to obtain high-resolution images, which enable accurate identification of an object of interest. Conventional surveillance or security systems, however, usually provide poor image quality because they use one or more fixed cameras and record scenes indiscriminately. We implemented a real-time surveillance system that tracks a moving person using pan-tilt-zoom (PTZ) cameras. While tracking, the region of interest (ROI) is obtained using a low-pass filter and background subtraction. Color information in the ROI is updated to extract features for optimal tracking and zooming. Experiments with real human faces showed highly acceptable results in terms of both accuracy and computational efficiency.
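
The ROI extraction with a low-pass background model, and a proportional pan-tilt correction toward the ROI centroid, might be sketched as follows; the low-pass factor and gain are illustrative assumptions:

```python
# Sketch of ROI extraction with a low-pass background model and a
# pan-tilt correction toward the ROI; the constants are illustrative.
import cv2
import numpy as np

ALPHA = 0.05   # low-pass factor for the running background
GAIN = 0.1     # proportional pan/tilt gain (degrees per pixel)

background = None

def track_step(frame):
    global background
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        background = gray.copy()
    background = (1 - ALPHA) * background + ALPHA * gray   # low-pass filter
    mask = (cv2.absdiff(gray, background) > 25).astype(np.uint8)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return 0.0, 0.0
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    return GAIN * (cx - w / 2), GAIN * (cy - h / 2)   # pan, tilt commands
```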


Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.61-69 / 2007
  • In this paper, we developed a reliable sound localization system with a VAD (voice activity detection) component using three microphones, as well as a face tracking system using a vision camera. Moreover, we propose a way to integrate the three systems for human-robot interaction, compensating for errors in speaker localization and effectively rejecting unwanted speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated the integrated audio-visual system.
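
Sound localization with a small microphone array typically rests on time differences of arrival (TDOA); a simplified two-microphone sketch using cross-correlation, with spacing and sample rate as illustrative assumptions (the paper uses three microphones):

```python
# Sketch of TDOA direction estimation for one microphone pair;
# a simplified stand-in for the three-microphone localization system.
import numpy as np

FS = 16000          # sample rate (Hz)
MIC_DIST = 0.2      # microphone spacing (m), an illustrative assumption
SPEED = 343.0       # speed of sound (m/s)

def direction_of_arrival(sig_a, sig_b):
    """Estimate azimuth from the lag maximizing the cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)        # delay in samples
    delay = lag / FS
    sin_theta = np.clip(delay * SPEED / MIC_DIST, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```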

A Dangerous Situation Recognition System Using Human Behavior Analysis (인간 행동 분석을 이용한 위험 상황 인식 시스템 구현)

  • Park, Jun-Tae;Han, Kyu-Phil;Park, Yang-Woo
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.345-354 / 2021
  • Recently, deep learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-image object recognition methods, which are insufficient for long-term temporal analysis and high-dimensional situation management. Therefore, we propose a method that recognizes specific dangerous situations caused by humans in real time, utilizing deep learning-based object analysis techniques. The proposed method uses deep learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency-awareness functions such as 'falling down', enabling notifications not only in security settings but also in emergency environments.
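
A joint-pose-based 'falling down' check might combine a fast drop of the hip center with a flattened body posture; the sketch below assumes COCO-style keypoints and illustrative thresholds, not the paper's exact criterion:

```python
# Sketch of a 'falling down' check on extracted joint keypoints,
# assuming COCO-style (x, y) joints; thresholds are illustrative.
def is_falling(keypoints_prev, keypoints_curr, dt, drop_speed=0.5):
    """Flag a fall when the hip center drops fast and the body lies flat.

    keypoints: dict of joint name -> (x, y) in normalized image coords.
    """
    hip_prev = keypoints_prev["left_hip"][1] + keypoints_prev["right_hip"][1]
    hip_curr = keypoints_curr["left_hip"][1] + keypoints_curr["right_hip"][1]
    drop_rate = (hip_curr - hip_prev) / (2 * dt)    # y grows downward

    head_y = keypoints_curr["nose"][1]
    ankle_y = (keypoints_curr["left_ankle"][1]
               + keypoints_curr["right_ankle"][1]) / 2
    lying_flat = abs(head_y - ankle_y) < 0.15       # small vertical extent

    return drop_rate > drop_speed and lying_flat
```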