• Title/Abstract/Keyword: Human Tracking


모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식 (Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio)

  • 곽내정;송특섭
    • 한국멀티미디어학회논문지 / Vol. 17, No. 5 / pp.556-565 / 2014
  • This paper proposes a method for tracking an object in real-time video from a single web camera and recognizing human actions and risk situations. The proposed method recognizes basic actions that humans perform in daily life and detects risk situations such as fainting and falling down, classifying behavior as either a usual action or a risk situation. The method models the background, obtains the difference image between the input image and the modeled background, extracts the human object from the input image, tracks the object's motion, and recognizes the action. Object tracking uses the moment information of the extracted object, and the recognition features are the change in moments and the ratio of the object's size between frames. The classified actions are four of the most common daily actions, walking, walking diagonally, sitting down, and standing up, while suddenly falling down is classified as a risk situation. To test the proposed method, we applied it to web-camera videos of eight participants, classified the human actions, and recognized the risk situations. The results showed a recognition rate of more than 97 percent for each action and 100 percent for risk situations.
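The moment and size-ratio cues described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the binary mask, the feature set, and the example values are assumptions.

```python
# Sketch (not the authors' code): moment-based cues for action recognition.
# A binary mask (list of 0/1 rows) stands in for the extracted human object.

def raw_moment(mask, p, q):
    """Raw image moment M_pq = sum over pixels of x^p * y^q * value."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(mask)
               for x, v in enumerate(row))

def object_features(mask):
    """Centroid and area of the object; both feed the tracker."""
    m00 = raw_moment(mask, 0, 0)       # object area (pixel count)
    cx = raw_moment(mask, 1, 0) / m00  # centroid x
    cy = raw_moment(mask, 0, 1) / m00  # centroid y
    return cx, cy, m00

def size_ratio(prev_area, cur_area):
    """Ratio of object size between frames; a sudden change can signal
    sitting down, standing up, or falling."""
    return cur_area / prev_area

# Example: a 2x2 object centered in a 4x4 frame
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
cx, cy, area = object_features(mask)
```

A real pipeline would compute these features per frame on the background-subtracted mask and feed the moment changes and size ratios to the action classifier.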

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 8 / pp.3136-3150 / 2015
  • Vision-based 3D tracking of an articulated human hand is one of the major issues in human-computer interaction and in understanding the control of robot hands. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between an actual hand observed by Kinect and a hypothesized 3D hand model. Since the 3D hand pose has 23 degrees of freedom, tracking the hand articulation imposes an excessive computational burden when minimizing this shape discrepancy. For this, we first created a 3D hand model that represents the hand with 17 different parts. Secondly, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model, and was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Through experiments, we showed that the proposed method improves hand part recognition rates and runs at 20-30 fps. The results confirm its practical use in classifying the hand area and in tracking and recovering the 3D hand pose in real time.
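The Haar-like features mentioned above are typically evaluated on an integral image so that each feature costs a constant number of lookups. A minimal sketch under that assumption follows; the paper's actual feature set and depth preprocessing are not reproduced here.

```python
# Sketch: integral image and a two-rectangle Haar-like feature, the kind
# of feature evaluated on depth patches instead of per-pixel classification.

def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0..y-1] x [0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle with top-left corner (x, y),
    using four integral-image lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature over a 2w x h window."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

Each feature response would then be thresholded by the trained Random Forest to vote for a hand part label.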

조명 변화에 강인한 실시간 얼굴 추적 알고리즘 (Real-Time Face Tracking Algorithm Robust to Illumination Variations)

  • 이용범;유범재;이성환;김광배
    • 대한전기학회:학술대회논문집 / 대한전기학회 2000년도 하계학술대회 논문집 D / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control, and various algorithms have been developed over a long period. In many cases, however, they have shown limited results under uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space, without considering illumination changes. Our new algorithm, however, constructs a 3D color model by analyzing a large number of images acquired under various illumination conditions. The algorithm was applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.

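One simple way to realize a 3D color model of the kind described is a quantized RGB histogram learned from face pixels gathered under varied illumination. The bin count and normalization below are assumptions, not the paper's exact model.

```python
# Sketch: a quantized 3D color histogram used as a face-color membership
# model (illustrative; the paper's model construction differs in detail).

BINS = 8            # quantization levels per channel (assumed)
STEP = 256 // BINS  # width of one histogram bin

def build_model(samples):
    """Count face-pixel samples, given as (r, g, b) tuples, in a 3D
    histogram and normalize counts into membership values in [0, 1]."""
    hist = {}
    for r, g, b in samples:
        key = (r // STEP, g // STEP, b // STEP)
        hist[key] = hist.get(key, 0) + 1
    total = len(samples)
    return {k: v / total for k, v in hist.items()}

def membership(model, pixel):
    """Degree to which a pixel belongs to the modeled face color."""
    r, g, b = pixel
    return model.get((r // STEP, g // STEP, b // STEP), 0.0)

# Example: two illumination conditions of the same face color
model = build_model([(200, 120, 100), (200, 120, 100),
                     (100, 100, 100), (100, 100, 100)])
```

At tracking time, each frame pixel would be scored with `membership` and high-scoring regions grouped into face candidates.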

퍼지제어를 이용한 얼굴추적 카메라 구동 시스템의 설계 및 구현 (Design and Implementation of Driving System for Face Tracking Camera using Fuzzy Control)

  • 이종배;임준홍
    • 전자공학회논문지SC / Vol. 40, No. 3 / pp.127-134 / 2003
  • In this paper, we implement a system that drives a camera to track a human face using fuzzy control. In the camera system with a pan-tilt structure, the camera first sends images to a PC, the PC transmits the tracking coordinates computed by the tracking algorithm back to the camera, and the camera tracks the target face in real time. The two-axis step motors driving the camera must be moved to the target coordinates transmitted from the PC as quickly and as smoothly as possible. To this end, this paper proposes a fuzzy controller that generates the acceleration/deceleration drive frequencies and controls the two step motors quickly and smoothly. An experimental apparatus was built and experiments were performed to verify the effectiveness of the proposed method.
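The acceleration/deceleration drive frequency produced by such a fuzzy controller can be sketched with triangular memberships and weighted-average defuzzification. The rule table, membership ranges, and frequency limits below are illustrative assumptions, not the paper's tuned values.

```python
# Sketch: a tiny fuzzy controller mapping the remaining angular error
# (in motor steps) to a step-motor drive frequency. Membership shapes,
# the rule table, and the frequency range are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def drive_frequency(error_steps):
    """Weighted-average defuzzification over three rules:
    SMALL error -> slow, MEDIUM -> mid, LARGE -> fast drive frequency."""
    e = min(abs(error_steps), 1000)  # clamp to the modeled range
    rules = [
        (tri(e, -1, 0, 300), 200.0),        # SMALL  -> 200 Hz
        (tri(e, 100, 400, 800), 1000.0),    # MEDIUM -> 1 kHz
        (tri(e, 500, 1000, 1500), 2000.0),  # LARGE  -> 2 kHz
    ]
    num = sum(w * f for w, f in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because adjacent memberships overlap, the output frequency ramps smoothly between the slow and fast regimes instead of stepping, which is what keeps the motors from losing steps during acceleration.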

Real-Time Eye Tracking Using IR Stereo Camera for Indoor and Outdoor Environments

  • Lim, Sungsoo;Lee, Daeho
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 8 / pp.3965-3983 / 2017
  • We propose a novel eye tracking method that can estimate 3D world coordinates using an infrared (IR) stereo camera for indoor and outdoor environments. This method first detects dark evidences such as eyes, eyebrows and mouths by fast multi-level thresholding. Among these evidences, eye pair evidences are detected by evidential reasoning and geometrical rules. For robust accuracy, two classifiers based on multiple layer perceptron (MLP) using gradient local binary patterns (GLBPs) verify whether the detected evidences are real eye pairs or not. Finally, the 3D world coordinates of detected eyes are calculated by region-based stereo matching. Compared with other eye detection methods, the proposed method can detect the eyes of people wearing sunglasses due to the use of the IR spectrum. Especially, when people are in dark environments such as driving at nighttime, driving in an indoor carpark, or passing through a tunnel, human eyes can be robustly detected because we use active IR illuminators. In the experimental results, it is shown that the proposed method can detect eye pairs with high performance in real-time under variable illumination conditions. Therefore, the proposed method can contribute to human-computer interactions (HCIs) and intelligent transportation systems (ITSs) applications such as gaze tracking, windshield head-up display and drowsiness detection.
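The fast multi-level thresholding step can be illustrated by banding a grayscale image and keeping the darkest band as eye/eyebrow/mouth candidates. The paper selects its levels adaptively, so the fixed boundaries here are assumptions.

```python
# Sketch: multi-level thresholding that bands a grayscale image and keeps
# the darkest band as candidate "dark evidences" (eyes, eyebrows, mouths).
# Fixed band boundaries are illustrative; the paper chooses them adaptively.

THRESHOLDS = [60, 130, 200]  # band boundaries on the 0-255 scale (assumed)

def band(pixel):
    """Return the band index (0 = darkest) for a grayscale value."""
    for i, t in enumerate(THRESHOLDS):
        if pixel < t:
            return i
    return len(THRESHOLDS)

def dark_evidence_mask(img):
    """Binary mask marking pixels that fall in the darkest band."""
    return [[1 if band(p) == 0 else 0 for p in row] for row in img]
```

The mask would then be passed to the evidential-reasoning and geometric-rule stages that pair up eye candidates.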

Real-Time Tracking of Human Location and Motion using Cameras in a Ubiquitous Smart Home

  • Shin, Dong-Kyoo;Shin, Dong-Il;Nguyen, Quoc Cuong;Park, Se-Young
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 3, No. 1 / pp.84-95 / 2009
  • The ubiquitous smart home is the home of the future: it exploits context information from both the human and the home environment to provide automatic home services. Human location and motion are the most important contexts in the ubiquitous smart home. In this paper, we present a real-time human tracker that predicts human location and motion for the ubiquitous smart home. The system uses four network cameras for real-time human tracking. This paper explains the architecture of the real-time human tracker and proposes an algorithm for predicting human location and motion. To detect human location, three kinds of images are used: IMAGE_1, the empty room; IMAGE_2, the furniture and home appliances; and IMAGE_3, IMAGE_2 plus the human. By analyzing the three images, the real-time human tracker decides which specific piece of furniture or home appliance the human is associated with, and predicts human motion using a support vector machine (SVM). Locating the human from the three images took an average of 0.037 seconds. The SVM feature for human motion recognition is derived from the pixel count on each array line of the moving object. We evaluated each motion 1,000 times; the average accuracy over all types of motion was 86.5%.
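The SVM feature named in the abstract, the pixel count per array line of the moving object, can be sketched directly; the normalization step below is an added assumption.

```python
# Sketch: per-row foreground pixel counts of the moving-object mask,
# the kind of feature vector fed to the motion SVM. Normalizing by the
# peak count is an assumption, not stated in the abstract.

def motion_feature(mask):
    """Count foreground pixels on each row (array line) of the binary
    mask and normalize by the largest count so the vector is
    comparable across object sizes."""
    counts = [sum(row) for row in mask]
    peak = max(counts) or 1  # avoid division by zero on an empty mask
    return [c / peak for c in counts]
```

Each motion class (walking, sitting, and so on) produces a characteristic row-count profile, which is what makes this a usable SVM input.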

AIS 기반 관제의 문제점 보완 및 모니터 화면 개선을 통한 관제향상 방안 (Improving Vessel Traffic Services by Addressing the Problems of AIS-based Surveillance and Improving Monitor Displays)

  • 김영신;하윤주;임표택;김유순
    • 한국항해항만학회:학술대회논문집 / 한국항해항만학회 2012년도 춘계학술대회 / pp.573-575 / 2012
  • When AIS is integrated into an existing RADAR-based VTS, the AIS and RADAR data from the same vessel must remain correlated so that target tracking continues despite the unpredictable AIS data transmission rate. In practice, however, automatic switchover to RADAR tracking frequently fails when the AIS signal is lost. In addition, because the three VTS monitors display different scales and different control areas, the areas near the monitor edges become blind spots where the operator's attention inevitably drops. These problems hinder the operator's construction of a traffic image and situational awareness, and raise the probability of accidents. This study proposes measures to improve VTS operations: reorganizing the VTS monitor displays to support the operator's situational awareness, securing stable target tracking by supplementing the AIS-RADAR tracking algorithm, and training operators so that they fully understand AIS characteristics and error phenomena.


고해상도 지능형 감시시스템을 위한 실시간 얼굴영역 추적 (Real-time face tracking for high-resolution intelligent surveillance system)

  • 권오현;김상진;김영욱;백준기
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 신호처리소사이어티 추계학술대회 논문집 / pp.317-320 / 2003
  • In this paper, we present a real-time, accurate face region detection and tracking technique for an intelligent surveillance system. Obtaining high-resolution images is very important because it enables accurate identification of an object of interest. Conventional surveillance or security systems, however, usually provide poor image quality because they use one or more fixed cameras and keep recording scenes indiscriminately. We implemented a real-time surveillance system that tracks a moving person using pan-tilt-zoom (PTZ) cameras. While tracking, the region of interest (ROI) is obtained using a low-pass filter and background subtraction. Color information in the ROI is updated to extract features for optimal tracking and zooming. Experiments with real human faces showed highly acceptable results in terms of both accuracy and computational efficiency.

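The low-pass filtering of the ROI mentioned above can be illustrated with a first-order exponential filter on the ROI center; the gain value is an assumption.

```python
# Sketch: first-order exponential low-pass filter applied to the ROI
# center while a PTZ camera tracks, so the camera motion stays smooth
# even when per-frame detections jitter. alpha = 0.3 is an assumed gain.

def lowpass(prev, new, alpha=0.3):
    """Blend the previous ROI center with the new detection; smaller
    alpha means heavier smoothing."""
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))

# Example: two noisy detections pulling the center to the right
center = (100.0, 100.0)
for detection in [(110.0, 100.0), (120.0, 100.0)]:
    center = lowpass(center, detection)
```

The smoothed center, not the raw detection, would drive the pan-tilt commands, trading a small lag for stable zooming.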

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 1 / pp.61-69 / 2007
  • In this paper, we developed not only a reliable sound localization system including a VAD(Voice Activity Detection) component using three microphones but also a face tracking system using a vision camera. Moreover, we proposed a way to integrate three systems in the human-robot interaction to compensate errors in the localization of a speaker and to reject unnecessary speech or noise signals entering from undesired directions effectively. For the purpose of verifying our system's performances, we installed the proposed audio-visual system in a prototype robot, called IROBAA(Intelligent ROBot for Active Audition), and demonstrated how to integrate the audio-visual system.
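Sound localization of the kind described usually starts from time differences of arrival between microphone pairs. A minimal two-microphone sketch of that geometry follows; the system above uses three microphones, and the spacing and speed of sound here are assumptions.

```python
import math

# Sketch: direction-of-arrival from the arrival-time difference between
# one pair of microphones. d = 0.2 m spacing and c = 343 m/s are assumed
# values, not taken from the paper (which combines three microphones).

C = 343.0  # speed of sound in air, m/s
D = 0.2    # microphone spacing, m

def doa_degrees(delay_s):
    """Angle of the source measured from the broadside direction of the
    microphone pair; delay_s is the inter-microphone time difference."""
    s = max(-1.0, min(1.0, C * delay_s / D))  # clamp numerical noise
    return math.degrees(math.asin(s))
```

With three microphones, two such pair estimates can be intersected to resolve a full-circle bearing, which the face tracker then confirms or rejects.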

인간 행동 분석을 이용한 위험 상황 인식 시스템 구현 (A Dangerous Situation Recognition System Using Human Behavior Analysis)

  • 박준태;한규필;박양우
    • 한국멀티미디어학회논문지 / Vol. 24, No. 3 / pp.345-354 / 2021
  • Recently, deep-learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-frame object recognition methods, which are insufficient for long-term temporal analysis and high-level situation management. We therefore propose a method that recognizes specific dangerous situations caused by humans in real time, using deep-learning-based object analysis techniques. The proposed method uses deep-learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency-awareness functions such as 'falling down', so that notifications can be issued not only in security settings but also in emergency-response settings.
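A joint-pose 'falling down' check of the kind mentioned can be sketched as a heuristic on torso orientation and hip drop. The thresholds and the heuristic itself are illustrative assumptions rather than the paper's learned model.

```python
import math

# Sketch: a simple 'falling down' heuristic on 2D joint positions in
# pixel coordinates (y grows downward). The neck/hip joints, thresholds,
# and the rule itself are hypothetical; the paper analyzes pose data
# with learned models rather than a fixed rule.

def torso_angle(neck, hip):
    """Angle of the neck->hip segment from vertical, in degrees
    (0 = upright, near 90 = lying horizontally)."""
    dx, dy = hip[0] - neck[0], hip[1] - neck[1]
    return abs(math.degrees(math.atan2(dx, dy)))

def is_fall(neck, hip, hip_drop_px, angle_thresh=60.0, drop_thresh=80.0):
    """Flag a fall when the torso is near-horizontal AND the hip joint
    dropped quickly between consecutive frames."""
    return torso_angle(neck, hip) > angle_thresh and hip_drop_px > drop_thresh
```

A deployed system would apply such a check over a short window of tracked poses so that one noisy frame cannot trigger an alarm.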