• Title/Summary/Keyword: Human silhouette tracking

ACMs-based Human Shape Extraction and Tracking System for Human Identification (개인 인증을 위한 활성 윤곽선 모델 기반의 사람 외형 추출 및 추적 시스템)

  • Park, Se-Hyun;Kwon, Kyung-Su;Kim, Eun-Yi;Kim, Hang-Joon
    • Journal of Korea Society of Industrial Information Systems, v.12 no.5, pp.39-46, 2007
  • Research on human identification in ubiquitous environments has recently attracted considerable attention. Among such approaches, gait recognition is an efficient method of identifying a person at a distance from the physical features of their walk. In this paper, we present a human shape extraction and tracking method for gait recognition using geodesic active contour models (GACMs) combined with the mean shift algorithm. Active contour models (ACMs) are very effective for dealing with non-rigid objects because of their elastic property, but their performance depends heavily on the initial curve. To overcome this problem, we combine the mean shift algorithm with the traditional GACMs. The main idea is simple: before evolution by the level set method, the initial curve in each frame is re-localized near the human region and resized to enclose the target region. This mechanism reduces the number of iterations and handles large object motion. The proposed system is composed of a human region detection module and a human shape tracking module. In the human region detection module, the silhouette of a walking person is extracted by background subtraction and morphological operations. The human shape is then accurately obtained by the GACMs with the mean shift algorithm. Experimental results show that the proposed method extracts and tracks shapes accurately and efficiently enough for gait recognition.
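
The re-localization step described above, which moves the initial curve near the human region before level set evolution, can be sketched with a minimal mean shift iteration over a binary silhouette mask. The window size, toy blob, and convergence rule below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mean_shift_window(mask, center, half, iters=20):
    """Shift a square window toward the centroid of foreground
    pixels inside it, as a stand-in for re-localizing the GACM
    initial curve near the human region."""
    cy, cx = center
    for _ in range(iters):
        y0, y1 = max(cy - half, 0), cy + half + 1
        x0, x1 = max(cx - half, 0), cx + half + 1
        ys, xs = np.nonzero(mask[y0:y1, x0:x1])
        if len(ys) == 0:          # no foreground in window
            break
        ny = int(round(ys.mean())) + y0
        nx = int(round(xs.mean())) + x0
        if (ny, nx) == (cy, cx):  # converged
            break
        cy, cx = ny, nx
    return cy, cx

# toy silhouette: a filled square blob; the window starts off-centre
mask = np.zeros((40, 40), dtype=bool)
mask[15:25, 15:25] = True
center = mean_shift_window(mask, (5, 5), half=12)
```

After a few iterations the window centre settles on the blob, which is where the GACM initial curve would then be placed and resized.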

A methodology for evaluating human operator's fitness for duty in nuclear power plants

  • Choi, Moon Kyoung;Seong, Poong Hyun
    • Nuclear Engineering and Technology, v.52 no.5, pp.984-994, 2020
  • It is reported that about 20% of accidents at nuclear power plants in Korea and abroad are caused by human error. One of the main factors contributing to human error is fatigue, so it is necessary to prevent errors that may occur when a task is performed in an unfit state by assessing the operator's status in advance. In this study, we propose a method of evaluating an operator's fitness for duty (FFD) using various parameters, including eye movement data, subjective fatigue ratings, and operator performance. The parameters for evaluating FFD were selected through a literature survey. We performed experiments in which subjects experiencing various levels of fatigue monitored indicator information and diagnosed a system malfunction. To find meaningful characteristics in the measured data, hierarchical clustering analysis, an unsupervised machine-learning technique, was used. The characteristics of each cluster were analyzed, and the fitness for duty of each cluster was evaluated. The appropriateness of the number of clusters obtained through clustering analysis was assessed using both the Elbow and Silhouette methods. Finally, it was statistically shown that the suggested methodology for evaluating FFD does not induce additional fatigue in subjects. Relevance to industry: The proposed methodology for evaluating an operator's fitness for duty in advance can prevent human errors caused by unfit operator conditions in the nuclear industry.
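
The Silhouette method used above to validate the number of clusters can be illustrated with a from-scratch computation of the mean silhouette coefficient; this is a minimal numpy sketch on invented toy data, not the study's pipeline:

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette coefficient: for each point, a = mean distance
    to its own cluster, b = lowest mean distance to another cluster,
    s = (b - a) / max(a, b)."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(len(X)):
        own = labels == labels[i]
        own[i] = False
        if not own.any():          # singleton cluster => s = 0
            scores.append(0.0)
            continue
        a = D[i, own].mean()
        b = min(D[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# two well-separated toy clusters: a correct labelling scores near 1,
# a scrambled labelling scores below 0
X = [[0, 0], [0, 1], [10, 10], [10, 11]]
good = silhouette_score(X, [0, 0, 1, 1])
bad = silhouette_score(X, [0, 1, 0, 1])
```

A k that maximizes this score is the cluster count the Silhouette method favours.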

Multiple Camera-Based Correspondence of Ground Foot for Human Motion Tracking (사람의 움직임 추적을 위한 다중 카메라 기반의 지면 위 발의 대응)

  • Seo, Dong-Wook;Chae, Hyun-Uk;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems, v.14 no.8, pp.848-855, 2008
  • In this paper, we describe correspondence among multiple images taken by multiple cameras. Correspondence among multiple views is an interesting problem that often arises in applications such as visual surveillance and gesture recognition. We use the principal axis and the ground plane homography to estimate the position of a person's feet. The principal axis is derived from the silhouette region of the person, obtained by subtracting predetermined multiple background models from the current image containing the moving person. To compute the ground plane homography, we use landmarks on the ground plane in 3D space; the homography thus relates common points across different views. Since a person's feet occupy the same position in 3D space regardless of view, we represent that position as an intersection: the point where the principal axis in one image crosses the ground line transformed from another image. The positions of these intersections differ depending on the camera view, so we establish correspondences between the intersection in the current image and the intersection transformed from the other image by the homography. Corresponding points must lie within a short distance when measured on the top-view plane, and the person is then tracked by these corresponding points on the ground plane. Experimental results show that the proposed algorithm achieves about 90% person detection accuracy for correspondence-based tracking.
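
The core operation, transferring foot points from each view onto the common ground plane via a homography and accepting a correspondence when the transferred points are close, can be sketched as follows. The 3x3 homographies and the distance threshold below are invented for illustration:

```python
import numpy as np

def apply_homography(H, pt):
    """Map an image point to the common ground (top-view) plane
    with a 3x3 homography, in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

def correspond(H1, H2, foot1, foot2, tol=0.5):
    """Declare two foot detections from different views the same
    person if their ground-plane positions agree within tol."""
    p1, p2 = apply_homography(H1, foot1), apply_homography(H2, foot2)
    return float(np.linalg.norm(p1 - p2)) <= tol

# toy homographies (assumed): view 1 maps by a pure scale of 0.5,
# view 2 by a translation of (-2, -1) on the plane
H1 = np.array([[0.5, 0, 0], [0, 0.5, 0], [0, 0, 1.0]])
H2 = np.array([[1.0, 0, -2], [0, 1.0, -1], [0, 0, 1.0]])
same = correspond(H1, H2, (8, 6), (6, 4))    # both land on (4, 3)
diff = correspond(H1, H2, (8, 6), (20, 20))  # far apart on the plane
```

In the paper the thresholding happens on intersections of principal axes with transformed ground lines; the distance check on the top-view plane is the same.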

Emergency Situation Detection using Images from Surveillance Camera and Mobile Robot Tracking System (감시카메라 영상기반 응급상황 탐지 및 이동로봇 추적 시스템)

  • Han, Tae-Woo;Seo, Yong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.9 no.5, pp.101-107, 2009
  • In this paper, we describe a method of detecting emergency situations using images from surveillance cameras and propose a mobile robot tracking system for detailed examination of such situations. We track several persons and recognize their actions by analyzing image sequences acquired from fixed cameras on all sides of a building. When an emergency situation is detected, a mobile robot moves to and closely examines the place where the emergency occurred. To recognize the actions of several persons in the surveillance camera image sequences, we need to track and manage a list of regions regarded as human appearances. Regions of interest are segmented from the background using a MOG (Mixture of Gaussians) model and continuously tracked using an appearance model in each image. We then construct an MHI (Motion History Image) for each tracked person from the silhouette information of the region blobs and model their actions. An emergency situation is finally detected by feeding this information to a neural network. We also implement mobile robot tracking using the distance between the person and the robot.
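
The MHI construction mentioned above follows a simple recurrence: pixels covered by the current silhouette are set to a maximum timestamp value, and all other pixels decay, so recent motion appears brighter than old motion. A minimal numpy sketch (the tau and decay values are assumptions, not the paper's settings):

```python
import numpy as np

def update_mhi(mhi, silhouette, tau=255, delta=32):
    """Motion History Image update: set pixels inside the current
    silhouette to tau and fade all other pixels by delta."""
    return np.where(silhouette, float(tau), np.maximum(mhi - delta, 0.0))

# two frames of a toy silhouette moving along the diagonal
mhi = np.zeros((4, 4))
s1 = np.zeros((4, 4), bool); s1[0, 0] = True
s2 = np.zeros((4, 4), bool); s2[1, 1] = True
mhi = update_mhi(mhi, s1)
mhi = update_mhi(mhi, s2)
```

The resulting intensity gradient along the motion path is what the action model (and ultimately the neural network) consumes.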

Registration System of 3D Footwear data by Foot Movements (발의 움직임 추적에 의한 3차원 신발모델 정합 시스템)

  • Jung, Da-Un;Seo, Yung-Ho;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.44 no.6, pp.24-34, 2007
  • With the growth of IT and changes in daily life, application systems that provide easy access to information have been developed. In this paper, we propose an application system that registers a 3D footwear model onto images from a monocular camera. In general, human motion analysis has focused on body movement; this system instead investigates a new method based on foot movement. The paper presents the system pipeline and experimental results. To project the 3D shoe model data onto the 2D foot plane, the system comprises foot tracking, a projection formulation, and pose estimation, divided into a 2D image analysis stage and a 3D pose estimation stage. For foot tracking, we propose a method that finds a fixed point from the characteristics of the foot, and we derive a geometric expression relating 2D and 3D coordinates using a monocular camera without camera calibration. We built the application system and measured the distance error, confirming that the registration works well.
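
The projection from the 3D shoe model to the 2D image can be illustrated with a generic pinhole camera model. Note this is only a textbook sketch: the paper derives its own expression that avoids camera calibration, whereas the focal length and principal point below are assumed values:

```python
import numpy as np

def project_points(P3, f=500.0, cx=320.0, cy=240.0):
    """Project 3D model points (camera coordinates, Z > 0) onto the
    image plane with a pinhole model: u = f*X/Z + cx, v = f*Y/Z + cy."""
    P3 = np.asarray(P3, float)
    u = f * P3[:, 0] / P3[:, 2] + cx
    v = f * P3[:, 1] / P3[:, 2] + cy
    return np.stack([u, v], axis=1)

# a model point on the optical axis and one offset point, both 5 units away
pts = project_points([[0.0, 0.0, 5.0], [1.0, 2.0, 5.0]])
```

Registration then amounts to choosing the model pose whose projected points best match the tracked 2D foot features.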

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat;Chalidabhongse, Thanarat
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2005.06a, pp.1202-1205, 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person in video using multiple cues: height and clothing colors. The method does not require a constrained target pose or a fully frontal face image. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while determining whether it is a human and which of the registered persons in the database it is. To segment the moving target from the background scene, we employ a background subtraction technique with spatial filtering. Once the target is segmented, we align it with the generic human cardboard model to verify that it is a human. If so, the cardboard model is also used to segment the body parts and obtain salient features such as head, torso, and legs. The whole-body silhouette is analyzed to obtain shape information such as height and slimness. We then use these multiple cues (at present, shirt color, trouser color, and body height) to recognize the target using a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned beforehand; when a person wears new clothes, the system fails to identify them, which shows that height alone is not enough to classify persons. We plan to extend the work by adding more cues, such as skin color and face recognition, utilizing the camera's zoom capability to obtain a high-resolution view of the face, and to evaluate the system with more subjects.
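
The multiple-cue matching can be sketched as a nearest-neighbour lookup over (height, shirt color, trouser color) feature vectors, with a rejection threshold that reproduces the paper's failure mode on unseen clothing. This is a simplified stand-in: the paper actually uses a supervised self-organization process, and the feature values and threshold below are invented:

```python
import numpy as np

def identify(target, database, max_dist=60.0):
    """Match a cue vector (height_cm, shirt RGB, trouser RGB) against
    registered persons; reject if nobody is close enough."""
    best, best_d = None, float("inf")
    for name, feats in database.items():
        d = min(np.linalg.norm(np.subtract(target, f)) for f in feats)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_dist else None

db = {
    "alice": [[165, 200, 30, 30, 40, 40, 120]],  # red shirt, blue trousers
    "bob":   [[182, 30, 180, 40, 90, 90, 90]],   # green shirt, grey trousers
}
hit = identify([166, 190, 35, 28, 42, 45, 118], db)   # learned clothing
miss = identify([170, 30, 30, 200, 10, 10, 10], db)   # unseen clothing
```

As in the paper's experiments, a subject in learned clothing matches, while unseen clothing falls outside the threshold because height alone cannot separate the subjects.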

Key Pose-based Proposal Distribution for Upper Body Pose Tracking (상반신 포즈 추적을 위한 키포즈 기반 예측분포)

  • Oh, Chi-Min;Lee, Chil-Woo
    • The KIPS Transactions: Part B, v.18B no.1, pp.11-20, 2011
  • Pictorial Structures (PS) is known as an effective method for recognizing and tracking human poses. In this paper, the upper body pose is likewise tracked by PS together with a particle filter (PF). However, the Markov chain-based dynamic motion model used in sequential estimation methods such as PF cannot effectively predict highly articulated upper body motions, so PF often fails to track the upper body pose. We therefore propose a key pose-based proposal distribution that predicts particles properly from the similarities between key poses and the upper body silhouette. Experimental results confirm a 70.51% performance improvement over the conventional method.
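
A key pose-based proposal distribution of this kind can be sketched as sampling particles from a mixture of Gaussians centred on the key poses, with mixture weights proportional to each key pose's silhouette similarity. The pose vectors, similarity values, and noise scale below are invented for illustration:

```python
import numpy as np

def sample_proposal(key_poses, similarities, n, sigma=0.05, seed=0):
    """Draw n particles from a Gaussian mixture centred on key poses;
    mixing weights are the normalized silhouette similarities."""
    rng = np.random.default_rng(seed)
    key_poses = np.asarray(key_poses, float)
    w = np.asarray(similarities, float)
    w = w / w.sum()
    idx = rng.choice(len(key_poses), size=n, p=w)
    noise = rng.normal(0.0, sigma, (n, key_poses.shape[1]))
    return key_poses[idx] + noise

# two assumed key poses (joint-angle vectors); the second matches the
# observed upper-body silhouette much better (similarity 0.9 vs 0.1)
key_poses = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
particles = sample_proposal(key_poses, [0.1, 0.9], n=1000)
```

Most particles land near the better-matching key pose, which is how the proposal concentrates prediction where a plain Markov motion model would spread it.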

A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications, v.9 no.1, pp.57-64, 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that require no physical devices. One of them is vision-based gesture recognition, which this paper addresses. We define gestures for interacting with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from the low-resolution hand images captured by a webcam. We model skin color with two Gaussian distributions in RGB color space and use a blob-matching method to detect and track the hands. Applying a flood-fill algorithm, we extract the hand silhouettes and recognize the hand shapes Thumb-Up, Palm, and Cross by detecting and analyzing their modes. Then, by analyzing the context of hand movement, we recognize five predefined one-hand or two-hand gestures. Assuming a single main user for accurate hand detection, the proposed gesture recognition method proved efficient and accurate in many real-time demos.
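
The flood-fill step that extracts a connected hand silhouette from the skin-color mask can be sketched in a few lines of pure Python; the toy mask and 4-connectivity below are illustrative assumptions:

```python
def flood_fill(mask, seed):
    """Collect the 4-connected region of truthy cells containing the
    seed, as used to pull one hand silhouette out of a skin mask."""
    h, w = len(mask), len(mask[0])
    region, stack = set(), [seed]
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if not mask[y][x]:          # not skin-colored
            continue
        region.add((y, x))
        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region

# toy skin mask with two disjoint blobs; the seed selects only one
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
hand = flood_fill(mask, (0, 0))
```

The extracted region is the silhouette whose modes are then analyzed to classify Thumb-Up, Palm, or Cross.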