• Title/Summary/Keyword: Occluded Object Tracking


Relation Tracking of Occluded objects using a Perspective Depth (투시적 깊이를 활용한 중첩된 객체의 관계추적)

  • Park, Hwa-Jin
    • Journal of Digital Contents Society / v.16 no.6 / pp.901-908 / 2015
  • Networked multi-camera CCTV systems are required to effectively trace long-term abnormal behaviors such as stalking. However, occlusion events, which often occur during tracking, may cause critical errors such as the cessation of tracking or the tracking of wrong objects. Using ordinary installed CCTVs, this study therefore aims to maintain continuous relation tracking by recognizing the distinctive features of each object and its perspective projection depth, thereby addressing the occluded-object problem. In addition, the study covers occlusion events between the targeted object and stationary background objects such as street lights or walls.
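
The abstract gives no implementation detail, so the following is only a minimal Python sketch of the general idea, assuming that under a fixed CCTV geometry the bottom edge of an object's bounding box can stand in for its perspective projection depth (lower in the image means closer to the camera). The function names, boxes, and thresholds are hypothetical, not from the paper.

```python
# A minimal sketch (not the paper's implementation): when two tracked boxes
# overlap, the one with the greater depth proxy is marked as occluded and its
# identity is kept alive instead of being dropped.

def perspective_depth(box, image_height):
    """Rough depth proxy: distance from the image bottom to the box bottom."""
    x, y, w, h = box
    return image_height - (y + h)

def boxes_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def label_occlusions(tracks, image_height):
    """tracks: dict of track_id -> (x, y, w, h). Returns the ids flagged as occluded."""
    occluded = set()
    ids = list(tracks)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if boxes_overlap(tracks[a], tracks[b]):
                da = perspective_depth(tracks[a], image_height)
                db = perspective_depth(tracks[b], image_height)
                occluded.add(a if da > db else b)   # the deeper object is hidden
    return occluded

if __name__ == "__main__":
    tracks = {"person_1": (100, 200, 40, 120), "person_2": (110, 240, 40, 140)}
    print(label_occlusions(tracks, image_height=480))  # {'person_1'}
```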

Occluded Object Motion Estimation System based on Particle Filter with 3D Reconstruction

  • Ko, Kwang-Eun;Park, Jun-Heong;Park, Seung-Min;Kim, Jun-Yeup;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.1 / pp.60-65 / 2012
  • This paper presents a motion estimation and tracking system for occluded objects in dynamic image sequences using a particle filter with 3D reconstruction. A unique characteristic of this study is its ability to cope with partial occlusion through continuous motion estimation with a particle filter, inspired by the mirror neuron system in the human brain. To update prior knowledge about the shape or motion of objects, a fundamental 3D-reconstruction-based occlusion tracing method is first applied and object landmarks are determined. An optical-flow-based motion vector is then estimated from the movement of the landmarks. When arbitrary partial occlusions occur, the continuous motion of the hidden parts of the object can be estimated by the particle filter with optical flow. The resistance of the resulting estimation to partial occlusions enables more accurate detection and handling of more severe occlusions.
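
As a rough illustration of the particle-filter step described above (not the authors' system), here is a generic constant-velocity particle filter in Python/NumPy. It assumes the measurement is the optical-flow landmark position; during a simulated occlusion the update is skipped and the filter keeps predicting. The noise values and trajectory are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

class ParticleTracker:
    """Constant-velocity particle filter over the state (x, y, vx, vy)."""

    def __init__(self, init_xy, n=500, pos_noise=2.0, vel_noise=0.5):
        self.p = np.zeros((n, 4))
        self.p[:, :2] = init_xy + rng.normal(0, pos_noise, (n, 2))
        self.pos_noise, self.vel_noise = pos_noise, vel_noise

    def predict(self):
        self.p[:, :2] += self.p[:, 2:]                        # move by velocity
        self.p[:, :2] += rng.normal(0, self.pos_noise, self.p[:, :2].shape)
        self.p[:, 2:] += rng.normal(0, self.vel_noise, self.p[:, 2:].shape)

    def update(self, measured_xy, meas_std=5.0):
        d2 = np.sum((self.p[:, :2] - measured_xy) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / meas_std ** 2)
        w /= w.sum()
        idx = rng.choice(len(self.p), size=len(self.p), p=w)  # resample by weight
        self.p = self.p[idx]

    def estimate(self):
        return self.p[:, :2].mean(axis=0)

if __name__ == "__main__":
    pf = ParticleTracker(init_xy=np.array([50.0, 50.0]))
    for t in range(20):
        pf.predict()
        landmark = np.array([50.0 + 3 * t, 50.0]) if t < 10 else None  # occluded after t=10
        if landmark is not None:
            pf.update(landmark)
        print(t, pf.estimate().round(1))
```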

Occluded Object Motion Tracking Method based on Combination of 3D Reconstruction and Optical Flow Estimation (3차원 재구성과 추정된 옵티컬 플로우 기반 가려진 객체 움직임 추적방법)

  • Park, Jun-Heong;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.537-542 / 2011
  • A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. We propose a 3D reconstruction method for occluded object motion tracking that, like the mirror neuron system, continues to "fire" while the object is hidden. To model a system that recognizes intention through this firing effect, we calculate depth information from stereo images captured by a stereo camera and reconstruct three-dimensional data. The movement direction of the object is estimated by optical flow applied to the image data created by the 3D reconstruction. For the 3D reconstruction that enables tracing of the occluded part, image data is first acquired with the stereo camera. The optical flow result is made robust to noise by a Kalman filter estimation algorithm. The reconstructed 3D image data is saved as a history during motion tracking of the object. When the whole object, or some part of it, disappears from the stereo camera view because of other objects, it is restored by retrieving image data from the history of saved past images, and the motion of the object continues to be tracked.
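
The Kalman-smoothing step mentioned in the abstract can be sketched with OpenCV's cv2.KalmanFilter on a synthetic landmark track. This is only an illustrative assumption of how the noisy optical-flow result might be filtered; the stereo 3D reconstruction and the image history are not shown, and all constants are made up.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter smoothing a noisy 2D point track, e.g. the
# position of an optical-flow landmark returned by cv2.calcOpticalFlowPyrLK.
kf = cv2.KalmanFilter(4, 2)                          # state (x, y, vx, vy), measurement (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 5.0
kf.errorCovPost = np.eye(4, dtype=np.float32)
kf.statePost = np.array([[10.0], [20.0], [0.0], [0.0]], np.float32)  # start near the landmark

rng = np.random.default_rng(1)
for t in range(30):
    prediction = kf.predict()                        # prior estimate for this frame
    true_xy = np.array([10.0 + 2 * t, 20.0 + t])     # synthetic landmark trajectory
    visible = t < 10 or t >= 14                      # frames 10-13 simulate an occlusion
    if visible:
        noisy = (true_xy + rng.normal(0, 3, 2)).astype(np.float32).reshape(2, 1)
        estimate = kf.correct(noisy)                 # posterior after the measurement
    else:
        estimate = prediction                        # occluded: keep the prediction
    print(t, estimate[:2].ravel().round(1))
```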

Tracking and Face Recognition of Multiple People Based on GMM, LKT and PCA

  • Lee, Won-Oh;Park, Young-Ho;Lee, Eui-Chul;Lee, Hee-Kyung;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.449-471 / 2012
  • In intelligent surveillance systems, it is necessary to robustly track multiple people. Most previous studies adopted a Gaussian mixture model (GMM) for discriminating the object from the background. However, this has the weakness that its performance is affected by illumination variations, and shadow regions can be merged with the object. In addition, when two foreground objects overlap, the GMM method cannot correctly discriminate the occluded regions. To overcome these problems, we propose a new method of tracking and identifying multiple people. The proposed research is novel in the following three ways compared to previous research. First, illumination variations and shadow regions are reduced by an illumination normalization based on median and inverse filtering of the L*a*b* image. Second, multiple occluded and overlapping people are tracked by combining the GMM in the still image with the Lucas-Kanade-Tomasi (LKT) method in successive images. Third, with the proposed human tracking and existing face detection and recognition methods, the tracked multiple people are successfully identified. The experimental results show that the proposed method can track and recognize multiple people accurately.
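
A minimal sketch of how the GMM-plus-LKT combination could look in OpenCV (cv2.createBackgroundSubtractorMOG2 for the per-frame foreground, cv2.calcOpticalFlowPyrLK for point propagation between frames). It omits the paper's L*a*b* illumination normalization and the PCA-based face recognition; the synthetic clip and all parameters are invented.

```python
import cv2
import numpy as np

def track(frames):
    """Combine GMM foreground segmentation (per still frame) with LKT optical
    flow (between successive frames), as the abstract describes at a high level."""
    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    prev_gray, prev_pts = None, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fg = mog.apply(frame)                                    # GMM foreground mask
        fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop the shadow label (127)
        if prev_pts is not None and len(prev_pts):
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
            moved = nxt[status.ravel() == 1]                     # LKT-propagated points
            print("tracked points:", len(moved))
        # re-detect corners inside the current foreground for the next frame
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=5, mask=fg)
        prev_gray = gray

if __name__ == "__main__":
    # tiny synthetic clip: a bright square moving over a dark background
    frames = []
    for t in range(10):
        img = np.zeros((120, 160, 3), np.uint8)
        cv2.rectangle(img, (20 + 5 * t, 40), (50 + 5 * t, 80), (255, 255, 255), -1)
        frames.append(img)
    track(frames)
```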

Study on Underwater Object Tracking Based on Real-Time Recurrent Regression Networks Using Multi-beam Sonar Images (실시간 순환 신경망 기반의 멀티빔 소나 이미지를 이용한 수중 물체의 추적에 관한 연구)

  • Lee, Eon-ho;Lee, Yeongjun;Choi, Jinwoo;Lee, Sejin
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.8-15 / 2020
  • This research is a case study of underwater object tracking based on real-time recurrent regression networks (Re3). Re3 is built around the concept of generic object tracking, which makes the model very effective to apply to unclear underwater sonar images. Because the model pursues tracking rather than per-frame detection, it also avoids the computational load that limits detection-based approaches. The model is also highly intuitive, so it maintains excellent continuity of tracking even if the tracked object temporarily becomes partially occluded or faded. Four types of multi-beam sonar image datasets are used: (a) a dummy object floating in the testbed; (b) a dummy object settled on the sea floor; (c) a tire settled on the bottom of the testbed; and (d) multiple objects settled on the bottom of the testbed. For this study, experiments were conducted to obtain underwater sonar images from the sea and an underwater testbed, and the validity of using noisy underwater sonar images for robust object tracking was tested.
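
Re3 itself is a much deeper network; the toy PyTorch module below only makes the "recurrent regression" idea concrete: per-frame crop features feed an LSTM whose hidden state carries the object across frames, and a linear head regresses a box. The architecture sizes and names are hypothetical, not the published Re3 design.

```python
import torch
import torch.nn as nn

class TinyRecurrentTracker(nn.Module):
    """Toy stand-in for a recurrent regression tracker in the spirit of Re3."""

    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())           # -> 32 * 4 * 4 = 512 features
        self.lstm = nn.LSTM(512, hidden, batch_first=True)   # carries appearance/motion state
        self.head = nn.Linear(hidden, 4)                     # regressed (x, y, w, h)

    def forward(self, crops, state=None):
        # crops: (batch, time, 3, H, W) sequence of per-frame search crops
        b, t = crops.shape[:2]
        feats = self.encoder(crops.flatten(0, 1)).view(b, t, -1)
        out, state = self.lstm(feats, state)
        return self.head(out), state                         # one box per frame + LSTM state

if __name__ == "__main__":
    net = TinyRecurrentTracker()
    crops = torch.randn(1, 8, 3, 64, 64)                     # 8-frame dummy sonar clip
    boxes, state = net(crops)
    print(boxes.shape)                                        # torch.Size([1, 8, 4])
```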

Object Tracking for Elimination using LOD Edge Maps Generated from Canny Edge Maps (캐니 에지 맵을 LOD로 변환한 맵을 이용하여 객체 소거를 위한 추적)

  • Jang, Young-Dae;Park, Ji-Hun
    • Annual Conference of KIPS / 2007.05a / pp.333-336 / 2007
  • We propose a simple method for tracking a nonparameterized subject contour in a single video stream with a moving camera and changing background, and then present a method to eliminate the tracked contour object by replacing it with the background scene obtained from other frames. Our method consists of two parts: first we track the object using LOD (level-of-detail) Canny edge maps; then we generate the background of each image frame and replace the tracked object in the scene with a background image from another frame that is not occluded by the tracked object. Our tracking method is based on level-of-detail (LOD) modified Canny edge maps and graph-based routing operations on the LOD maps. To reduce side effects caused by irrelevant edges, we start our basic tracking from strong Canny edges generated from large image intensity gradients of the input image, and obtain more edge pixels as we move down the LOD hierarchy. LOD Canny edge pixels become nodes in the routing, and the LOD values of adjacent edge pixels determine the routing costs between nodes. We find the best route along the Canny edge pixels, favoring stronger ones. Accurate tracking is achieved by reducing the effects of irrelevant edges through the selection of stronger edge pixels, thereby relying on the current-frame edge pixels as much as possible. This approach is based on computing camera motion. Our experimental results show that the method works well for moderate camera movement with small object shape changes.
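
A rough sketch of the LOD Canny idea (not the authors' implementation): run Canny with progressively looser thresholds, record for each edge pixel the strongest level at which it appears, and make routing between adjacent edge pixels cheaper for stronger edges. The threshold pairs and the cost function are assumptions.

```python
import cv2
import numpy as np

def lod_canny_maps(gray, levels=((200, 250), (150, 200), (100, 150), (50, 100))):
    """Build a level-of-detail edge map: level 1 keeps only the strongest Canny
    edges and each later level adds weaker ones. Pixels keep the smallest
    (strongest) level at which they first appear; 0 means 'not an edge'."""
    lod = np.zeros(gray.shape, np.uint8)
    for level, (lo, hi) in enumerate(levels, start=1):
        edges = cv2.Canny(gray, lo, hi)
        newly = (edges > 0) & (lod == 0)           # keep the strongest level per pixel
        lod[newly] = level
    return lod

def routing_cost(lod, a, b):
    """Cost of stepping between two adjacent edge pixels: weaker edges cost more."""
    return lod[a] + lod[b] if lod[a] and lod[b] else np.inf

if __name__ == "__main__":
    img = np.zeros((100, 100), np.uint8)
    cv2.circle(img, (50, 50), 30, 255, 2)           # a synthetic high-contrast contour
    lod = lod_canny_maps(img)
    print("edge pixels per LOD level:", np.bincount(lod.ravel())[1:])
```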

Object Tracking Using Information Fusion (정보융합을 이용한 객체 추적)

  • Lee, Jin-Hyung;Jo, Seong-Won;Kim, Jae-Min;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.666-671 / 2008
  • In this paper, we propose a new method for tracking objects continuously and successively based on the fusion of region information, color information, and motion templates when multiple objects are occluded and split. For each frame, the color template is updated and compared with the present object. The predicted region, dynamic template, and color histogram are used to classify the objects. The vertical histogram of the silhouettes is analyzed to determine whether the foreground region contains multiple objects. The proposed method recognizes the objects to be tracked more correctly.
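
Two of the cues described above can be sketched directly with OpenCV/NumPy: color-histogram comparison between an object template and a candidate patch, and the vertical projection of a foreground silhouette used to decide whether a blob contains more than one person. The bin counts, thresholds, and test data are invented.

```python
import cv2
import numpy as np

def color_similarity(patch_a, patch_b):
    """Compare two BGR patches by hue-saturation histogram correlation."""
    hists = []
    for patch in (patch_a, patch_b):
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(h, h)
        hists.append(h)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

def count_people_in_blob(fg_mask):
    """Guess whether a foreground blob holds one or several people by counting
    runs of tall columns in the vertical projection of the silhouette."""
    columns = (fg_mask > 0).sum(axis=0)
    occupied = columns > 0.2 * columns.max()        # columns tall enough to be a body
    runs, inside = 0, False
    for col in occupied:
        if col and not inside:
            runs, inside = runs + 1, True
        elif not col:
            inside = False
    return runs

if __name__ == "__main__":
    mask = np.zeros((100, 120), np.uint8)
    mask[20:90, 10:40] = 255                        # two separate silhouettes in one blob
    mask[20:90, 70:100] = 255
    print("people in blob:", count_people_in_blob(mask))        # 2
    a = np.full((20, 20, 3), (0, 0, 255), np.uint8)             # red patch
    b = np.full((20, 20, 3), (255, 0, 0), np.uint8)             # blue patch
    print("same:", round(color_similarity(a, a), 2), "different:", round(color_similarity(a, b), 2))
```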

A Robust Algorithm for Tracking Non-rigid Objects Using Deformed Template and Level-Set Theory (템플릿 변형과 Level-Set이론을 이용한 비강성 객체 추적 알고리즘)

  • Kim, Jong-Ryul;Na, Hyun-Tae;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.3 / pp.127-136 / 2003
  • In this paper, we propose a robust model- and edge-based object tracking algorithm using a deformed template and Level-Set theory. The proposed algorithm can track objects in the presence of background variation, object deformation, and occlusion. First, we design a new potential difference energy function (PDEF) composed of two terms: inter-region distance and edge values. This function is used to estimate and refine the object shape. The first step approximately estimates the shape and location of the template object under the assumption that the object changes its shape according to an affine transform. The second step refines the object shape to fit the real object accurately, using the potential energy map and a modified Level-Set speed function. The experimental results show that the proposed algorithm can track non-rigid objects under various conditions, such as highly deformable objects, objects with large background variation, and occluded objects.
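
Only the first step (approximating the template shape with an affine transform) is easy to sketch without the paper's PDEF and Level-Set machinery. The snippet below assumes corresponding template and observed contour points are already available and fits the affine map with cv2.estimateAffine2D; everything else is invented for illustration.

```python
import cv2
import numpy as np

def affine_fit_template(template_pts, observed_pts):
    """First-stage shape estimate: fit an affine transform mapping the template
    contour onto the observed contour points, then warp the template with it."""
    M, inliers = cv2.estimateAffine2D(template_pts, observed_pts, method=cv2.RANSAC)
    warped = template_pts @ M[:, :2].T + M[:, 2]   # apply the 2x3 affine to each point
    return M, warped

if __name__ == "__main__":
    # synthetic template contour (a square) and a rotated, shifted observation
    template = np.array([[0, 0], [40, 0], [40, 40], [0, 40]], np.float32)
    angle = np.deg2rad(10)
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]], np.float32)
    observed = template @ R.T + np.float32([15, 5])
    M, warped = affine_fit_template(template, observed)
    print(np.abs(warped - observed).max())          # ~0: template now matches the observation
```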

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions: Part B / v.17B no.5 / pp.355-362 / 2010
  • Face tracking estimates the motion of a non-rigid face together with a rigid head in 3D and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAMs have been widely used to segment and track deformable objects, but many difficulties remain; in particular, they often tend to diverge or converge to local minima when the target object is self-occluded, or partially or completely occluded. To address this problem, we utilize the scale-invariant feature transform (SIFT). SIFT is effective for self- and partial occlusion because it can find correspondences between feature points under partial loss, and its good global matching performance enables the AAM to continue tracking without re-initialization through complete occlusions. We also register SIFT features extracted from multi-view face images and use them during tracking to effectively track a face across large pose changes. Our proposed algorithm is validated by comparison with other algorithms under the above three kinds of occlusion.
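
A minimal sketch of the SIFT correspondence step on which the method relies: cv2.SIFT_create features matched with a brute-force matcher and Lowe's ratio test. The AAM fitting and the multi-view feature registry are not shown, and the toy images are invented.

```python
import cv2
import numpy as np

def relocalize_with_sift(reference_face, frame, ratio=0.75):
    """Match SIFT keypoints from a registered reference face image into the
    current frame; surviving matches can re-seed an AAM fit after occlusion."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference_face, None)
    kp_frm, des_frm = sift.detectAndCompute(frame, None)
    if des_ref is None or des_frm is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_ref, des_frm, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:  # Lowe's ratio test
            good.append((kp_ref[pair[0].queryIdx].pt, kp_frm[pair[0].trainIdx].pt))
    return good

if __name__ == "__main__":
    # toy example: the "frame" is the reference image shifted to the right
    ref = np.zeros((120, 120), np.uint8)
    cv2.putText(ref, "A", (30, 80), cv2.FONT_HERSHEY_SIMPLEX, 2, 255, 3)
    frame = np.roll(ref, 25, axis=1)
    print(len(relocalize_with_sift(ref, frame)), "matches")
```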

Object Tracking And Elimination Using Lod Edge Maps Generated from Modified Canny Edge Maps (수정된 캐니 에지 맵으로부터 만들어진 LOD 에지 맵을 이용한 물체 추적 및 소거)

  • Park, Ji-Hun;Jang, Yung-Dae;Lee, Dong-Hun;Lee, Jong-Kwan;Ham, Mi-Ok
    • The KIPS Transactions: Part B / v.14B no.3 s.113 / pp.171-182 / 2007
  • We propose a simple method for tracking a nonparameterized subject contour in a single video stream with a moving camera and changing background, and then present a method to eliminate the tracked contour object by replacing it with the background scene obtained from other frames. First we track the object using LOD (level-of-detail) Canny edge maps; then we generate the background of each image frame and replace the tracked object in the scene with a background image from another frame that is not occluded by the tracked object. Our tracking method is based on level-of-detail (LOD) modified Canny edge maps and graph-based routing operations on the LOD maps. We obtain more edge pixels as we move down the LOD hierarchy. Accurate tracking is achieved by reducing the effects of irrelevant edges through the selection of stronger edge pixels, thereby relying on the current-frame edge pixels as much as possible. The background scene of the first frame is determined from the camera motion between two image frames, and subsequent background scenes are computed from the previous ones. The computed background scenes are used to eliminate the tracked object from the scene: for the first frame we generate an approximated background, and backgrounds for subsequent frames are based on the first-frame background or previous frame images. This approach is based on computing camera motion. Our experimental results show that the method works well for moderate camera movement with small object shape changes.
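
The elimination step can be sketched independently of the tracking: given a mask of the tracked object and a background image (in the paper, composed from other frames via the estimated camera motion), the object pixels are simply overwritten. The helper name and synthetic data below are assumptions.

```python
import cv2
import numpy as np

def eliminate_object(frame, object_mask, background):
    """Remove the tracked object by copying background pixels under its mask.
    Here the background is simply passed in rather than reconstructed from
    other frames and camera motion, as the paper does."""
    mask3 = cv2.merge([object_mask] * 3) > 0
    out = frame.copy()
    out[mask3] = background[mask3]
    return out

if __name__ == "__main__":
    background = np.full((100, 100, 3), 60, np.uint8)          # plain synthetic background
    frame = background.copy()
    cv2.circle(frame, (50, 50), 20, (0, 0, 255), -1)           # the "tracked object"
    mask = np.zeros((100, 100), np.uint8)
    cv2.circle(mask, (50, 50), 22, 255, -1)                    # slightly dilated object mask
    cleaned = eliminate_object(frame, mask, background)
    print("object pixels left:", int((cleaned != background).any(axis=2).sum()))   # 0
```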