• Title/Summary/Keyword: 3D Object Tracking


Detection of the co-planar feature points in the three dimensional space (3차원 공간에서 동일 평면 상에 존재하는 특징점 검출 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.499-508 / 2023
  • In this paper, we propose a technique to estimate the coordinates of feature points that lie on a 2D planar object in three-dimensional space. The proposed method detects multiple 3D features from the image and excludes those that are not located on the plane. It estimates the planar homography between the planar object in 3D space and the camera image plane, and computes the back-projection error of each feature point on the planar object. Any feature point with a large error is considered an off-plane point and is excluded from the feature estimation phase. The proposed method is achieved on the basis of the planar homography alone, without any additional sensors or optimization algorithms. In the experiments, it was confirmed that the proposed method runs at more than 40 frames per second. In addition, compared to an RGB-D camera, there was no significant difference in processing speed, and it was verified that the frame rate was unaffected even when the number of detected feature points continuously increased.
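
The homography back-projection test described in this abstract can be sketched with NumPy as below; the 3-pixel error threshold and the point layout are illustrative assumptions, not values from the paper:

```python
import numpy as np

def backprojection_errors(H, plane_pts, image_pts):
    """Project plane points through homography H and measure the pixel error
    against the observed image points."""
    n = len(plane_pts)
    homog = np.hstack([plane_pts, np.ones((n, 1))])   # to homogeneous coords
    proj = (H @ homog.T).T
    proj = proj[:, :2] / proj[:, 2:3]                 # back to pixel coords
    return np.linalg.norm(proj - image_pts, axis=1)

def filter_coplanar(H, plane_pts, image_pts, thresh=3.0):
    """Keep only features whose back-projection error is below thresh;
    points with large error are treated as off-plane and excluded."""
    err = backprojection_errors(H, np.asarray(plane_pts, float),
                                np.asarray(image_pts, float))
    return err < thresh
```

For example, with an identity homography a feature observed far from its plane-predicted position is rejected, while on-plane features pass.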

The Sensory-Motor Fusion System for Object Tracking (이동 물체를 추적하기 위한 감각 운동 융합 시스템 설계)

  • Lee, Sang-Hee;Wee, Jae-Woo;Lee, Chong-Ho
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.3 / pp.181-187 / 2003
  • For moving objects equipped with environmental sensors, such as an object-tracking mobile robot with audio and video sensors, the environmental information acquired from the sensors keeps changing with the movements of the objects. In such cases, due to a lack of adaptability and high system complexity, conventional control schemes show limited control performance; sensory-motor systems, which can respond intuitively to various types of environmental information, are therefore desirable. To improve robustness, it is also desirable to fuse two or more types of sensory information simultaneously. In this paper, based on Braitenberg's model, we propose a sensory-motor fusion system that can trace moving objects adaptively under environmental changes. Owing to its direct connection structure, the sensory-motor fusion system can control each motor simultaneously, and neural networks are used to fuse information from various types of sensors. Even if the system receives noisy information from one sensor, it still works robustly, because information from the other sensors compensates for the noise through sensor fusion. To examine its performance, the sensory-motor fusion model is applied to an object-tracking four-legged robot equipped with audio and video sensors. The experimental results show that the sensory-motor fusion system can track moving objects robustly with a simpler control mechanism than model-based control approaches.
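
Braitenberg's crossed sensor-to-motor wiring, which this paper builds on, can be illustrated with a toy fusion rule; the fixed weights below stand in for the neural-network fusion actually used, and the two-element left/right stimulus interface is an assumption:

```python
def braitenberg_motors(video_lr, audio_lr, w_video=0.7, w_audio=0.3):
    """Crossed excitatory wiring (Braitenberg vehicle 2b): a stimulus on the
    left drives the right wheel harder, turning the robot toward the
    stimulus.  Two sensor modalities are fused by a simple weighted sum."""
    left_stim = w_video * video_lr[0] + w_audio * audio_lr[0]
    right_stim = w_video * video_lr[1] + w_audio * audio_lr[1]
    # left stimulus -> right motor, right stimulus -> left motor
    return (right_stim, left_stim)   # (left_motor, right_motor)
```

With a target on the robot's left in both modalities, only the right motor is driven, so the robot turns toward the target.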

Stereo Object Tracking and Multi-view Image Reconstruction System Using Disparity Motion Vector (시차 움직임 벡터에 기반한 스테레오 물체추적 및 다시점 영상복원 시스템)

  • Ko Jung-Hwan;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.2C / pp.166-174 / 2006
  • In this paper, a new stereo object tracking system using the disparity motion vector is proposed. In the proposed method, the time-sequential disparity motion vector is estimated from the disparity vectors extracted from the sequence of stereo input image pairs; using these disparity motion vectors, the area where the target object is located and its location coordinates are detected in the input stereo image. Based on this location data, the pan/tilt unit embedded in the stereo camera system can be controlled, and as a result, stereo tracking of the target object becomes possible. From experiments with two frames of stereo image pairs of 256×256 pixels, it is shown that the proposed stereo tracking system can adaptively track the target object with a low error ratio of about 3.05% on average between the detected and actual location coordinates of the target object.
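
The core idea, differencing successive disparity maps to localize the moving target for pan/tilt control, can be sketched as below; the change threshold and the centroid-based localization are simplifications of the paper's method:

```python
import numpy as np

def target_location(disp_t0, disp_t1, thresh=1.0):
    """Difference two successive disparity maps: pixels whose disparity
    (i.e. depth) changed belong to the moving target, and the centroid of
    that region is the location used to steer the pan/tilt camera."""
    motion = np.abs(disp_t1.astype(float) - disp_t0.astype(float))
    mask = motion > thresh
    if not mask.any():
        return None                     # nothing moved between frames
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean())       # (x, y) centroid of the moving region
```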

Occluded Object Motion Estimation System based on Particle Filter with 3D Reconstruction

  • Ko, Kwang-Eun;Park, Jun-Heong;Park, Seung-Min;Kim, Jun-Yeup;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.1 / pp.60-65 / 2012
  • This paper presents a motion estimation and tracking system for occluded objects in dynamic image sequences, using a particle filter with 3D reconstruction. A unique characteristic of this study is its ability to cope with partial occlusion through continuous motion estimation with a particle filter, inspired by the mirror neuron system in the human brain. To update prior knowledge about the shape or motion of objects, a fundamental 3D-reconstruction-based occlusion tracing method is first applied and object landmarks are determined, and an optical-flow-based motion vector is estimated from the movement of the landmarks. When arbitrary partial occlusions occur, the continuous motion of the hidden parts of the object can be estimated by the particle filter with optical flow. The resistance of the resulting estimation to partial occlusions enables more accurate detection and handling of more severe occlusions.
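
An occlusion-tolerant particle filter of this kind can be sketched in NumPy; the Gaussian likelihood, the noise scale, and the flow-vector interface below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, flow, measurement, noise=1.0):
    """One predict/update/resample cycle.  Particles are 2D positions and
    `flow` is the motion vector estimated from optical flow.  During full
    occlusion, call with measurement=None: the flow-driven prediction keeps
    the estimate of the hidden part moving."""
    # predict: propagate every particle by the flow vector plus noise
    particles = particles + flow + rng.normal(0, noise, particles.shape)
    if measurement is not None:
        # update: weight particles by a Gaussian likelihood of the measurement
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / noise ** 2)
        weights /= weights.sum()
        # resample: draw particles proportionally to their weights
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

def estimate(particles, weights):
    """Weighted-mean state estimate."""
    return np.average(particles, axis=0, weights=weights)
```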

A Robust Object Detection and Tracking Method using RGB-D Model (RGB-D 모델을 이용한 강건한 객체 탐지 및 추적 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services / v.18 no.4 / pp.61-67 / 2017
  • Recently, CCTV has been combined with areas such as big data, artificial intelligence, and image analysis to detect various abnormal behaviors and to detect and analyze the overall situation of objects such as people. Image analysis research for this intelligent video surveillance function is progressing actively. However, CCTV images using 2D information generally have limitations such as object misrecognition due to lack of topological information. This problem can be solved by adding the depth information of the object created by using two cameras to the image. In this paper, we perform background modeling using Mixture of Gaussian technique and detect whether there are moving objects by segmenting the foreground from the modeled background. In order to perform the depth information-based segmentation using the RGB information-based segmentation results, stereo-based depth maps are generated using two cameras. Next, the RGB-based segmented region is set as a domain for extracting depth information, and depth-based segmentation is performed within the domain. In order to detect the center point of a robustly segmented object and to track the direction, the movement of the object is tracked by applying the CAMShift technique, which is the most basic object tracking method. From the experiments, we prove the efficiency of the proposed object detection and tracking method using the RGB-D model.
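
The CAMShift step used above reduces to mean-shift iterations of a search window over a likelihood image; this sketch keeps the window size fixed (full CAMShift also adapts it) and the probability map is an illustrative stand-in for the segmented object:

```python
import numpy as np

def camshift_window(prob, window, iters=10):
    """Mean-shift iterations of the CAMShift tracker: repeatedly move the
    search window to the centroid of the probability map inside it.
    `window` is (x, y, w, h); `prob` is a 2D likelihood image (e.g. a
    back-projection of the segmented object's color model)."""
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()                       # zeroth moment (total mass)
        if m00 == 0:
            break                             # no evidence inside the window
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        cx = int((xs * roi).sum() / m00)      # centroid inside the window
        cy = int((ys * roi).sum() / m00)
        # re-center the window on the centroid, clamped to the image
        nx = min(max(x + cx - w // 2, 0), prob.shape[1] - w)
        ny = min(max(y + cy - h // 2, 0), prob.shape[0] - h)
        if (nx, ny) == (x, y):
            break                             # converged
        x, y = nx, ny
    return (x, y, w, h)
```

Starting the window at the object's previous position, a few iterations are enough to lock onto a nearby likelihood blob.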

Coordinate Calibration and Object Tracking of the ODVS (Omni-directional Image에서의 이동객체 좌표 보정 및 추적)

  • Park, Yong-Min;Nam, Hyun-Jung;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.408-413 / 2005
  • This paper presents a technique that extracts a moving object from omni-directional images and estimates the real coordinates of the moving object using a 3D parabolic coordinate transformation. For real-time processing, the moving object is extracted by the proposed Hue Histogram Matching algorithm. We demonstrate, with theoretical and experimental arguments, that the proposed technique can extract a moving object robustly against lighting changes and estimate approximate values of its real coordinates.
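
Hue-based histogram matching can be sketched as histogram back-projection on the hue channel; the OpenCV-style 0–179 hue range and the 16-bin quantization below are assumptions, not details from the paper:

```python
import numpy as np

def hue_histogram(hue_patch, bins=16):
    """Normalized hue histogram of the target model patch."""
    hist, _ = np.histogram(hue_patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def hue_backprojection(hue_img, model_hist, bins=16):
    """Score each pixel by how frequent its hue is in the target model.
    Working in hue only (discarding brightness) gives some robustness to
    illumination changes, which is the point of hue-based matching."""
    idx = (hue_img.astype(int) * bins) // 180   # quantize hue 0..179 to bins
    return model_hist[np.clip(idx, 0, bins - 1)]
```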


Digital Twin and Visual Object Tracking using Deep Reinforcement Learning (심층 강화학습을 이용한 디지털트윈 및 시각적 객체 추적)

  • Park, Jin Hyeok;Farkhodov, Khurshedjon;Choi, Piljoo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.145-156 / 2022
  • Nowadays, object tracking in hardware applications increasingly demands models that can cope with diverse and unpredictable tracking environments. In this paper, we build a virtual city environment using AirSim (Aerial Informatics and Robotics Simulation, CityEnvironment) and apply the DQN (Deep Q-Network) model of deep reinforcement learning in this virtual environment. The proposed object tracking DQN network observes the environment through a deep reinforcement learning model that receives continuous images taken by the virtual environment simulation system as input to control the operation of a virtual drone. The deep reinforcement learning model is pre-trained using various existing continuous image sets. Since these image sets are image data of real environments and objects, the virtual environment and the moving objects in it are implemented in 3D for tracking.

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa;Sharma, Siddharth;Lin, Sang-Lin;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.42-45 / 2010
  • This paper proposes an intelligent video surveillance system for human object tracking. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, the object in the video signal is extracted using background subtraction. Then, the object region is examined to determine whether it is human. For this recognition, the region-based shape descriptor of MPEG-7, the angular radial transform (ART), is used to learn and train the shapes of human bodies. When the object is decided to be a human or something to be investigated, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) controllable camera tracks the moving object using its motion information. The simulation is performed with real CCTV cameras and their communication protocol. According to the experiments, the proposed system is able to track the moving object (human) automatically, not only in the image domain but also in real 3-D space. The proposed system reduces the need for human supervisors and improves surveillance efficiency with computer vision techniques.


Tracking Moving Object using Hausdorff Distance (Hausdorff 거리를 이용한 이동물체 추적)

  • Kim, Tea-Sik;Lee, Ju-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.3 / pp.79-87 / 2000
  • In this paper, we propose a model-based moving object tracking algorithm for dynamic scenes. To adapt to shape changes of the moving object, the Hausdorff distance is applied as the measure of similarity between the model and the image. To reduce processing time, a 2D logarithmic search method is applied to locate the position of the moving object. In experiments on a running vehicle and a motorcycle, the mean square errors between the real position and the tracking result were 1150 and 1845, and the number of matching operations was reduced by an average of 1125 and 523 compared with the existing algorithm for the vehicle and motorcycle images, respectively. The results show that the proposed algorithm can track the moving object accurately.
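
The Hausdorff distance used as the model-to-image similarity measure can be sketched directly; this computes the symmetric distance between two point sets (e.g. model and image edge points), while the candidate positions at which it is evaluated would come from the paper's 2D logarithmic search:

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B): the farthest any point of A is from its nearest point of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise dists
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets, used as the
    similarity measure between the object model and the image."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```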


User classification and location tracking algorithm using deep learning (딥러닝을 이용한 사용자 구분 및 위치추적 알고리즘)

  • Park, Jung-tak;Lee, Sol;Park, Byung-Seo;Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.78-79 / 2022
  • In this paper, we propose a technique for classifying and tracking the location of each user through body-proportion analysis of the normalized skeletons of multiple users obtained with RGB-D cameras. To this end, each user's 3D skeleton is extracted from the 3D point cloud and its body-proportion information is stored. The stored body-proportion information is then compared with the body-proportion data computed from each frame, yielding a user classification and location tracking algorithm over the entire image.
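
The body-proportion comparison can be sketched as below; the joint names, the particular limb ratios, and the nearest-neighbour matching are illustrative assumptions rather than the paper's exact feature set:

```python
import numpy as np

def body_ratios(skeleton):
    """Limb-length ratios from a 3D skeleton (dict of joint -> xyz).
    Ratios against torso length, rather than absolute lengths, normalize
    away the user's distance from the camera."""
    def limb(a, b):
        return np.linalg.norm(np.asarray(skeleton[a]) - np.asarray(skeleton[b]))
    torso = limb("neck", "pelvis")
    return np.array([limb("shoulder_l", "elbow_l") / torso,   # upper arm / torso
                     limb("hip_l", "knee_l") / torso])        # thigh / torso

def classify(skeleton, enrolled):
    """Return the enrolled user whose stored ratio vector is nearest to the
    ratios measured in the current frame."""
    r = body_ratios(skeleton)
    return min(enrolled, key=lambda name: np.linalg.norm(enrolled[name] - r))
```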
