• Title/Summary/Keyword: Model based Object Tracking

Search results: 234 items (processing time 0.019 s)

인간의 지각적인 시스템을 기반으로 한 연속된 영상 내에서의 움직임 영역 결정 및 추적 (Object Motion Detection and Tracking Based on Human Perception System)

  • 정미영;최석림
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ
    • /
    • pp.2120-2123
    • /
    • 2003
  • This paper presents a moving object detection and tracking algorithm that uses edge information based on the human perceptual system. The human visual system recognizes shapes and objects easily and rapidly, and perceptual organization is believed to play an important role in this ability. We present an edge model (GCS) based on features extracted according to perceptual-organization principles, and extract edge information as defined by this model. Through this human-perception-inspired approach, a computer can recognize a moving object from edge information much as a human would.

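The idea of detecting motion from edge information can be sketched minimally as follows. This is a hypothetical illustration, not the paper's GCS edge model: it uses a crude horizontal-gradient edge map and flags edges that appear in the current frame but not the previous one.

```python
def edge_map(frame, thresh=1):
    """Crude horizontal-gradient edge map of a 2D grayscale frame."""
    h, w = len(frame), len(frame[0])
    return [[1 if x + 1 < w and abs(frame[y][x + 1] - frame[y][x]) >= thresh else 0
             for x in range(w)] for y in range(h)]

def moving_edges(prev, curr):
    """Edges present in curr but not in prev -> candidate moving-object pixels."""
    e_prev, e_curr = edge_map(prev), edge_map(curr)
    return [(y, x) for y in range(len(curr)) for x in range(len(curr[0]))
            if e_curr[y][x] and not e_prev[y][x]]
```

In practice a real system would use a 2D gradient operator and perceptual grouping of the edge pixels; the sketch only shows the edge-difference principle.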

모델 기반 카메라 추적에서 3차원 객체 모델링의 허용 오차 범위 분석 (Tolerance Analysis on 3-D Object Modeling Errors in Model-Based Camera Tracking)

  • 이은주;서병국;박종일
    • 방송공학회논문지
    • /
    • Vol. 18 No. 1
    • /
    • pp.1-9
    • /
    • 2013
  • In model-based camera tracking, the accuracy of the 3D object model used for tracking is critical. However, measuring and modeling a 3D object precisely generally demands painstaking work, and modeling without error is very difficult. On the other hand, even when a 3D object model contains modeling errors, the tracking error computed from those errors can differ from the tracking error perceived by the user's naked eye. This matters because object modeling for tracking can then be performed effectively within the user's tolerable error range, without requiring a costly high-precision modeling process. In this paper, we therefore compare and analyze, through user evaluation, the actual registration error caused by modeling errors and the registration error perceived by users in model-based camera tracking, and we discuss the tolerance range for 3D object modeling errors.

Object tracking algorithm of Swarm Robot System for using Polygon based Q-learning and parallel SVM

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 8 No. 3
    • /
    • pp.220-224
    • /
    • 2008
  • This paper presents a polygon-based Q-learning and parallel SVM algorithm for object search with multiple robots. We organized an experimental environment with one hundred mobile robots, two hundred obstacles, and ten objects, and sent the robots into a hallway strewn with obstacles to search for a hidden object. In the experiments, we used four control methods: a random search; a fusion model combining distance-based action making (DBAM) and area-based action making (ABAM) to determine the robots' next action; hexagon-based Q-learning; and dodecagon-based Q-learning with a parallel SVM algorithm that enhances the DBAM/ABAM fusion model. The results show that dodecagon-based Q-learning with the parallel SVM algorithm outperforms the other methods for object tracking.
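The polygon-based action space and the Q-learning update described above can be sketched as follows. This is a simplified illustration under assumed parameter names, not the authors' implementation, and the parallel SVM stage is omitted.

```python
import math

def polygon_actions(n):
    """Unit heading vectors at the n vertices of a regular polygon
    (n=6 -> hexagon-based, n=12 -> dodecagon-based action set)."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9, n_actions=12):
    """Standard tabular Q-learning update over the polygon action set."""
    best_next = max(Q.get((s_next, b), 0.0) for b in range(n_actions))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
```

A finer polygon (12 directions instead of 6) gives the robots a denser set of headings to choose from, which is the comparison the experiments make.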

Target identification for visual tracking

  • Lee, Joon-Woong;Yun, Joo-Seop;Kweon, In-So
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1996년도 Proceedings of the Korea Automatic Control Conference, 11th (KACC); Pohang, Korea; 24-26 Oct. 1996
    • /
    • pp.145-148
    • /
    • 1996
  • In moving-object tracking based on visual sensory feedback, a prerequisite is to determine which feature or object is to be tracked; identification of the feature or object therefore precedes tracking. In this paper, we focus on object identification rather than image-feature identification. Target identification is realized by finding the line segments that correspond to the hypothesized model segments of the target. The key idea is to combine the Mahalanobis distance with the geometric relationship between model segments and extracted line segments. We demonstrate the robustness and feasibility of the proposed target-identification algorithm by identifying and tracking a moving vehicle in a video traffic-surveillance system over images of a road scene.

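The key idea of gating candidate line segments with the Mahalanobis distance can be sketched as follows. This is a hypothetical sketch that parameterizes each segment as an (angle, offset) pair with a diagonal inverse covariance; the paper's geometric-relationship constraints are not reproduced.

```python
def mahalanobis2(x, mean, cov_inv):
    """Squared Mahalanobis distance for a 2-vector (e.g. segment angle, offset),
    with a diagonal inverse covariance for simplicity."""
    dx = (x[0] - mean[0], x[1] - mean[1])
    return dx[0] ** 2 * cov_inv[0] + dx[1] ** 2 * cov_inv[1]

def match_segments(model_segs, image_segs, cov_inv, gate=9.0):
    """Greedy gating: pair each model segment with the nearest image segment
    whose squared Mahalanobis distance falls inside the gate (chi-square-like)."""
    pairs = []
    for i, m in enumerate(model_segs):
        best = min(range(len(image_segs)),
                   key=lambda j: mahalanobis2(image_segs[j], m, cov_inv))
        if mahalanobis2(image_segs[best], m, cov_inv) <= gate:
            pairs.append((i, best))
    return pairs
```

A full implementation would estimate the covariance from segment-extraction uncertainty and enforce pairwise geometric consistency among the matched segments.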

CPU 환경에서의 실시간 동작을 위한 딥러닝 기반 다중 객체 추적 시스템 (Towards Real-time Multi-object Tracking in CPU Environment)

  • 김경훈;허준호;강석주
    • 방송공학회논문지
    • /
    • Vol. 25 No. 2
    • /
    • pp.192-199
    • /
    • 2020
  • The use of deep-learning-based object-tracking algorithms has increased in recent years. A typical system for tracking multiple objects in video is a cascade of an object-detection algorithm and an object-tracking algorithm. However, such a multi-module cascade demands a high-performance computing environment, which limits its deployment in real applications. In this paper, we propose a method that adjusts the computation schedule of the object-detection module in this detection-tracking cascade so that the system can run in real time even in a low-performance (CPU-only) computing environment.
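One common way to realize the idea of adjusting the detection module's computation, sketched here under assumed function names (the paper's actual scheduling policy may differ), is to run the heavy detector only on every K-th frame and let a light tracker propagate boxes in between:

```python
def schedule_frames(n_frames, detect_every=5):
    """Which module handles each frame: run the heavy detector only every
    `detect_every` frames; a light tracker propagates boxes in between."""
    return ['detect' if f % detect_every == 0 else 'track' for f in range(n_frames)]

def propagate_box(box, velocity):
    """Constant-velocity propagation of an (x, y, w, h) box between detections."""
    x, y, w, h = box
    return (x + velocity[0], y + velocity[1], w, h)
```

Raising `detect_every` trades tracking accuracy for throughput, which is the knob that makes CPU-only real-time operation feasible.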

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 1 No. 2
    • /
    • pp.78-87
    • /
    • 2012
  • In this paper, we address the tracking problem caused by camera motion and the rolling-shutter effects associated with CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed for simultaneously tracking objects and compensating for the effects of camera motion. Assuming that camera motion produces an affine transform of the image between two successive frames, the proposed method uses an elastic registration (ER) algorithm that considers global affine motion as well as the brightness and contrast between images. Because the camera motion is modeled globally, only the global affine model, not a local model, is considered, and only the brightness parameter is used for intensity variation; the contrast parameters of the original ER algorithm are ignored because the illumination change between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction-state error based on camera-motion estimation, (iii) update, and (iv) re-sampling. A larger number of particles is needed when camera motion generates a prediction-state error at the prediction step. The proposed method tracks the object of interest robustly by compensating for this error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly in consumer handheld imaging devices.

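The four-step filter described in the abstract can be sketched in one dimension as follows. This is a toy illustration with assumed names and a scalar affine camera-motion model x → a·x + b; the paper's ER-based affine estimation is replaced by a given (a, b).

```python
import math
import random

def particle_filter_step(particles, affine, measurement, noise=1.0, rng=None):
    """One iteration of the four-step filter: (i) predict, (ii) compensate the
    predicted state with the global camera-motion estimate, (iii) update the
    weights from the measurement, and (iv) resample."""
    rng = rng or random.Random(0)
    a, b = affine                      # 1D stand-in for the global affine model
    # (i) predict with process noise, (ii) compensate for camera motion
    pred = [a * (x + rng.gauss(0.0, noise)) + b for x in particles]
    # (iii) weight each particle by a Gaussian likelihood of the measurement
    w = [math.exp(-0.5 * (x - measurement) ** 2) for x in pred]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # (iv) resample particles in proportion to their weights
    return rng.choices(pred, weights=w, k=len(pred))
```

In the 2D case the affine compensation is a 2x3 matrix applied to each particle's position, but the predict/compensate/update/resample cycle is the same.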

Multiple Human Recognition for Networked Camera based Interactive Control in IoT Space

  • Jin, Taeseok
    • 한국산업융합학회 논문집
    • /
    • Vol. 22 No. 1
    • /
    • pp.39-45
    • /
    • 2019
  • We propose an active-color-model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an IoT space, a human-robot coexistence environment. An IoT space is a space in which many intelligent devices, such as computers and sensors (e.g., color CCD cameras), are distributed; human beings can be part of the IoT space as well. One of the main goals of an IoT space is to assist humans and provide services for them, and to do so it must be able to perform various human-related tasks. One of these is identifying and tracking multiple objects seamlessly. In an environment where many camera modules are distributed on a network, identifying an object is essential to tracking it, because different cameras may be needed as the object moves through the space and the IoT space must select the appropriate one. This paper describes appearance-based tracking of unknown objects with a distributed vision system in an IoT space. First, we discuss how object color information is obtained and how the color-appearance model is constructed from these data. We then discuss the global color model built from the local color information. The learning process within the global model and experimental results are also presented.
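The construction of a global color model from local per-camera color information, as described above, can be sketched like this. This is a minimal histogram-based illustration with assumed function names, not the authors' active color model.

```python
def normalize(hist):
    """Scale a color histogram so its bins sum to 1."""
    s = sum(hist) or 1.0
    return [h / s for h in hist]

def global_color_model(local_hists):
    """Fuse per-camera color histograms into one global appearance model
    by averaging the normalized local histograms."""
    locals_n = [normalize(h) for h in local_hists]
    n = len(locals_n)
    return [sum(h[i] for h in locals_n) / n for i in range(len(locals_n[0]))]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; used to re-identify the object across cameras."""
    return sum(min(a, b) for a, b in zip(normalize(h1), normalize(h2)))
```

When the object enters a new camera's view, comparing that camera's local histogram against the global model (e.g. by histogram intersection) is one way to decide whether it is the same object.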

Dynamic Tracking Aggregation with Transformers for RGB-T Tracking

  • Xiaohu, Liu;Zhiyong, Lei
    • Journal of Information Processing Systems
    • /
    • Vol. 19 No. 1
    • /
    • pp.80-88
    • /
    • 2023
  • RGB-thermal (RGB-T) tracking using unmanned aerial vehicles (UAVs) involves challenges such as object similarity, occlusion, fast motion, and motion blur. In this study, we propose dynamic tracking aggregation (DTA) as a unified framework for object detection and data association. The proposed approach obtains fused features based on a transformer model and an L1-norm strategy. To link the current frame with recent information, a dynamically updated embedding called the dynamic tracking identification (DTID) is used to model the iterative tracking process. For object association, we designed a long short-term tracking-aggregation module for dynamic feature propagation to match spatial and temporal embeddings. DTA achieved highly competitive performance in an experimental evaluation on public benchmark datasets.

Pan/Tilt스테레오 카메라를 이용한 이동 물체의 강건한 시각추적 (Robust 3D visual tracking for moving object using pan/tilt stereo cameras)

  • 조지승;정병묵;최인수;노상현;임윤규
    • 한국정밀공학회지
    • /
    • Vol. 22 No. 9
    • /
    • pp.77-84
    • /
    • 2005
  • In most vision applications, we are frequently confronted with determining the position of an object continuously. Target tracking generally requires two intertwined processes, tracking and control; each can be studied independently, but in an actual implementation their interaction must be considered to achieve robust performance. In this paper, robust real-time visual tracking against a complex background is considered. A common way to increase the robustness of a tracking system is to use known geometric models (e.g., CAD models) or to attach a marker. For objects of arbitrary shape, or when attaching a marker is difficult, we present a method that tracks the target easily once the color and shape of a part of the object are specified in advance. Robust detection is achieved by integrating voting-based visual cues. A Kalman filter is used to estimate the motion of the moving object in 3D space, and the algorithm is tested on a pan/tilt robot system. Experimental results show that the fusion of cues and motion estimation gives the tracking system robust performance.
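The Kalman-filter motion estimation mentioned above can be sketched per coordinate as follows: a scalar constant-velocity filter run independently on each of the three axes. Parameter names and noise values are illustrative assumptions, not those of the paper.

```python
def kalman_step(x, v, p, z, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a scalar constant-velocity Kalman filter:
    x, v are the position/velocity estimate, p the position variance,
    z the measured position, q/r the process/measurement noise variances."""
    # predict: propagate the state with the constant-velocity model
    x_pred = x + v * dt
    p_pred = p + q
    # update: blend the prediction with the measurement via the Kalman gain
    k = p_pred / (p_pred + r)
    innovation = z - x_pred
    x_new = x_pred + k * innovation
    v_new = v + k * innovation / dt    # crude velocity correction
    p_new = (1.0 - k) * p_pred
    return x_new, v_new, p_new
```

A full 3D tracker would carry a joint position-velocity state and covariance matrix per axis (or a 6D state), but the predict/update structure is identical.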

Subjective Evaluation on Perceptual Tracking Errors from Modeling Errors in Model-Based Tracking

  • Rhee, Eun Joo;Park, Jungsik;Seo, Byung-Kuk;Park, Jong-Il
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 4 No. 6
    • /
    • pp.407-412
    • /
    • 2015
  • In model-based tracking, an accurate 3D model of the target object or scene is usually assumed to be known or given in advance, and the accuracy of the model must be guaranteed for accurate pose estimation. In many application domains, on the other hand, end users are not strongly distracted by the tracking errors caused by certain levels of modeling error. In this paper, we examine perceptual tracking errors, which are predominantly caused by modeling errors, through subjective evaluation, and compare them with computational tracking errors. We also discuss the tolerance of modeling errors by analyzing their permissible ranges.