• Title/Summary/Keyword: Model based Object Tracking


A Study on Kohonen Network based on Path Determination for Efficient Moving Trajectory on Mobile Robot

  • Jin, Tae-Seok; Tack, HanHo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.10 no.2 / pp.101-106 / 2010
  • We propose an approach to estimating the real-time moving trajectory of an object. The object's position is obtained from the image data of a CCD camera, while a state estimator predicts the linear and angular velocities of the moving object. To overcome the uncertainties and noise in the input data, an Extended Kalman Filter (EKF) and neural networks are used cooperatively. Since the EKF must approximate the nonlinear system with a linear model in order to estimate the states, errors and uncertainties remain. To resolve this problem, Kohonen networks, which adapt well to memorizing the input-output relationship, are used for the nonlinear region. In addition, the Kohonen network, as a kind of neural network, can effectively adapt to dynamic variations and is robust against noise. This approach is motivated by the observation that the Kohonen network is a type of self-organizing map and is spatially oriented, which makes it suitable for determining the trajectories of moving objects. The superiority of the proposed algorithm over the EKF alone is demonstrated through real experiments.
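
To make the cooperation described above more concrete, the following is a minimal sketch of a Kalman-filter position/velocity estimate corrected by a small Kohonen map that memorizes the residual between estimate and measurement. The constant-velocity state layout, noise covariances, map size and learning rate are illustrative assumptions, not the authors' implementation (in particular, the actual EKF linearizes a nonlinear motion model, whereas this toy model is already linear).

```python
import numpy as np

# Assumed constant-velocity state [x, y, vx, vy]; dt and the noise levels are
# illustrative choices, not values from the paper.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)        # the CCD camera measures position only
Q = np.eye(4) * 1e-3                             # process noise (assumed)
R = np.eye(2) * 1e-2                             # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle of the (here linear) Kalman filter."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

class KohonenCorrector:
    """Tiny 1-D Kohonen map that memorizes the residual between the filter
    estimate and the observed position and returns a learned correction."""
    def __init__(self, n_units=20, lr=0.2):
        self.w = np.random.randn(n_units, 4) * 0.1   # each unit stores [x, y, err_x, err_y]
        self.lr = lr

    def update(self, pos, err):
        bmu = np.argmin(np.linalg.norm(self.w[:, :2] - pos, axis=1))  # best-matching unit
        self.w[bmu] += self.lr * (np.hstack([pos, err]) - self.w[bmu])
        return self.w[bmu, 2:]

x, P, som = np.zeros(4), np.eye(4), KohonenCorrector()
for z in np.random.randn(50, 2).cumsum(axis=0):      # stand-in for CCD position measurements
    x, P = kf_step(x, P, z)
    x[:2] += som.update(x[:2], z - x[:2])             # the map compensates the model error
```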

Control of an Omni-directional Mobile Robot Based on Camera Image (카메라 영상기반 전방향 이동 로봇의 제어)

  • Kim, Bong Kyu; Ryoo, Jung Rae
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.84-89 / 2014
  • In this paper, an image-based visual servo control strategy for tracking a target object is applied to a camera-mounted omni-directional mobile robot. To obtain the target angular velocity of each wheel from the image coordinates of the target object, a mathematical image Jacobian matrix is generally built from a camera model and the mobile robot's kinematics. Unlike the well-known mathematical image Jacobian, the proposed approach uses a simple rule-based control strategy to generate the target angular velocities of the wheels in conjunction with the size of the target object captured in the camera image. The camera image is divided into several regions, and a pre-defined rule corresponding to the region in which the target is located is applied to generate the target angular velocities of the wheels. The proposed algorithm is easy to implement in that no mathematical description of the image Jacobian is required and a small number of rules is sufficient for target tracking. Experimental results are presented together with a description of the overall experimental system.
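
A rough sketch of such a rule-based mapping from the target's image region to wheel velocities is shown below; the camera resolution, region boundaries, three-wheel layout and velocity values are assumptions made only for illustration.

```python
# Hypothetical rule table mapping the image region (and apparent size) of the
# target to pre-defined wheel angular velocities of a three-wheel omni robot.
IMG_W, IMG_H = 640, 480                          # assumed camera resolution

RULES = [  # (region predicate, (w1, w2, w3) in rad/s); values are illustrative
    (lambda cx, cy, size: size > 0.30,        (0.0,  0.0,  0.0)),   # close enough: stop
    (lambda cx, cy, size: cx < IMG_W / 3,     (-1.0, 0.5,  0.5)),   # target left: rotate left
    (lambda cx, cy, size: cx > 2 * IMG_W / 3, (1.0, -0.5, -0.5)),   # target right: rotate right
    (lambda cx, cy, size: True,               (0.0,  1.0, -1.0)),   # centered: drive forward
]

def wheel_velocities(cx, cy, size):
    """Return the wheel velocities of the first rule whose region matches."""
    for predicate, velocities in RULES:
        if predicate(cx, cy, size):
            return velocities
    return (0.0, 0.0, 0.0)

# Target detected at (120, 240), covering 10% of the image: the "left" rule fires.
print(wheel_velocities(120, 240, 0.10))          # -> (-1.0, 0.5, 0.5)
```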

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion (가상 객체 합성을 위한 단일 프레임에서의 안정된 카메라 자세 추정)

  • Park, Jong-Seung; Lee, Bum-Jong
    • The KIPS Transactions: Part B / v.13B no.5 s.108 / pp.499-506 / 2006
  • This paper describes a fast and stable camera pose estimation method for real-time augmented reality systems. From the feature tracking results of a marker on a single frame, we estimate the camera rotation matrix and the translation vector. For camera pose estimation, we use the shape factorization method based on the scaled orthographic projection model. In scaled orthographic factorization, all feature points of an object are assumed to lie at roughly the same distance from the camera, which means the selected reference point and the object shape affect the accuracy of the estimation. This paper proposes a flexible and stable selection method for the reference point. Based on the proposed method, we implemented a video augmentation system that inserts virtual 3D objects into the input video frames. Experimental results show that the proposed camera pose estimation method is fast and robust compared with previous methods and is applicable to various augmented reality applications.
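
As a rough illustration of the scaled orthographic (weak perspective) model underlying the factorization, the sketch below estimates a scaled rotation from 3D marker points and their 2D projections after subtracting a chosen reference point; the least-squares formulation and the lack of re-orthonormalization are simplifications, not the paper's method.

```python
import numpy as np

def weak_perspective_pose(model_pts, image_pts, ref_idx=0):
    """Estimate a scaled rotation from 3D marker points (N x 3) and their 2D
    projections (N x 2) under a scaled orthographic model, after subtracting
    the chosen reference point (whose selection the paper shows to matter)."""
    M = model_pts - model_pts[ref_idx]            # 3D offsets from the reference point
    m = image_pts - image_pts[ref_idx]            # 2D offsets from its projection

    # Least-squares fit of the 2x3 map A with m_i ~= A @ M_i.
    A, *_ = np.linalg.lstsq(M, m, rcond=None)     # solves M @ X = m, X is 3x2
    A = A.T
    s = 0.5 * (np.linalg.norm(A[0]) + np.linalg.norm(A[1]))   # average scale factor
    r1 = A[0] / np.linalg.norm(A[0])
    r2 = A[1] / np.linalg.norm(A[1])
    R = np.vstack([r1, r2, np.cross(r1, r2)])     # not re-orthonormalized in this sketch
    return R, s
```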

Evaluating and Extending Spatio-Temporal Database Functionalities for Moving Objects

  • Dodge, Somayeh; Alesheikh, Ali A.
    • Proceedings of the KSRS Conference / 2005.10a / pp.778-784 / 2005
  • The miniaturization of computing devices and advances in wireless communication and positioning systems are creating a wide and increasing range of database applications, such as location-based services, tracking and transportation systems, that have to deal with moving objects. Various types of queries can be posed on moving objects, including past, present and future queries. The key problem is how to model the location of moving objects and enable the Database Management System (DBMS) to predict the future location of a moving object. There is clearly a need for an innovative, generic, conceptually clean and application-independent approach to handling spatio-temporal data. This paper presents the behavioral aspects of spatio-temporal databases for managing and querying moving objects. Our objective is to implement and extend the Spatial TAU (STAU) system developed by Dr. Pelekis, which provides spatio-temporal functionality to an Object-Relational Database Management System to support the modeling and querying of moving objects. The results of the implementation are demonstrated in this paper.
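
To illustrate the idea of past, present and future queries on a moving object, here is a toy Python abstraction that interpolates between timestamped position samples and extrapolates beyond the last one; it is only a conceptual sketch and not the STAU data model or its query language.

```python
from bisect import bisect_left

class MovingPoint:
    """Toy moving-object abstraction: timestamped (t, x, y) samples with
    interpolation for past/present queries and extrapolation for future ones."""
    def __init__(self):
        self.samples = []                          # kept sorted by time

    def insert(self, t, x, y):
        self.samples.append((t, x, y))
        self.samples.sort()

    def locate(self, t):
        times = [s[0] for s in self.samples]
        i = bisect_left(times, t)
        if i == 0:
            return self.samples[0][1:]             # before the first sample
        if i >= len(self.samples):                 # future query: extrapolate the last segment
            (t0, x0, y0), (t1, x1, y1) = self.samples[-2], self.samples[-1]
        else:                                      # past/present query: interpolate
            (t0, x0, y0), (t1, x1, y1) = self.samples[i - 1], self.samples[i]
        a = (t - t0) / (t1 - t0)
        return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

taxi = MovingPoint()
taxi.insert(0, 0.0, 0.0)
taxi.insert(10, 100.0, 50.0)
print(taxi.locate(5))     # past query   -> (50.0, 25.0)
print(taxi.locate(15))    # future query -> (150.0, 75.0)
```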


Real-Time Face Tracking Algorithm Robust to Illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom; You, Bum-Jae; Lee, Seong-Whan; Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction and image-based control, and various algorithms have been developed over the years. In many cases, however, they show limited results under uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space without considering illumination changes. The new algorithm developed here, in contrast, constructs a 3D color model by analysing a large number of images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.
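
The contrast between a 2D membership function and a 3D color model can be sketched as follows: a quantized 3D RGB histogram is built from face pixels collected under many lighting conditions and then back-projected onto new frames. The bin count and the classification rule are assumptions for illustration only.

```python
import numpy as np

BINS = 16                                          # assumed quantization per RGB channel

def build_color_model(face_pixels):
    """face_pixels: (N, 3) uint8 RGB samples gathered under varied lighting."""
    idx = (face_pixels // (256 // BINS)).astype(int)
    hist = np.zeros((BINS, BINS, BINS))
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return hist / hist.max()                       # normalized membership values

def face_likelihood(image, model):
    """Back-project the 3D model: per-pixel membership of belonging to the face."""
    idx = (image // (256 // BINS)).astype(int)
    return model[idx[..., 0], idx[..., 1], idx[..., 2]]

# A tracker would threshold the likelihood map and follow the largest blob's centroid.
```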


Real-Time Eye Detection and Tracking Under Various Light Conditions (다양한 조명하에서 실시간 눈 검출 및 추적)

  • 박호식; 박동희; 남기환; 한준희; 나상동; 배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.227-232 / 2003
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. Based on combining the bright-pupil effect resulting from IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
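
A generic sketch of the mean shift step used in such tracking is given below: a window is shifted toward the centroid of a per-pixel eye-likelihood map until it converges. The window parameterization and stopping criteria are assumptions, not the paper's exact formulation.

```python
import numpy as np

def mean_shift(likelihood, window, n_iter=20, eps=0.5):
    """Shift a (row, col, height, width) window toward the likelihood-weighted
    centroid of the pixels it covers until the shift becomes small."""
    r, c, h, w = window
    for _ in range(n_iter):
        patch = likelihood[int(r):int(r) + h, int(c):int(c) + w]
        if patch.sum() == 0:
            break                                  # no support under the window
        rows, cols = np.indices(patch.shape)
        dr = (rows * patch).sum() / patch.sum() - (h - 1) / 2.0
        dc = (cols * patch).sum() / patch.sum() - (w - 1) / 2.0
        r, c = r + dr, c + dc
        if abs(dr) < eps and abs(dc) < eps:
            break
    return int(round(r)), int(round(c)), h, w
```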


Pointing Accuracy Analysis of Space Object Laser Tracking System at Geochang Observatory (거창 우주물체 레이저 추적 시스템의 추적마운트 지향 정밀도 분석)

  • Sung, Ki-Pyoung; Lim, Hyung-Chul; Park, Jong-Uk; Choi, Man-Soo; Yu, Sung-Yeol; Park, Eun-Seo; Ryou, Jae-Cheol
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.49 no.11 / pp.953-960 / 2021
  • The Korea Astronomy and Space Science Institute has been verifying a multipurpose laser tracking system with three functions, satellite laser tracking, adaptive optics and space debris laser tracking, for national space missions as well as scientific research. The system employs an optical telescope consisting of a 100 cm primary mirror and an altazimuth mount for fast and precise tracking. Precise pointing and tracking capability of the tracking mount is considered one of the important performance metrics in automatic tracking and precision application research, so a mount model must be analyzed to investigate pointing error factors and compensate for pointing error. In this study, we investigated various factors causing static pointing errors of the tracking mount and analyzed the pointing accuracy of the tracking mount at the Geochang observatory by estimating the mount parameters with the least squares method.
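
A minimal sketch of fitting a static mount model by least squares is shown below; the particular error terms (encoder zero offset, elevation-dependent tilt terms, axis non-perpendicularity) are common textbook examples and are not claimed to be the term set used for the Geochang system.

```python
import numpy as np

def mount_design_matrix(az, el):
    """Each column is one static error term of a simplified mount model;
    the terms here are illustrative, not the Geochang system's actual set."""
    return np.column_stack([
        np.ones_like(az),          # azimuth encoder zero offset
        np.cos(el),                # elevation-dependent term (e.g. axis tilt)
        np.sin(el),                # second elevation-dependent tilt term
        np.tan(el),                # non-perpendicularity of the axes
    ])

def fit_mount_model(az, el, d_az):
    """Least-squares fit of the model coefficients to measured azimuth residuals."""
    A = mount_design_matrix(az, el)
    coeffs, *_ = np.linalg.lstsq(A, d_az, rcond=None)
    rms = np.sqrt(np.mean((A @ coeffs - d_az) ** 2))   # post-fit pointing accuracy
    return coeffs, rms
```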

Tracking of Continuously Acting Hearts Using a Geometric Active Contour Model (기하 활성 모델을 이용한 연속적 심장 운동 추적)

  • 김성곤
    • Journal of the Institute of Convergence Signal Processing / v.3 no.4 / pp.17-22 / 2002
  • This paper uses an active contour model, based on level set algorithms and bidirectional curve evolution theory, to track the shape of the continuously beating heart. Most active contour models fail in boundary extraction because of their unstable movement at edge-gap locations. In this paper, we suggest a new active contour model that uses only the image intensity value together with an additional constraint needed for stable extraction. Our model runs successfully on both shape extraction and object tracking without any position constraint on the initial curve. It also shows stable movement and gives good results at weak or missing boundary locations.
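
A toy version of an intensity-driven, bidirectional level-set evolution is sketched below: the contour grows where a pixel resembles the current interior mean and shrinks where it resembles the exterior mean. The speed function, step size and omission of a curvature term are simplifications, not the paper's exact model.

```python
import numpy as np

def evolve_level_set(phi, image, n_iter=200, dt=0.1, nu=0.5):
    """Intensity-driven bidirectional evolution: the zero level set of phi
    grows where a pixel resembles the current interior mean and shrinks
    where it resembles the exterior mean."""
    for _ in range(n_iter):
        inside = image[phi > 0].mean() if (phi > 0).any() else 0.0
        outside = image[phi <= 0].mean() if (phi <= 0).any() else 0.0
        speed = (image - outside) ** 2 - (image - inside) ** 2   # signed expansion speed
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        phi = phi + dt * nu * speed * grad_norm
    return phi                     # the zero level set traces the tracked boundary
```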


Moving Object Trajectory based on Kohonen Network for Efficient Navigation of Mobile Robot

  • Jin, Tae-Seok
    • Journal of Information and Communication Convergence Engineering / v.7 no.2 / pp.119-124 / 2009
  • In this paper, we propose a novel approach to estimating the real-time moving trajectory of an object. The object's position is obtained from the image data of a CCD camera, while a state estimator predicts the linear and angular velocities of the moving object. To overcome the uncertainties and noise in the input data, an Extended Kalman Filter (EKF) and neural networks are used cooperatively. Since the EKF must approximate the nonlinear system with a linear model in order to estimate the states, errors and uncertainties remain. To resolve this problem, Kohonen networks, which adapt well to memorizing the input-output relationship, are used for the nonlinear region. In addition, the Kohonen network, as a kind of neural network, can effectively adapt to dynamic variations and is robust against noise. This approach is motivated by the observation that the Kohonen network is a type of self-organizing map and is spatially oriented, which makes it suitable for determining the trajectories of moving objects. The superiority of the proposed algorithm over the EKF alone is demonstrated through real experiments.

An Effective Moving Cast Shadow Removal in Gray Level Video for Intelligent Visual Surveillance (지능 영상 감시를 위한 흑백 영상 데이터에서의 효과적인 이동 투영 음영 제거)

  • Nguyen, Thanh Binh; Chung, Sun-Tae; Cho, Seongwon
    • Journal of Korea Multimedia Society / v.17 no.4 / pp.420-432 / 2014
  • In detecting moving objects from video sequences, an essential process for intelligent visual surveillance, the cast shadows accompanying moving objects differ from the background, so they may easily be extracted as part of the foreground object blobs, which causes errors in the localization, segmentation, tracking and classification of objects. Most previous research on moving cast shadow detection and removal utilizes color information about the objects and scenes. In this paper, we propose a novel cast shadow removal method for moving objects in gray level video data for visual surveillance applications. The proposed method exploits observations about the edge patterns in the shadow region of the current frame and the corresponding region of the background scene: a Laplacian edge detector is applied to the blob regions in the current frame and to the corresponding regions in the background scene, and the product of the two outcomes determines the moving object pixels among the pixels in the foreground mask. The minimal rectangular regions containing all blob pixels classified as moving object pixels are then extracted. The proposed method is simple but proves very effective in practice for Adaptive Gaussian Mixture Model-based object detection in intelligent visual surveillance applications, which is verified through experiments.
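
One plausible reading of the edge-comparison step is sketched below: the Laplacian is applied to the current frame and to the background, and foreground pixels whose edge structure is absent from the background are kept as object pixels. The threshold and the per-pixel combination rule are assumptions; the paper itself combines the two edge maps through a product.

```python
import numpy as np
from scipy.ndimage import laplace

def remove_cast_shadow(frame, background, fg_mask, edge_thresh=10.0):
    """frame, background: gray level images; fg_mask: boolean foreground mask.
    Keep a foreground pixel as an object pixel when the current frame shows
    edge structure that the background does not (threshold value assumed)."""
    edge_cur = np.abs(laplace(frame.astype(float)))
    edge_bg = np.abs(laplace(background.astype(float)))
    # Shadows darken the background but largely preserve its edge pattern,
    # so foreground pixels whose edges already exist in the background are dropped.
    return fg_mask & (edge_cur > edge_thresh) & (edge_bg <= edge_thresh)
```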