• Title/Summary/Keyword: Target Position Estimation

Robust 3D visual tracking for moving object using pan/tilt stereo cameras (Pan/Tilt스테레오 카메라를 이용한 이동 물체의 강건한 시각추적)

  • Cho, Che-Seung;Chung, Byeong-Mook;Choi, In-Su;Nho, Sang-Hyun;Lim, Yoon-Kyu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.9 s.174
    • /
    • pp.77-84
    • /
    • 2005
  • In most vision applications, we are frequently confronted with determining the position of an object continuously. Target tracking generally requires two intertwined processes, a tracking process and a control process, each of which can be studied independently; in an actual implementation, however, the interaction between them must be considered to achieve robust performance. In this paper, robust real-time visual tracking in a complex background is considered. A common approach to increasing the robustness of a tracking system is to use known geometric models (e.g., CAD models) or to attach a marker. For the case where an object has an arbitrary shape or it is difficult to attach a marker to it, we present a method that tracks the target easily by specifying the color and shape of a part of the object in advance. Robust detection is achieved by integrating voting-based visual cues. A Kalman filter is used to estimate the motion of the moving object in 3D space, and the algorithm is tested on a pan/tilt robot system. Experimental results show that fusing cues and estimating motion in a tracking system yields robust performance.
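
The abstract above describes estimating a moving object's 3D motion with a Kalman filter. Below is a minimal, hypothetical sketch of a constant-velocity Kalman filter over 3D position measurements; all matrices, time step, and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 3D target.
# State x = [px, py, pz, vx, vy, vz]; measurement z = [px, py, pz].
dt = 0.033                                     # assumed frame period
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)      # state transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # position-only measurement
Q = 1e-3 * np.eye(6)                           # assumed process noise
R = 1e-2 * np.eye(3)                           # assumed measurement noise

x = np.zeros(6)                                # initial state
P = np.eye(6)                                  # initial covariance

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a 3D position measured by the stereo pan/tilt cameras
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, np.array([0.1, 0.2, 1.5]))  # example measurement (meters)
```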

A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets (이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.23 no.5
    • /
    • pp.485-497
    • /
    • 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and Extended Kalman Filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters, which account for the uncertainty of the camera's orientation and focal length as well as the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate these six camera parameters. Based on the six parameters estimated using three cameras, the robot's joint angles are computed with respect to the moving targets, again using both the N-R and EKF methods. The two robot vision control schemes are tested experimentally by tracking the moving target, and the results are compared to evaluate their respective strengths and weaknesses.
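
The abstract does not give the vision-system model itself, so the sketch below only illustrates the generic N-R (Gauss-Newton) parameter update that such estimation schemes build on. The measurement model `h_demo` and all values are placeholders, not the paper's model.

```python
import numpy as np

def numerical_jacobian(h, c, eps=1e-6):
    """Finite-difference Jacobian of measurement model h at parameters c."""
    y0 = h(c)
    J = np.zeros((y0.size, c.size))
    for j in range(c.size):
        dc = np.zeros_like(c); dc[j] = eps
        J[:, j] = (h(c + dc) - y0) / eps
    return J

def newton_raphson_fit(h, z, c0, iters=10):
    """Iteratively refine camera parameters c so that h(c) matches measurements z."""
    c = c0.copy()
    for _ in range(iters):
        r = z - h(c)                                  # residual: measured - predicted
        J = numerical_jacobian(h, c)
        dc, *_ = np.linalg.lstsq(J, r, rcond=None)    # least-squares N-R step
        c = c + dc
    return c

# Placeholder measurement model: maps 6 parameters to predicted measurements at
# known target points (linear in c for brevity, so the iteration converges quickly).
pts = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4], [0.5, 0.5],
                [-0.3, -0.2], [0.0, 0.6], [0.7, 0.1], [-0.5, 0.3]])

def h_demo(c):
    x, y = pts[:, 0], pts[:, 1]
    return c[0]*x + c[1]*y + c[2]*x*y + c[3]*x**2 + c[4]*y**2 + c[5]

z = h_demo(np.array([1.0, 0.5, 0.1, 0.2, 0.05, 0.3]))  # synthetic "measurements"
c_hat = newton_raphson_fit(h_demo, z, c0=np.ones(6))
```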

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.6
    • /
    • pp.716-725
    • /
    • 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect their shapes robustly. When two hands or two feet cross at some position and then separate, an adaptive algorithm is presented to recognize which is the left one and which is the right one. Real motion takes place in 3D coordinates, whereas a monocular image provides only 2D coordinates and no distance from the camera. With stereo vision, as with human vision, 3D motion data can be acquired: horizontal and vertical motion as well as the distance of objects from the camera. Transforming to 3D coordinates therefore requires a depth value in addition to the x- and y-axis coordinates of the monocular image. This depth value (z axis) is calculated from the stereo disparity, using only the end effectors in the images. The positions of the inner joints are then calculated, and the 3D character can be visualized using inverse kinematics.
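
The depth-from-disparity step mentioned above follows the standard triangulation for a rectified stereo pair. A minimal sketch is shown below; the focal length, baseline, and principal point are assumed values, since the paper's camera parameters are not given in the abstract.

```python
import numpy as np

# Standard depth-from-disparity for a rectified stereo pair (illustrative values).
f  = 700.0              # focal length in pixels (assumed)
B  = 0.12               # stereo baseline in meters (assumed)
cx, cy = 320.0, 240.0   # principal point (assumed)

def triangulate(u_left, v_left, disparity):
    """Recover the 3D point of an end effector from its pixel position and disparity."""
    Z = f * B / disparity              # depth from disparity
    X = (u_left - cx) * Z / f          # back-project x
    Y = (v_left - cy) * Z / f          # back-project y
    return np.array([X, Y, Z])

p = triangulate(u_left=350.0, v_left=260.0, disparity=25.0)  # -> ~[0.14, 0.10, 3.36] m
```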

Moving Object Following by a Mobile Robot using a Single Curvature Trajectory and Kalman Filters (단일곡률궤적과 칼만필터를 이용한 이동로봇의 동적물체 추종)

  • Lim, Hyun-Seop;Lee, Dong-Hyuk;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.7
    • /
    • pp.599-604
    • /
    • 2013
  • Path planning for mobile robots aims to design an optimal path from an initial position to a target point. Minimum driving time, minimum driving distance, and minimum driving error may all be considered in choosing the optimal path, and they are correlated with each other. In this paper, an efficient driving trajectory is planned for the real situation in which a mobile robot follows a moving object. The position and distance of the moving object are obtained using a web camera, and its angular and linear velocities are estimated using Kalman filters to predict its trajectory. The mobile robot then follows the moving object along a single curvature trajectory based on the estimated trajectory. Using the Kalman filter estimates and the single curvature in the trajectory planning, the total tracking distance and time are reduced by about 7%. The effectiveness of the proposed algorithm has been verified through real tracking experiments.
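
A single curvature trajectory connects the robot's current pose to the predicted target point with one circular arc. The sketch below shows that geometric step in the robot's local frame; the linear speed and the target coordinates are illustrative, and this is only a generic version of the idea, not the paper's implementation.

```python
import numpy as np

def single_curvature_command(target_xy, v_lin=0.3):
    """
    Curvature of the single circular arc through the robot's current pose
    (origin, heading along +x) and the predicted target point, plus the
    matching angular velocity for an assumed linear speed v_lin.
    """
    x, y = target_xy
    d2 = x**2 + y**2
    if d2 < 1e-9:
        return 0.0, 0.0                  # already at the target
    kappa = 2.0 * y / d2                 # curvature of the connecting arc
    omega = kappa * v_lin                # angular velocity realizing that curvature
    return kappa, omega

# Example: predicted target position 1.0 m ahead and 0.3 m to the left.
kappa, omega = single_curvature_command(np.array([1.0, 0.3]))
```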

Trajectory Generation of a Moving Object for a Mobile Robot in Predictable Environment

  • Jin, Tae-Seok;Lee, Jang-Myung
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.5 no.1
    • /
    • pp.27-35
    • /
    • 2004
  • In the field of machine vision with a single camera mounted on a mobile robot, the detection and tracking of moving objects by a moving observer is a complex and computationally demanding task. In this paper, we propose a new scheme for a mobile robot to track and capture a moving object using camera images. The system consists of the following modules: data acquisition, feature extraction and visual tracking, and trajectory generation. A single camera is used as the visual sensor to capture image sequences of the moving object. The moving object is assumed to be a point object and is projected onto the image plane to form a geometric constraint equation that provides position data of the object based on the kinematics of the active camera. Uncertainties in the position estimate caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time trajectory to capture the moving object, its linear and angular velocities are estimated and utilized. Experimental results of tracking and capturing the target object with the mobile robot are presented.
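
Once the object's linear and angular velocities have been estimated, its near-future position can be extrapolated to plan the capture trajectory. The sketch below uses a simple constant-velocity, constant-turn-rate model purely as an illustration; the paper's actual trajectory generation is not detailed in the abstract.

```python
import numpy as np

def predict_object_pose(x, y, theta, v, w, t):
    """
    Extrapolate a moving object's pose t seconds ahead from its estimated
    linear velocity v and angular velocity w (constant-turn-rate model).
    """
    if abs(w) < 1e-6:                       # nearly straight-line motion
        return x + v * t * np.cos(theta), y + v * t * np.sin(theta), theta
    xf = x + (v / w) * (np.sin(theta + w * t) - np.sin(theta))
    yf = y - (v / w) * (np.cos(theta + w * t) - np.cos(theta))
    return xf, yf, theta + w * t

# Example: object at (2 m, 1 m), heading 0.5 rad, v = 0.4 m/s, w = 0.1 rad/s,
# predicted 2 s ahead -- a candidate interception point for the robot.
print(predict_object_pose(2.0, 1.0, 0.5, 0.4, 0.1, 2.0))
```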

Systematic Error Correction of Sea Surveillance Radar using AtoN Information (항로표지 정보를 이용한 해상감시레이더의 시스템 오차 보정)

  • Kim, Byung-Doo;Kim, Do-Hyeung;Lee, Byung-Gil
    • Journal of Navigation and Port Research
    • /
    • v.37 no.5
    • /
    • pp.447-452
    • /
    • 2013
  • Vessel traffic systems use multiple sea surveillance radars as the primary sensors to obtain maritime traffic information such as a ship's position, speed, and course. Systematic errors such as the range bias and the azimuth bias of a two-dimensional radar system can significantly degrade the accuracy of the radar image and of the target tracking information, so they should be corrected precisely in order to provide accurate target information in the vessel traffic system. In this paper, a method is proposed that compensates for the range bias and the azimuth bias using information from AtoN (aids to navigation) installed within the VTS coverage area. A radar measurement residual error model is derived from the standard error model of two-dimensional radar measurements and the position information of the AtoN, and a linear Kalman filter is then designed to estimate the systematic errors of the radar system. The proposed method is validated via Monte Carlo runs. The convergence characteristics of the designed filter and the accuracy of the systematic error estimates are also analyzed as a function of the number of AtoN used.
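
The sketch below shows a bias-only linear Kalman filter of the kind described, fed with range/azimuth residuals at known AtoN positions. The residual model, noise levels, and example residuals are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

# Bias-only linear Kalman filter: state b = [range_bias, azimuth_bias].
# Residual at a known AtoN is modeled as residual = H @ b + noise, with H = I
# for this illustrative sketch.
Q = np.diag([1e-4, 1e-8])      # assumed process noise (biases drift slowly)
R = np.diag([25.0, 1e-5])      # assumed measurement noise: ~5 m range, ~0.003 rad azimuth
H = np.eye(2)

b = np.zeros(2)                # initial bias estimate [m, rad]
P = np.diag([100.0, 1e-2])     # initial uncertainty

def bias_update(b, P, residual):
    """One filter update using the range/azimuth residual at one AtoN."""
    P = P + Q                              # prediction (biases nearly constant)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    b = b + K @ (residual - H @ b)
    P = (np.eye(2) - K @ H) @ P
    return b, P

# Example residuals (measured minus true range/azimuth of an AtoN):
for res in [np.array([32.0, 0.011]), np.array([29.5, 0.009]), np.array([31.2, 0.010])]:
    b, P = bias_update(b, P, res)
```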

Semi-automatic Camera Calibration Using Quaternions (쿼터니언을 이용한 반자동 카메라 캘리브레이션)

  • Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.2
    • /
    • pp.43-50
    • /
    • 2018
  • The camera is a key element in image-based three-dimensional positioning, and camera calibration, which properly determines the internal characteristics of the camera, is a necessary process that must precede the determination of the three-dimensional coordinates of an object. In this study, a new methodology was proposed to determine the interior orientation parameters of a camera semi-automatically, without being influenced by the size and shape of the checkerboard used for calibration. The proposed method consists of exterior orientation parameter estimation using quaternions, recognition of the calibration target, and interior orientation parameter determination through bundle block adjustment. After determining the interior orientation parameters using the chessboard calibration target, the three-dimensional position of a small 3D model was determined. In an accuracy evaluation using checkpoints, the horizontal and vertical position errors were about ±0.006 m and ±0.007 m, respectively.
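
The exterior orientation step mentioned above parameterizes the camera rotation with a quaternion. The sketch below shows the standard quaternion-to-rotation-matrix conversion used in such a setup; the example pose and object point are illustrative and not taken from the paper.

```python
import numpy as np

def quat_to_rotation(q):
    """
    Convert a unit quaternion q = (w, x, y, z) to a 3x3 rotation matrix.
    This is the standard conversion used when parameterizing a camera's
    exterior orientation with a quaternion instead of Euler angles.
    """
    w, x, y, z = q / np.linalg.norm(q)     # normalize for safety
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Example: express an object point in camera coordinates given a pose (R, C)
# built from an illustrative quaternion and camera center.
R = quat_to_rotation(np.array([0.99, 0.01, 0.10, 0.02]))
C = np.array([10.0, 5.0, 50.0])            # assumed camera center
X = np.array([12.0, 7.0, 0.5])             # assumed object point
x_cam = R @ (X - C)                        # point in camera coordinates
```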

Visibility detection approach to road scene foggy images

  • Guo, Fan;Peng, Hui;Tang, Jin;Zou, Beiji;Tang, Chenggong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.9
    • /
    • pp.4419-4441
    • /
    • 2016
  • One cause of vehicle accidents is reduced visibility due to bad weather conditions such as fog, so an onboard vision system should take visibility detection into account. In this paper, we propose a simple and effective approach for measuring the visibility distance using a single camera placed onboard a moving vehicle. The proposed algorithm is controlled by a few parameters and mainly comprises camera parameter estimation, region of interest (ROI) estimation, and visibility computation. Thanks to the ROI extraction, the position of the inflection point can be measured in practice; combined with the estimated camera parameters, the visibility distance of the input foggy image can then be computed with a single camera, requiring only the presence of road and sky in the scene. To assess the accuracy of the proposed approach, a reference-target-based visibility detection method is also introduced. A comparative study and quantitative evaluation show that the proposed method obtains good visibility detection results at relatively high speed.
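
The abstract does not give the exact formulas, so the sketch below only shows the commonly used flat-road pinhole mapping from an image row to a road distance, which is the kind of relation single-camera visibility estimators rely on. All parameter values and the inflection row are assumptions.

```python
# Flat-road pinhole-camera mapping from image row to road distance (illustrative only;
# the paper's exact formulation and parameters are not given in the abstract).
alpha = 700.0     # focal length in pixels (assumed)
H_cam = 1.3       # camera height above the road in meters (assumed)
v_h   = 240.0     # image row of the horizon line (assumed)

def row_to_distance(v):
    """Distance along the road of the point imaged at row v (rows below the horizon)."""
    if v <= v_h:
        return float('inf')               # at or above the horizon
    return alpha * H_cam / (v - v_h)

# If ROI analysis locates the intensity-curve inflection point at row v_i,
# the distance of that row gives a camera-based visibility estimate.
v_i = 265.0                               # hypothetical inflection row
visibility = row_to_distance(v_i)         # ~36 m for these assumed values
```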

Experiments on Robust Nonlinear Control for Brush Contact Force Estimation (연마 브러시 접촉력 산출을 위한 비선형 강건제어기 실험)

  • Lee, Byoung-Soo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.27 no.3
    • /
    • pp.41-49
    • /
    • 2010
  • Two promising control candidates were selected and tested for sinusoidal reference tracking performance on a brush-type polishing machine with strong nonlinearities and disturbances. The controlled target system is an oscillating mechanism consisting of a commercially available one-degree-of-freedom positioning stage with a screw and a ball nut driven by a servo motor. Besides strong nonlinearities such as stick-slip friction, the periodic contact between the polishing brush and the workpiece adds an external disturbance. The selected control candidates are a Sliding Mode Control (SMC) and a variant of feedback linearization control called Smooth Robust Nonlinear Control (SRNC). SMC and SRNC were selected because they have solid theoretical backgrounds, are suitable for implementation in a digital environment, and show good rejection of disturbances and modeling uncertainty. It should also be noted that SRNC takes a novel approach in that it uses position information to compensate for stick-slip friction. For both controllers, analytical and experimental studies were conducted to show the control design approaches and to compare performance against the strong nonlinearities and disturbances.
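
For context, the sketch below is a textbook sliding mode control law for a second-order positioning stage tracking a sinusoidal reference, with a boundary layer to limit chattering. All gains and model estimates are illustrative; this is not the authors' specific SMC or SRNC design.

```python
import numpy as np

def smc_control(e, e_dot, x_dot, x_ddot_ref, lam=20.0, K=5.0, phi=0.01,
                m_hat=1.0, c_hat=0.5):
    """
    Textbook sliding mode controller for a second-order stage
    m*x'' + c*x' = u + d tracking a sinusoidal reference.
      e, e_dot   : position/velocity tracking errors (reference - actual)
      x_dot      : measured stage velocity
      x_ddot_ref : reference acceleration
    """
    s = e_dot + lam * e                                        # sliding variable
    u_eq = m_hat * (x_ddot_ref + lam * e_dot) + c_hat * x_dot  # equivalent control
    u_sw = K * np.clip(s / phi, -1.0, 1.0)                     # boundary-layer switching term
    return u_eq + u_sw

# Example: track x_ref(t) = A*sin(w*t) at one time instant.
A, w, t = 0.01, 2 * np.pi * 1.0, 0.1
x, x_dot = 0.008, 0.05                                         # assumed current state
e = A * np.sin(w * t) - x
e_dot = A * w * np.cos(w * t) - x_dot
u = smc_control(e, e_dot, x_dot, -A * w**2 * np.sin(w * t))
```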

Indoor Mobile Localization System and Stabilization of Localization Performance using Pre-filtering

  • Ko, Sang-Il;Choi, Jong-Suk;Kim, Byoung-Hoon
    • International Journal of Control, Automation, and Systems
    • /
    • v.6 no.2
    • /
    • pp.204-213
    • /
    • 2008
  • In this paper, we present a practical application of an Unscented Kalman Filter (UKF) to an indoor mobile localization system using ultrasonic sensors. Many localization techniques have been researched over the years as contributions toward realizing ubiquitous systems, and such systems need a high degree of accuracy to be practical and efficient. Unfortunately, a number of indoor localization systems do not have sufficient accuracy for specialized tasks such as precise position control of a moving target, even though they require comparatively high development cost. We therefore developed an indoor mobile localization system with high localization performance, in which the Unscented Kalman Filter is applied to improve the localization accuracy. In addition, we present an additional filter, named 'pre-filtering', to support the estimation algorithm; it was developed to suppress the negative effects of unexpected external noise so that localization through the Unscented Kalman Filter remains stable. We also compare, in simulation, the performance of the Unscented Kalman Filter with another estimation algorithm, the Unscented Particle Filter (UPF), for our system.
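
The paper's specific pre-filtering rule is not described in the abstract; the sketch below only illustrates one simple gating idea of that kind, rejecting implausible ultrasonic range readings before they reach the UKF update. All thresholds and readings are hypothetical.

```python
import numpy as np

def prefilter_range(z, z_pred, history, max_jump=0.5, sigma_gate=3.0):
    """
    Simple measurement gate in the spirit of 'pre-filtering': reject an ultrasonic
    range reading that is physically implausible before it reaches the UKF.
      z       : new range measurement (m)
      z_pred  : range predicted from the current UKF state (m)
      history : recently accepted measurements
    """
    if history and abs(z - history[-1]) > max_jump:
        return False                       # sudden jump -> likely echo/outlier
    if history:
        sigma = max(np.std(history), 0.02)
        if abs(z - z_pred) > sigma_gate * sigma:
            return False                   # far outside the expected spread
    return True

history = [2.01, 2.03, 2.02]
for z in [2.04, 3.10, 2.05]:               # 3.10 m is an injected outlier
    if prefilter_range(z, z_pred=2.03, history=history):
        history.append(z)                  # only accepted readings feed the UKF
```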