• Title/Summary/Keyword: Odometry


Advanced Relative Localization Algorithm Robust to Systematic Odometry Errors (주행거리계의 기구적 오차에 강인한 개선된 상대 위치추정 알고리즘)

  • Ra, Won-Sang;Whang, Ick-Ho;Lee, Hye-Jin;Park, Jin-Bae;Yoon, Tae-Sung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.9
    • /
    • pp.931-938
    • /
    • 2008
  • In this paper, a novel localization algorithm robust to unmodeled systematic odometry errors is proposed for low-cost non-holonomic mobile robots. It is well known that most pose estimators using odometry measurements cannot avoid performance degradation due to the dead-reckoning of systematic odometry errors. As a remedy for this problem, we try to reflect the wheelbase error in the robot motion model as a parametric uncertainty. Applying the Krein space estimation theory to the discrete-time uncertain nonlinear motion model results in the extended robust Kalman filter. This idea comes from the fact that systematic odometry errors can be regarded as parametric uncertainties satisfying sum quadratic constraints (SQCs). The advantage of the proposed methodology is that it has the same recursive structure as the conventional extended Kalman filter, which makes our scheme suitable for real-time applications. Moreover, it guarantees satisfactory localization performance even in the presence of wheelbase uncertainty, which is hard to model or estimate but often arises in real driving environments. Computer simulations are given to demonstrate the robustness of the suggested localization algorithm.
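As a concrete picture of where the wheelbase error enters, a minimal differential-drive dead-reckoning step can be sketched as below. This is only the nominal motion model that the paper perturbs, not its Krein-space filter; the function name and arguments are illustrative.

```python
import math

def dead_reckon(pose, d_left, d_right, wheelbase):
    """One dead-reckoning step of the nominal differential-drive model.

    pose is (x, y, theta); d_left/d_right are wheel travel increments.
    A mis-measured wheelbase directly biases the heading update
    d_theta = (d_right - d_left) / wheelbase, which is the systematic
    error the paper treats as a parametric uncertainty."""
    x, y, theta = pose
    d_center = (d_right + d_left) / 2.0
    d_theta = (d_right - d_left) / wheelbase
    # midpoint integration of the pose
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```

Because the heading error accumulates through every subsequent step, even a small wheelbase bias grows into a large position error, which is why the paper treats it explicitly rather than leaving it to process noise.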

Accurate Calibration of Kinematic Parameters for Two Wheel Differential Drive Robots by Considering the Coupled Effect of Error Sources (이륜차동구동형로봇의 복합오차를 고려한 기구학적 파라미터 정밀보정기법)

  • Lee, Kooktae;Jung, Changbae;Jung, Daun;Chung, Woojin
    • The Journal of Korea Robotics Society
    • /
    • v.9 no.1
    • /
    • pp.39-47
    • /
    • 2014
  • Odometry using wheel encoders is one of the fundamental techniques for the pose estimation of wheeled mobile robots. However, odometry has the drawback that position errors accumulate as the travel distance increases. Therefore, position errors need to be reduced using appropriate calibration schemes. The UMBmark method is one of the most widely used calibration schemes for two wheel differential drive robots. The UMBmark method assumes that the odometry error sources are independent; in reality, however, the error sources have a coupled effect. In this paper, a new calibration scheme that considers the coupled effect of the error sources is proposed. We also propose a test track design for the proposed calibration scheme. Numerical simulation and experimental results show that odometry accuracy can be improved by the proposed calibration scheme.
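For reference, the classic uncoupled UMBmark correction that the paper improves on can be sketched from the centroids of the square-path return errors. The formulas follow Borenstein and Feng's original method; the variable names are illustrative.

```python
import math

def umbmark_correction(x_cg_cw, x_cg_ccw, L, b):
    """Classic UMBmark correction factors from the centroids of the
    x return errors after clockwise (CW) and counter-clockwise (CCW)
    runs of an L x L square path; b is the nominal wheelbase. This
    baseline treats the two error sources as independent, which is
    exactly the assumption the paper above relaxes."""
    alpha = (x_cg_cw + x_cg_ccw) / (-4.0 * L)        # wheelbase error angle [rad]
    beta = (x_cg_cw - x_cg_ccw) / (-4.0 * L)         # wheel-diameter error angle [rad]
    E_b = (math.pi / 2.0) / (math.pi / 2.0 - alpha)  # wheelbase correction ratio
    if beta == 0.0:
        return E_b, 1.0                              # no curvature: equal wheel diameters
    R = (L / 2.0) / math.sin(beta / 2.0)             # radius of the curved "straight" legs
    E_d = (R + b / 2.0) / (R - b / 2.0)              # left/right wheel-diameter ratio
    return E_b, E_d
```

A perfectly calibrated robot returns zero error centroids in both directions and so yields correction ratios of exactly one.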

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal
    • /
    • v.43 no.4
    • /
    • pp.603-616
    • /
    • 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed as a vision-based method, so fast execution can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with a mathematical analysis, two keypoint sampling methods using the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested with the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69%, a rotation error of 0.0031°/m on the KITTI training dataset, and an execution time of 17 ms. The results demonstrate precision comparable with the state of the art and a remarkably higher speed than conventional techniques.
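The spherical projection at the heart of an SRI can be sketched per point as below. The image size and vertical field of view are assumed values for a typical 64-beam sensor, not parameters taken from the paper.

```python
import math

def to_sri_pixel(x, y, z, width=1024, height=64,
                 fov_up=math.radians(3.0), fov_down=math.radians(-25.0)):
    """Project one LiDAR point onto a spherical range image:
    azimuth -> column, elevation -> row, range -> pixel value."""
    r = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(y, x)                 # azimuth in (-pi, pi]
    pitch = math.asin(z / r)               # elevation
    u = int(0.5 * (1.0 - yaw / math.pi) * width) % width
    v = int((1.0 - (pitch - fov_down) / (fov_up - fov_down)) * height)
    v = min(max(v, 0), height - 1)         # clamp points outside the vertical FOV
    return u, v, r
```

Filling a dense image from such sparse projections (and handling the empty pixels) is where the paper's SRI generation method and analysis come in; this sketch covers only the coordinate mapping.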

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.4
    • /
    • pp.187-194
    • /
    • 2015
  • In this paper, we present an effective visual odometry estimation system to track the real-time pose of a camera moving in 3D space. In order to meet the real-time requirement and to make full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it repeats both an additional inlier-set refinement and a motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is not sufficient, our system computes the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
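The inlier-set refinement loop can be illustrated on a deliberately simplified 2D translation model. The paper estimates full camera motion from RGB-D features; only the fit-reject-refit structure is kept here, and all names are illustrative.

```python
def refine_translation(src, dst, threshold=1.0, iters=5):
    """Fit-reject-refit inlier refinement on a 2D translation model:
    start from a robust (median) estimate, keep correspondences whose
    residual is under the threshold, and re-fit on the survivors."""
    diffs = [(d[0] - s[0], d[1] - s[1]) for s, d in zip(src, dst)]

    def median(vals):
        v = sorted(vals)
        n = len(v)
        return v[n // 2] if n % 2 else 0.5 * (v[n // 2 - 1] + v[n // 2])

    tx = median([d[0] for d in diffs])
    ty = median([d[1] for d in diffs])
    inliers = []
    for _ in range(iters):
        new = [i for i, (dx, dy) in enumerate(diffs)
               if (dx - tx) ** 2 + (dy - ty) ** 2 <= threshold ** 2]
        if not new or new == inliers:
            break
        inliers = new
        tx = sum(diffs[i][0] for i in inliers) / len(inliers)
        ty = sum(diffs[i][1] for i in inliers) / len(inliers)
    return (tx, ty), inliers
```

The size of the returned inlier set is the quantity the paper uses to weight the final odometry estimate when few inliers survive.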

Stereo Vision-based Visual Odometry Using Robust Visual Feature in Dynamic Environment (동적 환경에서 강인한 영상특징을 이용한 스테레오 비전 기반의 비주얼 오도메트리)

  • Jung, Sang-Jun;Song, Jae-Bok;Kang, Sin-Cheon
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.4
    • /
    • pp.263-269
    • /
    • 2008
  • Visual odometry is a popular approach to estimating robot motion using a monocular or stereo camera. This paper proposes a novel visual odometry scheme using a stereo camera for robust estimation of 6-DOF motion in dynamic environments. False feature matches and the uncertainty of the depth information provided by the camera can generate outliers that deteriorate the estimation. The outliers are removed by analyzing the magnitude histogram of the motion vectors of the corresponding features and by the RANSAC algorithm. Features extracted from a dynamic object such as a human also make the motion estimation inaccurate. To eliminate the effect of dynamic objects, several candidate dynamic objects are generated by clustering the 3D positions of the features, and each candidate is checked, based on the standard deviation of its features, to determine whether it is a real dynamic object. The accuracy and practicality of the proposed scheme are verified by several experiments and by comparisons with both IMU- and wheel-based odometry. It is shown that the proposed scheme works well when wheel slip occurs or dynamic objects are present.
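The magnitude-histogram filter, one of the two outlier-rejection steps above, might be sketched as follows. The bin width is an assumed parameter, and the single-peak selection rule is a simplification of whatever analysis the paper performs.

```python
def dominant_motion_inliers(magnitudes, bin_width=1.0):
    """Histogram-based outlier rejection on motion-vector magnitudes:
    keep only vectors falling in the most populated bin and its
    immediate neighbours, assuming the static background dominates
    the scene. Returns the indices of the kept vectors."""
    bins = {}
    for i, m in enumerate(magnitudes):
        bins.setdefault(int(m // bin_width), []).append(i)
    peak = max(bins, key=lambda b: len(bins[b]))   # dominant magnitude bin
    keep = []
    for b in (peak - 1, peak, peak + 1):
        keep.extend(bins.get(b, []))
    return sorted(keep)
```

In practice such a cheap prefilter reduces the outlier ratio before RANSAC, which is the combination the abstract describes.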


Robust Visual Odometry System for Illumination Variations Using Adaptive Thresholding (적응적 이진화를 이용하여 빛의 변화에 강인한 영상거리계를 통한 위치 추정)

  • Hwang, Yo-Seop;Yu, Ho-Yun;Lee, Jangmyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.9
    • /
    • pp.738-744
    • /
    • 2016
  • In this paper, a robust visual odometry system has been proposed and implemented for environments with dynamic illumination. Visual odometry estimates the distance to an object from stereo images. It is very difficult to realize highly accurate and stable estimation because image quality is highly dependent on the illumination, which is a major disadvantage of visual odometry. Therefore, to solve the low performance of the feature detection phase caused by illumination variations, we propose determining an optimal threshold value for image binarization and using an adaptive threshold for feature detection. The direction of each feature point and the non-uniform magnitude of its motion vector are utilized as features, and the feature detection performance is further improved by the RANSAC algorithm. As a result, the position of the mobile robot is estimated from the feature points. The experimental results demonstrate that the proposed approach has superior performance under illumination variations.
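Generic mean-based adaptive binarization, the family of techniques this work builds on, can be sketched as below; the paper's specific optimal-threshold selection rule is not reproduced, and the block size is an assumed parameter.

```python
def adaptive_binarize(img, block=3, c=0):
    """Mean-based adaptive binarization: each pixel is compared with
    the mean of its (block x block) neighbourhood instead of a single
    global threshold, so uneven illumination shifts the local
    threshold along with it. img is a list of rows of intensities."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - r), min(h, y + r + 1))
                     for i in range(max(0, x - r), min(w, x + r + 1))]
            row.append(1 if img[y][x] > sum(neigh) / len(neigh) - c else 0)
        out.append(row)
    return out
```

A fixed global threshold fails when one side of the image is darker than the other; the local mean tracks that gradient automatically, which is the property the feature detector relies on here.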

Stereo Visual Odometry without Relying on RANSAC for the Measurement of Vehicle Motion (차량의 모션계측을 위한 RANSAC 의존 없는 스테레오 영상 거리계)

  • Song, Gwang-Yul;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.321-329
    • /
    • 2015
  • This paper addresses a new algorithm for stereo visual odometry to measure the ego-motion of a vehicle. The new algorithm introduces an inlier grouping method based on Delaunay triangulation and vanishing point computation. Most visual odometry algorithms rely on RANSAC for choosing inliers; such algorithms fluctuate widely in processing time between images, and their accuracy varies with the iteration count and the level of outliers. In contrast, the new approach reduces the fluctuation in processing time while providing accuracy corresponding to that of RANSAC-based approaches.
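The vanishing-point ingredient can be illustrated with a homogeneous-coordinate line intersection. This is textbook projective geometry, not the paper's full inlier-grouping pipeline.

```python
def vanishing_point(line_a, line_b):
    """Intersect two image lines given by two (x, y) points each,
    using homogeneous coordinates. For the projections of parallel
    3D lines (e.g. lane edges), the intersection is their vanishing
    point. Returns None if the image lines are parallel."""
    def cross(p, q):
        return (p[1] * q[2] - p[2] * q[1],
                p[2] * q[0] - p[0] * q[2],
                p[0] * q[1] - p[1] * q[0])
    def homog(p):
        return (p[0], p[1], 1.0)
    la = cross(homog(line_a[0]), homog(line_a[1]))  # line through the two points
    lb = cross(homog(line_b[0]), homog(line_b[1]))
    x, y, w = cross(la, lb)                          # intersection of the two lines
    if abs(w) < 1e-12:
        return None
    return (x / w, y / w)
```

Because this computation is deterministic, a pipeline built on it has no iteration-count-dependent runtime, which is the motivation the abstract gives for avoiding RANSAC.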

A Correction System of Odometry Error for Map Building of Mobile Robot Based on Sensor fusion

  • Hyun, Woong-Keun
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.6
    • /
    • pp.709-715
    • /
    • 2010
  • This paper presents a map building and localization system for a mobile robot. Map building and navigation are complex problems because map integrity cannot be sustained by odometry alone, owing to errors introduced by wheel slippage, distortion, and the simple linearized odometry equations. For accurate localization, we propose a sensor fusion system using an encoder as a relative sensor and an indoor GPS module as an absolute sensor. To build a map, we developed a sensor-based navigation algorithm and a grid-based map building algorithm on an embedded Linux OS. A wall-following decision engine, similar to an expert system, is proposed for map building navigation. We proved the system's validity through field tests.
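A minimal stand-in for the relative/absolute fusion idea is a scalar inverse-variance measurement update; the real system operates per axis on the robot pose, and the function below is illustrative rather than the paper's implementation.

```python
def fuse(pred, pred_var, gps, gps_var):
    """One scalar measurement-update step fusing a dead-reckoned
    position (relative, from encoders) with an indoor-GPS fix
    (absolute). Returns the fused position and its variance."""
    k = pred_var / (pred_var + gps_var)   # gain: trust GPS more as encoder variance grows
    return pred + k * (gps - pred), (1.0 - k) * pred_var
```

The key property is that the absolute fix bounds the variance: encoder drift grows `pred_var` without limit, but each GPS update shrinks it again, which is what keeps the map consistent.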

Development of Visual Odometry Estimation for an Underwater Robot Navigation System

  • Wongsuwan, Kandith;Sukvichai, Kanjanapan
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.216-223
    • /
    • 2015
  • The autonomous underwater vehicle (AUV) is being widely researched in order to achieve superior performance when working in hazardous environments. This research focuses on using image processing techniques to estimate the AUV's ego-motion and changes in orientation, based on image frames captured at different times from a single high-definition web camera attached to the bottom of the AUV. The visual odometry application is integrated with other sensors: an inertial measurement unit (IMU) sensor is used to select the correct solution of the homography motion equation, and a pressure sensor is used to resolve the image scale ambiguity. Uncertainty is estimated to correct drift that occurs in the system, using a Jacobian method, singular value decomposition, and backward and forward error propagation.
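The role of the pressure sensor in resolving scale can be illustrated with a pinhole-model simplification for a downward-looking camera over a flat bottom. The paper's homography-based formulation is more general; the names and numbers below are assumptions.

```python
def pixel_shift_to_metres(du, dv, altitude, focal_px):
    """Resolve monocular scale ambiguity for a downward-looking
    camera over a flat bottom: a pixel displacement (du, dv) maps to
    a metric translation via the camera altitude (from the pressure
    sensor) and the focal length in pixels (pinhole model)."""
    scale = altitude / focal_px            # metres per pixel on the bottom plane
    return du * scale, dv * scale
```

Without the altitude, the same pixel shift is consistent with any combination of speed and distance to the bottom; the pressure reading pins down that one missing factor.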