• Title/Summary/Keyword: Visual Odometry (비주얼 오도메트리)

Search results: 6

Localization of a Tracked Robot Based on Fuzzy Fusion of Wheel Odometry and Visual Odometry in Indoor and Outdoor Environments (실내외 환경에서 휠 오도메트리와 비주얼 오도메트리 정보의 퍼지 융합에 기반한 궤도로봇의 위치추정)

  • Ham, Hyeong-Ha; Hong, Sung-Ho; Song, Jae-Bok; Baek, Joo-Hyun; Ryu, Jae-Kwan
    • Transactions of the Korean Society of Mechanical Engineers A, v.36 no.6, pp.629-635, 2012
  • Tracked robots usually have poor localization performance because of slippage of their tracks. This study proposes a new localization method for tracked robots that uses fuzzy fusion of stereo-camera-based visual odometry and encoder-based wheel odometry. Visual odometry can be inaccurate when an insufficient number of visual features are available, while the encoder is prone to accumulating errors when large slips occur. To combine these two methods, the weight of each method was controlled by a fuzzy decision depending on the surrounding environment. The experimental results show that the proposed scheme improved the localization performance of a tracked robot.
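The fusion idea above, blending wheel and visual odometry with a fuzzy-decision weight, can be illustrated with a minimal Python sketch. The membership functions, rule base, and thresholds below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_visual_weight(n_features, slip_ratio):
    """Illustrative fuzzy rule base: trust vision more when many features
    are tracked, trust wheel odometry less when slip is high."""
    few_feat  = tri(n_features, -1, 0, 80)
    many_feat = tri(n_features, 40, 200, 1e9)
    low_slip  = tri(slip_ratio, -1, 0.0, 0.3)
    high_slip = tri(slip_ratio, 0.1, 1.0, 2.0)

    # Rule firing strength (min as AND), defuzzified as a weighted average
    # of each rule's preferred weight for the visual estimate.
    rules = [
        (min(many_feat, high_slip), 0.9),  # many features, wheels slipping -> trust vision
        (min(many_feat, low_slip),  0.6),
        (min(few_feat,  low_slip),  0.2),  # few features, no slip -> trust wheels
        (min(few_feat,  high_slip), 0.5),
    ]
    num = sum(s * w for s, w in rules)
    den = sum(s for s, _ in rules) + 1e-9
    return num / den

def fuse_pose(visual_pose, wheel_pose, n_features, slip_ratio):
    """Blend two planar poses (x, y, theta); naive linear blend for illustration."""
    w = fuzzy_visual_weight(n_features, slip_ratio)
    return w * np.asarray(visual_pose) + (1.0 - w) * np.asarray(wheel_pose)

print(fuse_pose([1.0, 0.1, 0.02], [1.1, 0.0, 0.01], n_features=150, slip_ratio=0.6))
```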

RGB-VO: Visual Odometry using mono RGB (단일 RGB 영상을 이용한 비주얼 오도메트리)

  • Lee, Joosung; Hwang, Sangwon; Kim, Woo Jin; Lee, Sangyoun
    • Proceedings of the Korea Information Processing Society Conference, 2018.05a, pp.454-456, 2018
  • As autonomous driving and robotic systems advance, research on the associated vision algorithms is being actively pursued. The proposed network is a system that predicts visual odometry from monocular images. A deep learning network is trained and evaluated on the KITTI dataset; it takes two consecutive frames as input and outputs the camera rotation and translation between the two frames. This makes it possible, for example, to recover a vehicle's driving trajectory, and the method can be used in various robotic systems.
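A rough sketch of the kind of network the abstract describes is given below in PyTorch: two consecutive RGB frames are stacked along the channel axis and regressed to a 6-DoF relative pose (3 rotation + 3 translation parameters). The architecture is a generic placeholder, not the authors' model:

```python
import torch
import torch.nn as nn

class TwoFramePoseNet(nn.Module):
    """Toy relative-pose regressor: input is two stacked RGB frames (6 channels),
    output is a 6-vector (3 rotation, 3 translation). Illustrative only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)  # [rx, ry, rz, tx, ty, tz]

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)  # (B, 6, H, W)
        feat = self.encoder(x).flatten(1)          # (B, 64)
        return self.head(feat)

# Example: one pair of frames resized to a KITTI-like resolution.
net = TwoFramePoseNet()
f0 = torch.rand(1, 3, 128, 416)
f1 = torch.rand(1, 3, 128, 416)
print(net(f0, f1).shape)  # torch.Size([1, 6])
```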

Stereo Semi-direct Visual Odometry with Adaptive Motion Prior Weights of Lunar Exploration Rover (달 탐사 로버의 적응형 움직임 가중치에 따른 스테레오 준직접방식 비주얼 오도메트리)

  • Jung, Jae Hyung; Heo, Se Jong; Park, Chan Gook
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.46 no.6, pp.479-486, 2018
  • To ensure reliable navigation performance of a lunar exploration rover, navigation algorithms using additional sensors such as inertial measurement units and cameras are essential on the lunar surface, where no global navigation satellite system is available. Visual Odometry (VO) using a stereo camera has already been implemented successfully on the US Mars rovers. In this paper, we estimate the 6-DOF pose of a lunar exploration rover from grayscale images of lunar-like terrain. The proposed algorithm estimates the relative pose between consecutive images with semi-direct VO based on sparse image alignment. To overcome the vulnerability of direct VO to non-linearity, we add adaptive motion prior weights, calculated from a linear function of the previous pose, to the optimization cost function. The proposed algorithm is verified on a lunar-analogue terrain dataset recorded by the University of Toronto that reflects the characteristics of the actual lunar environment.
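The central idea, adding an adaptively weighted motion prior to the sparse-image-alignment cost, can be sketched schematically as follows. The photometric residual and the linear weight schedule below are placeholders standing in for the paper's actual formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def photometric_residuals(xi, ref_patches, cur_intensities):
    """Placeholder: intensity differences of sparse patches under the 6-DOF
    increment xi. A real semi-direct VO would warp and sample patches here."""
    return (cur_intensities - ref_patches).ravel() * (1.0 + 0.1 * np.linalg.norm(xi))

def adaptive_weight(prev_xi, k0=1.0, k1=5.0):
    """Illustrative schedule: the larger the previous motion, the weaker the prior."""
    return k0 / (1.0 + k1 * np.linalg.norm(prev_xi))

def cost(xi, ref_patches, cur_intensities, prior_xi, prev_xi):
    """Photometric term plus a motion-prior term weighted by the adaptive factor."""
    w = adaptive_weight(prev_xi)
    prior_term = np.sqrt(w) * (xi - prior_xi)
    return np.concatenate([photometric_residuals(xi, ref_patches, cur_intensities),
                           prior_term])

ref   = np.random.rand(50)                      # stand-in reference patch intensities
cur   = ref + 0.01 * np.random.randn(50)        # stand-in current intensities
prior = np.zeros(6)                             # e.g., a constant-velocity prediction
prev  = np.array([0.01, 0, 0, 0.05, 0, 0])      # previous relative pose
sol = least_squares(cost, x0=np.zeros(6), args=(ref, cur, prior, prev))
print(sol.x)
```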

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin; Seo, Hoseong; Kim, Pyojin; Lee, Chung-Keun
    • Journal of Advanced Navigation Technology, v.19 no.2, pp.133-139, 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides the velocity input that guides a mobile system to a desired pose; this velocity is computed from the difference in features between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames and can improve accuracy compared with existing dead-reckoning methods. Visual SLAM constructs a map of an unknown environment while simultaneously determining the mobile system's location, which is essential for operating unmanned systems in unknown environments. Trends in visual navigation are identified by examining international research on visual navigation technology.
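Of the three branches, visual servoing has the most compact core: the commanded camera velocity is obtained from the image-feature error through the (pseudo-inverse of the) interaction matrix, v = -λ L⁺ e. A minimal NumPy sketch of this classic image-based visual servoing law, assuming normalized point features with known (or estimated) depths:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction (image Jacobian) matrix for one normalized image
    point (x, y) at depth Z, mapping the camera twist to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """v = -lambda * pinv(L) * e for a set of point features."""
    e = (features - desired).ravel()                  # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e               # 6-DOF camera twist

feats   = np.array([[0.10, 0.05], [-0.12, 0.08], [0.05, -0.10], [-0.08, -0.06]])
desired = np.array([[0.08, 0.04], [-0.10, 0.07], [0.04, -0.09], [-0.07, -0.05]])
print(ibvs_velocity(feats, desired, depths=[1.0, 1.0, 1.2, 0.9]))
```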

Analysis of Applicability of Visual SLAM for Indoor Positioning in the Building Construction Site (Visual SLAM의 건설현장 실내 측위 활용성 분석)

  • Kim, Taejin; Park, Jiwon; Lee, Byoungmin; Bae, Kangmin; Yoon, Sebeen; Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference, 2022.11a, pp.47-48, 2022
  • Positioning technology that measures the position of a person or object is a key technology for handling locations in a real-world coordinate system and for merging the real and virtual worlds, as in digital twins, augmented reality, virtual reality, and autonomous driving. When estimating the location of a person or object at an indoor construction site, there are constraints: location information cannot be received from outside, the communication infrastructure is insufficient, and it is difficult to install additional devices. Therefore, this study tested the direct sparse odometry (DSO) algorithm, one of the visual Simultaneous Localization and Mapping (vSLAM) methods that estimate the current location and a map of the surroundings using only image information, at an indoor construction site and analyzed its applicability as an indoor positioning technology. The results show that the surrounding map and the current location can be estimated properly even at an indoor construction site, which has relatively few feature points. The results of this study can serve as reference data for researchers working on indoor positioning technology for construction sites.
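DSO itself is a direct, sparse method and too involved to reproduce here; as a rough stand-in for "positioning from image information only", the sketch below chains OpenCV two-view geometry (feature-based, unlike DSO) to accumulate a monocular camera trajectory. The camera intrinsics K and the image list are assumed inputs, and the monocular scale is arbitrary:

```python
import cv2
import numpy as np

def monocular_trajectory(image_paths, K):
    """Accumulate camera positions from consecutive frames using ORB matching
    and essential-matrix decomposition (translation scale is arbitrary)."""
    orb = cv2.ORB_create(2000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    R_w, t_w = np.eye(3), np.zeros((3, 1))         # accumulated world pose
    trajectory = [t_w.ravel().copy()]

    prev = cv2.imread(image_paths[0], cv2.IMREAD_GRAYSCALE)
    kp_prev, des_prev = orb.detectAndCompute(prev, None)

    for path in image_paths[1:]:
        cur = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        kp_cur, des_cur = orb.detectAndCompute(cur, None)
        matches = bf.match(des_prev, des_cur)
        pts_prev = np.float32([kp_prev[m.queryIdx].pt for m in matches])
        pts_cur  = np.float32([kp_cur[m.trainIdx].pt for m in matches])

        # Relative motion between the two views (conventions/scale glossed over).
        E, _ = cv2.findEssentialMat(pts_cur, pts_prev, K, cv2.RANSAC, 0.999, 1.0)
        _, R, t, _ = cv2.recoverPose(E, pts_cur, pts_prev, K)

        t_w = t_w + R_w @ t                         # compose with accumulated pose
        R_w = R_w @ R
        trajectory.append(t_w.ravel().copy())
        kp_prev, des_prev = kp_cur, des_cur

    return np.array(trajectory)                     # (N, 3) positions, up to scale
```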

Stereo Vision-based Visual Odometry Using Robust Visual Feature in Dynamic Environment (동적 환경에서 강인한 영상특징을 이용한 스테레오 비전 기반의 비주얼 오도메트리)

  • Jung, Sang-Jun; Song, Jae-Bok; Kang, Sin-Cheon
    • The Journal of Korea Robotics Society, v.3 no.4, pp.263-269, 2008
  • Visual odometry is a popular approach to estimating robot motion using a monocular or stereo camera. This paper proposes a novel visual odometry scheme that uses a stereo camera for robust estimation of 6-DOF motion in dynamic environments. False feature matches and the uncertainty of the depth information provided by the camera can generate outliers that degrade the estimation. These outliers are removed by analyzing the magnitude histogram of the motion vectors of corresponding features and by applying the RANSAC algorithm. Features extracted from a dynamic object, such as a human, also make the motion estimation inaccurate. To eliminate the effect of dynamic objects, candidate dynamic objects are generated by clustering the 3D positions of features, and each candidate is checked, based on the standard deviation of its features, to determine whether it is a real dynamic object. The accuracy and practicality of the proposed scheme are verified by several experiments and by comparisons with both IMU- and wheel-based odometry. The proposed scheme is shown to work well when wheel slip occurs or dynamic objects are present.
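The two outlier filters mentioned in the abstract, a motion-vector magnitude histogram and RANSAC over matched 3D points, can be sketched as below. The bin width, inlier threshold, and iteration count are illustrative values, and the rigid-motion fit uses the standard SVD (Kabsch) solution rather than the paper's exact pipeline:

```python
import numpy as np

def histogram_filter(p_prev, p_cur, bin_width=0.05):
    """Keep correspondences whose motion-vector magnitude lies in (or next to)
    the dominant histogram bin."""
    mags = np.linalg.norm(p_cur - p_prev, axis=1)
    edges = np.arange(0.0, mags.max() + bin_width, bin_width)
    hist, edges = np.histogram(mags, bins=edges)
    k = int(np.argmax(hist))
    lo, hi = edges[max(k - 1, 0)], edges[min(k + 2, len(edges) - 1)]
    keep = (mags >= lo) & (mags < hi)
    return p_prev[keep], p_cur[keep]

def rigid_transform(A, B):
    """Least-squares R, t such that B ~= R @ A + t (Kabsch / SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # fix a reflection if one appears
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def ransac_motion(p_prev, p_cur, iters=200, thresh=0.05, seed=0):
    """RANSAC over 3-point samples; refit the motion on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p_prev), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p_prev), 3, replace=False)
        R, t = rigid_transform(p_prev[idx], p_cur[idx])
        err = np.linalg.norm((p_prev @ R.T + t) - p_cur, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_transform(p_prev[best_inliers], p_cur[best_inliers])
```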
