• Title/Summary/Keyword: visual odometry (VO)

Integrated Navigation Algorithm using Velocity Incremental Vector Approach with ORB-SLAM and Inertial Measurement (속도증분벡터를 활용한 ORB-SLAM 및 관성항법 결합 알고리즘 연구)

  • Kim, Yeonjo; Son, Hyunjin; Lee, Young Jae; Sung, Sangkyung
    • The Transactions of The Korean Institute of Electrical Engineers / v.68 no.1 / pp.189-198 / 2019
  • In recent years, visual-inertial odometry (VIO) algorithms have been studied extensively for indoor and urban environments because they are more robust to dynamic scenes and environmental changes. In this paper, we propose a loosely coupled (LC) VIO algorithm that uses the velocity vectors from both visual odometry (VO) and an inertial measurement unit (IMU) as the measurement of an extended Kalman filter. Our approach improves the estimation performance of the filter without adding extra sensors, while maintaining a simple integration framework that treats the VO as a black box. For the VO algorithm, we employed the fundamental part of ORB-SLAM, which uses ORB features. We performed an outdoor experiment using an RGB-D camera to evaluate the accuracy of the presented algorithm, and we also evaluated the algorithm on a public dataset to compare it with other visual navigation systems.
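For readers unfamiliar with loosely coupled integration, the sketch below shows a generic EKF velocity-measurement update of the kind the abstract refers to, with a simple position/velocity state. The state layout, noise values, and matrix names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a loosely coupled EKF velocity-measurement update, assuming a
# state x = [position(3), velocity(3)] and a VO velocity already resolved in the
# navigation frame. Dimensions and noise values are illustrative only.
import numpy as np

def ekf_velocity_update(x, P, vo_velocity, R_vo):
    """Fuse a VO-derived velocity vector as an EKF measurement."""
    H = np.hstack([np.zeros((3, 3)), np.eye(3)])   # measurement observes the velocity states
    y = vo_velocity - H @ x                        # innovation
    S = H @ P @ H.T + R_vo                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: state propagated by IMU mechanization, then corrected by a VO velocity.
x = np.zeros(6)
P = np.eye(6) * 0.1
vo_vel = np.array([0.95, 0.02, -0.01])             # hypothetical VO velocity estimate (m/s)
R_vo = np.eye(3) * 0.05
x, P = ekf_velocity_update(x, P, vo_vel, R_vo)
```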

Benchmark for Deep Learning based Visual Odometry and Monocular Depth Estimation (딥러닝 기반 영상 주행기록계와 단안 깊이 추정 및 기술을 위한 벤치마크)

  • Choi, Hyukdoo
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.114-121 / 2019
  • This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply deep learning to VO and MDE. Only a couple of years ago the two problems were studied independently in a supervised way, but they are now coupled and trained together in an unsupervised way. However, before designing fancy models and losses, researchers have to customize datasets for training and testing, and after training, the model has to be compared with existing models, which is also a significant burden. The benchmark provides an input dataset, ready to use for VO and MDE research, in the 'tfrecords' format, and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In the experiments, the existing models were evaluated to verify the performances reported in the corresponding papers, and we found that the evaluation results fall short of the reported performances.
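The benchmark distributes its input data as 'tfrecords'; the sketch below shows one generic way such a file could be read with TensorFlow's tf.data API. The feature names ('image', 'pose'), shapes, and resize resolution are hypothetical; the actual schema is defined by the benchmark's own formatting tools.

```python
# Sketch of reading a VO/MDE tfrecords file with tf.data. Feature names and shapes
# below are assumptions for illustration, not the benchmark's documented schema.
import tensorflow as tf

feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),    # encoded image bytes
    "pose":  tf.io.FixedLenFeature([6], tf.float32),  # 6-DOF relative pose label
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_image(example["image"], channels=3, expand_animations=False)
    image = tf.image.resize(image, [128, 416])        # KITTI-like aspect ratio (illustrative)
    return image, example["pose"]

dataset = (tf.data.TFRecordDataset("train.tfrecords")
           .map(parse_example)
           .batch(4))
```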

Stereo Semi-direct Visual Odometry with Adaptive Motion Prior Weights of Lunar Exploration Rover (달 탐사 로버의 적응형 움직임 가중치에 따른 스테레오 준직접방식 비주얼 오도메트리)

  • Jung, Jae Hyung; Heo, Se Jong; Park, Chan Gook
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.46 no.6 / pp.479-486 / 2018
  • In order to ensure reliable navigation performance of a lunar exploration rover, navigation algorithms using additional sensors such as inertial measurement units and cameras are essential on the lunar surface, where no global navigation satellite system is available. Visual odometry (VO) using a stereo camera has already been implemented successfully on the US Mars rovers. In this paper, we estimate the 6-DOF pose of a lunar exploration rover from grayscale images of lunar-like terrain. The proposed algorithm estimates the relative pose between consecutive images by semi-direct VO based on sparse image alignment. To overcome the vulnerability of direct VO to non-linearity, we add adaptive motion prior weights, computed from a linear function of the previous pose, to the optimization cost function. The proposed algorithm is verified on a lunar-like terrain dataset recorded by the University of Toronto that reflects the characteristics of the actual lunar environment.
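As a rough illustration of adding a motion prior to a sparse-image-alignment cost, the sketch below combines a photometric term over sparse patches with a weighted penalty on deviation from a predicted motion. The weighting rule, function names, and data layout are placeholders rather than the paper's exact formulation.

```python
# Illustrative cost for semi-direct VO with a weighted motion prior: photometric
# residuals over sparse patches plus a prior term pulling the pose increment toward
# a motion prediction. All names and the adaptive rule are assumptions.
import numpy as np

def photometric_residuals(xi, ref_patches, cur_image, project):
    """Intensity differences between reference patches and their reprojections under pose xi."""
    residuals = []
    for patch in ref_patches:
        u, v = project(xi, patch["point_3d"])        # reproject the patch's 3D point
        residuals.append(cur_image[int(v), int(u)] - patch["intensity"])
    return np.asarray(residuals, dtype=float)

def total_cost(xi, xi_prior, weight, ref_patches, cur_image, project):
    r_photo = photometric_residuals(xi, ref_patches, cur_image, project)
    r_prior = np.asarray(xi) - np.asarray(xi_prior)  # deviation from the motion prediction
    return np.sum(r_photo ** 2) + weight * np.sum(r_prior ** 2)

def adaptive_weight(prev_xi, k0=1.0, k1=5.0):
    """Simple example rule: trust the prior more when the previous motion was small."""
    return k0 + k1 * max(0.0, 1.0 - np.linalg.norm(prev_xi))
```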

Analysis of Applicability of Visual SLAM for Indoor Positioning in the Building Construction Site (Visual SLAM의 건설현장 실내 측위 활용성 분석)

  • Kim, Taejin; Park, Jiwon; Lee, Byoungmin; Bae, Kangmin; Yoon, Sebeen; Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.11a / pp.47-48 / 2022
  • Positioning technology that measures the location of a person or object is a key technology for handling locations in a real coordinate system and for converging the real and virtual worlds, as in digital twins, augmented reality, virtual reality, and autonomous driving. When estimating the location of a person or object at an indoor construction site, there are constraints: location information cannot be received from outside, the communication infrastructure is insufficient, and it is difficult to install additional devices. Therefore, this study tested the direct sparse odometry (DSO) algorithm, one of the visual Simultaneous Localization and Mapping (vSLAM) methods that estimate the current location and a map of the surroundings using only image information, at an indoor construction site and analyzed its applicability as an indoor positioning technology. As a result, it was found that both the surrounding map and the current location can be estimated properly even at an indoor construction site, which has relatively few feature points. The results of this study can serve as reference data for researchers working on indoor positioning technology for construction sites.
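The abstract does not state how applicability was quantified; one common way to check an estimated vSLAM/DSO trajectory indoors is the absolute trajectory error (ATE) against a reference path, sketched below with a simple translation-only alignment and hypothetical positions.

```python
# Minimal ATE sketch: RMSE between corresponding positions after removing the mean
# offset. A full evaluation would also align rotation and scale; this is only an
# illustration, not the study's evaluation procedure.
import numpy as np

def absolute_trajectory_error(estimated, reference):
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    offset = reference.mean(axis=0) - estimated.mean(axis=0)
    errors = np.linalg.norm(estimated + offset - reference, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Hypothetical 2D positions (metres) sampled along a corridor.
est = [[0.0, 0.0], [1.0, 0.1], [2.1, 0.0]]
ref = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
print(absolute_trajectory_error(est, ref))
```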

RGB-VO: Visual Odometry using mono RGB (단일 RGB 영상을 이용한 비주얼 오도메트리)

  • Lee, Joosung; Hwang, Sangwon; Kim, Woo Jin; Lee, Sangyoun
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.454-456 / 2018
  • As autonomous driving and robot system technologies advance, research on the related vision algorithms is being actively conducted. The proposed network is a system that predicts visual odometry from single RGB images. The deep learning network is trained and evaluated on the KITTI dataset; two consecutive frames are fed into the network as input, and the output is the rotation and translation of the camera between the two frames. This makes it possible, for example, to recover a vehicle's driving trajectory, and the method can be used in various robot systems.
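A minimal sketch of the kind of network the abstract describes: two consecutive RGB frames stacked along the channel axis, a small CNN encoder, and a regression head producing the relative rotation and translation. Layer sizes, the pose parameterization, and the input resolution are illustrative assumptions, not the paper's architecture.

```python
# Toy two-frame pose regression network (PyTorch). Outputs 6 values per pair:
# 3 rotation parameters (e.g. Euler angles) and 3 translation components.
import torch
import torch.nn as nn

class MonoVONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)     # (B, 6, H, W): stacked frame pair
        features = self.encoder(x).flatten(1)
        return self.head(features)                    # (B, 6) relative pose

# Example with a KITTI-like input resolution (illustrative).
net = MonoVONet()
pose = net(torch.rand(1, 3, 128, 416), torch.rand(1, 3, 128, 416))
```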

Planetary Long-Range Deep 2D Global Localization Using Generative Adversarial Network (생성적 적대 신경망을 이용한 행성의 장거리 2차원 깊이 광역 위치 추정 방법)

  • Ahmed, M. Naguib; Nguyen, Tuan Anh; Islam, Naeem Ul; Kim, Jaewoong; Lee, Sukhan
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.26-30 / 2018
  • Planetary global localization is necessary for long-range rover missions in which communication with the command center is throttled due to the long distance. A number of studies have addressed this problem by matching the rover's surroundings against global digital elevation maps (DEM). Matching with conventional methods, however, is challenging due to artifacts in the DEM-rendered images and/or the rover 2D images caused by the low DEM resolution, illumination variations in the rover images, and small terrain features. In this work, we train a CNN discriminator to match a rover 2D image with DEM-rendered images using a conditional Generative Adversarial Network (cGAN) architecture. We then use this discriminator to search within an uncertainty bound given by the visual odometry (VO) error bound to estimate the rover's optimal location and orientation. We demonstrate the ability of our network to translate a rover image into a DEM-simulated image and match the two using the Devon Island dataset. The experimental results show that the proposed approach achieves ~74% mean average precision.
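A rough sketch of the localization step described in the abstract: render candidate DEM views inside the VO uncertainty bound, score each against the rover image with the trained discriminator, and keep the best pose. `discriminator` and `render_dem_view` are hypothetical placeholders for components the paper trains or derives from the DEM; the grid and heading resolution are arbitrary.

```python
# Grid search over candidate poses within the VO error bound, scored by a (hypothetical)
# cGAN discriminator callable. Only the search loop is sketched here.
import itertools
import numpy as np

def localize(rover_image, vo_estimate, vo_bound, discriminator, render_dem_view, step=5.0):
    """Search x, y, heading within the VO error bound for the best discriminator score."""
    best_score, best_pose = -np.inf, None
    xs = np.arange(vo_estimate[0] - vo_bound, vo_estimate[0] + vo_bound, step)
    ys = np.arange(vo_estimate[1] - vo_bound, vo_estimate[1] + vo_bound, step)
    headings = np.arange(0.0, 360.0, 15.0)
    for x, y, yaw in itertools.product(xs, ys, headings):
        rendered = render_dem_view(x, y, yaw)           # DEM view at this candidate pose
        score = discriminator(rover_image, rendered)    # match score from the cGAN discriminator
        if score > best_score:
            best_score, best_pose = score, (x, y, yaw)
    return best_pose, best_score
```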