• Title/Abstract/Keywords: visual odometry

Search results: 35 papers (processing time 0.018 s)

열화상 이미지 다중 채널 재매핑을 통한 단일 열화상 이미지 깊이 추정 향상 (Enhancing Single Thermal Image Depth Estimation via Multi-Channel Remapping for Thermal Images)

  • 김정윤;전명환;김아영
    • 로봇학회논문지 / Vol. 17, No. 3 / pp.314-321 / 2022
  • Depth information used in SLAM and visual odometry is essential in robotics. It is often obtained from sensors or learned by networks. While learning-based methods have gained popularity, they are mostly limited to RGB images, which fail in visually degraded environments. Thermal cameras are in the spotlight as a way to solve this problem. Unlike RGB images, thermal images perceive the environment reliably regardless of illumination variance, but they lack contrast and texture. This low contrast prevents an algorithm from effectively learning the underlying scene details. To tackle these challenges, we propose multi-channel remapping for contrast enhancement. Our method allows a learning-based depth prediction model to produce accurate depth predictions even in low-light conditions. We validate the feasibility of the approach and show that our multi-channel remapping method outperforms existing methods both visually and quantitatively on our dataset.
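The core idea of the abstract above — spreading a low-contrast single-channel thermal image across several contrast-enhanced channels before feeding it to a depth network — can be sketched as follows. The specific remapping functions (min-max normalization, histogram equalization, gamma correction) are illustrative stand-ins chosen here; the paper's exact channel definitions are not given in the abstract.

```python
import numpy as np

def remap_thermal(thermal, n_bins=256):
    """Remap a single-channel thermal image (raw radiometric values)
    into a 3-channel image with enhanced contrast.

    The three remappings below are assumptions for illustration, not
    the paper's exact functions.
    """
    t = thermal.astype(np.float64)
    # Channel 0: global min-max normalization to [0, 1].
    c0 = (t - t.min()) / (t.max() - t.min() + 1e-12)
    # Channel 1: histogram equalization, which spreads out the narrow
    # intensity range typical of thermal images.
    hist, edges = np.histogram(c0, bins=n_bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    c1 = np.interp(c0.ravel(), edges[:-1], cdf).reshape(c0.shape)
    # Channel 2: gamma correction to brighten dark regions.
    c2 = c0 ** 0.5
    return np.stack([c0, c1, c2], axis=-1)
```

A depth network that expects 3-channel RGB input can then consume the remapped image directly, which is presumably what makes the approach compatible with existing learning-based pipelines.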

Robust Features and Accurate Inliers Detection Framework: Application to Stereo Ego-motion Estimation

  • MIN, Haigen;ZHAO, Xiangmo;XU, Zhigang;ZHANG, Licheng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 1 / pp.302-320 / 2017
  • In this paper, an innovative robust feature detection and matching strategy for visual odometry based on stereo image sequences is proposed. First, a sparse multiscale 2-D local invariant feature detection and description algorithm, AKAZE, is adopted to extract interest points, and a robust matching strategy is introduced to match the AKAZE descriptors. To remove outliers, which are mismatched features or features on dynamic objects, an improved random sample consensus (RANSAC) outlier rejection scheme is presented, allowing the proposed method to be applied in dynamic environments. Geometric constraints are then incorporated into the motion estimation without time-consuming 3-dimensional scene reconstruction. Finally, an iterated sigma-point Kalman filter is adopted to refine the motion estimates. The presented ego-motion scheme is applied to benchmark datasets and compared with state-of-the-art approaches on data captured on campus in a considerably cluttered environment, demonstrating its superiority.
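The hypothesize-and-verify loop at the heart of any RANSAC outlier rejection scheme, including the improved one described above, can be sketched on a toy problem. This minimal version estimates a 2-D translation between matched point sets and flags inliers; it is a stand-in for the paper's scheme (which additionally targets features on dynamic objects), not a reproduction of it.

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iters=200, thresh=1.0, seed=0):
    """Minimal RANSAC: estimate a 2-D translation between matched
    point sets pts_a -> pts_b and flag inlier correspondences."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts_a))      # one match suffices for a translation hypothesis
        t = pts_b[i] - pts_a[i]           # hypothesize
        resid = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = resid < thresh          # verify against all matches
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the model on the consensus set only.
    t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t, best_inliers
```

For stereo ego-motion the hypothesized model is a 6-DoF pose rather than a translation, and the verification step uses reprojection residuals, but the loop structure is the same.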

3차원 직선을 이용한 카메라 모션 추정 (Motion Estimation Using 3-D Straight Lines)

  • 이진한;장국현;서일홍
    • 로봇학회논문지 / Vol. 11, No. 4 / pp.300-309 / 2016
  • This paper proposes a method for motion estimation of consecutive cameras using 3-D straight lines. The motion estimation algorithm uses two non-parallel 3-D line correspondences to quickly establish an initial guess for the relative pose of adjacent frames, which requires fewer correspondences than current approaches, which need three when using 3-D points or 3-D planes. The estimated motion is further refined by a nonlinear optimization technique over the inlier correspondences for higher accuracy. Since there is no dominant line representation in 3-D space, we simulate two line representations, which can be regarded as the most widely adopted in the field, and identify the better choice from the simulation results. We also propose a simple but effective 3-D line fitting algorithm that exploits the fact that the variance arises along the projection direction, so the problem can be reduced to a 2-D fitting one. We provide experimental results for the proposed motion estimation system, comparing it with state-of-the-art algorithms on an open benchmark dataset.

스마트폰 카메라의 이동 위치 추정 기술 연구 (A Study on Estimating Smartphone Camera Position)

  • 오종택;윤소정
    • 한국인터넷방송통신학회논문지 / Vol. 21, No. 6 / pp.99-104 / 2021
  • Estimating the trajectory of a moving monocular camera such as a smartphone and reconstructing the surrounding 3-D scene is a core technology not only for indoor positioning but also for metaverse services. The most important step is estimating the coordinates of the moving camera center, and this paper proposes a new algorithm that estimates the travel distance geometrically. The coordinates of 3-D object points are obtained from the first and second images, a translation vector is computed from the matched feature points of the first and third images, and the origin of the third camera is then moved along this vector to find the position at which the 3-D object points coincide with the feature points of the third image. The feasibility and accuracy of the method are verified on real sequential image data.
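The two stages described above — triangulating 3-D object points from the first two views, then sliding the third camera's origin along a translation direction until the reprojection matches the third image's features — can be sketched as follows. Identity rotation and identity intrinsics are assumed for brevity; the search over candidate positions is a simple grid here, which is an illustrative simplification rather than the paper's procedure.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2-D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def locate_third_camera(X, x3, direction, scales):
    """Slide the third camera's origin along `direction` and keep the
    position whose reprojection of the 3-D points X best matches the
    observed features x3 (R = I, K = I assumed)."""
    def reproj_error(C):
        p = X - C
        return np.linalg.norm(p[:, :2] / p[:, 2:3] - x3)
    errs = [reproj_error(s * direction) for s in scales]
    return scales[int(np.argmin(errs))]
```

A real implementation would refine the best grid position with a continuous optimizer and use the calibrated intrinsics of the smartphone camera.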

생성적 적대 신경망을 이용한 행성의 장거리 2차원 깊이 광역 위치 추정 방법 (Planetary Long-Range Deep 2D Global Localization Using Generative Adversarial Network)

  • 아하메드 엠.나기브;투안 아인 뉴엔;나임 울 이슬람;김재웅;이석한
    • 로봇학회논문지 / Vol. 13, No. 1 / pp.26-30 / 2018
  • Planetary global localization is necessary for long-range rover missions, in which communication with the command-center operator is throttled due to the long distance. A number of studies have addressed this problem by matching the rover's surroundings against global digital elevation maps (DEM). Matching with conventional methods, however, is challenging due to artifacts in the DEM-rendered images and/or the rover's 2-D images, caused by the low DEM resolution, illumination variations in the rover images, and small terrain features. In this work, we train a CNN discriminator to match a rover 2-D image with DEM-rendered images using a conditional Generative Adversarial Network (cGAN) architecture. We then use this discriminator to search within an uncertainty bound given by the visual odometry (VO) error to estimate the rover's optimal location and orientation. We demonstrate the capability of our network to learn to translate a rover image into a DEM-simulated image and to match the two using the Devon Island dataset. The experimental results show that our proposed approach achieves ~74% mean average precision.
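The search stage of the pipeline above — scoring candidate poses inside the VO uncertainty bound and keeping the best — can be sketched independently of the learned discriminator. Here `score_fn` is a stand-in for the trained cGAN discriminator comparing the rover image against a DEM rendering at the candidate pose; the grid search over the error bound is the part being illustrated, and orientation search is omitted.

```python
import numpy as np

def localize_in_bound(score_fn, vo_estimate, error_bound, step):
    """Grid-search candidate 2-D positions within +/- error_bound of
    the VO estimate and return the highest-scoring one. score_fn maps
    a candidate position to a match score (higher is better)."""
    xs = np.arange(vo_estimate[0] - error_bound,
                   vo_estimate[0] + error_bound + 1e-9, step)
    ys = np.arange(vo_estimate[1] - error_bound,
                   vo_estimate[1] + error_bound + 1e-9, step)
    best, best_score = None, -np.inf
    for x in xs:
        for y in ys:
            s = score_fn(np.array([x, y]))
            if s > best_score:
                best, best_score = np.array([x, y]), s
    return best
```

The VO error bound matters because it shrinks the search space from the whole map to a small neighborhood, which is what makes exhaustive scoring with a discriminator tractable.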