• Title/Summary/Keyword: visual odometry

Odometry Using Strong Features of Recognized Text (인식된 문자의 강한 특징점을 활용하는 측위시스템)

  • Song, Do-hoon; Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.219-222 / 2021
  • In this paper, Optical Character Recognition (OCR) is applied within a Visual-Inertial Odometry (VIO) system to locate text regions, and the position and feature points of each text region are stored so that they can be compared when the text is recognized again by the localization system. First, we propose a method that detects text in the video of a camera moving in real time and stores the location where the text was recognized, together with its feature points, using the relative pose of the camera. We also propose a method for determining whether stored text has been re-recognized when it is encountered again. Because this approach relies on text present in the scene rather than on artificial markers or pre-trained objects, it can be used in any general space where text exists.

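A minimal sketch of the idea in the abstract above, assuming pytesseract for OCR and OpenCV ORB features; the `TextLandmark` structure and the matching threshold are illustrative assumptions, not the paper's actual pipeline:

```python
# Detect text regions with OCR, store their feature descriptors together
# with the camera pose, and later check whether a newly read text region
# matches a stored landmark (re-recognition).
import cv2
import numpy as np
import pytesseract
from dataclasses import dataclass

@dataclass
class TextLandmark:
    text: str
    pose: np.ndarray          # 4x4 camera pose when first observed
    descriptors: np.ndarray   # ORB descriptors inside the text box

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
landmarks: list[TextLandmark] = []

def observe(frame_gray, camera_pose):
    data = pytesseract.image_to_data(frame_gray, output_type=pytesseract.Output.DICT)
    for text, x, y, w, h in zip(data["text"], data["left"], data["top"],
                                data["width"], data["height"]):
        if not text.strip():
            continue
        roi = frame_gray[y:y + h, x:x + w]
        _, desc = orb.detectAndCompute(roi, None)
        if desc is None:
            continue
        # Store the text as a new landmark only if it is not re-recognized.
        if find_rerecognized(text, desc) is None:
            landmarks.append(TextLandmark(text, camera_pose.copy(), desc))

def find_rerecognized(text, desc, min_matches=10):
    # A stored landmark counts as re-recognized when the same string is
    # read and enough descriptors agree (the threshold is an assumption).
    for lm in landmarks:
        if lm.text == text and len(bf.match(lm.descriptors, desc)) >= min_matches:
            return lm
    return None
```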

Image Mosaicking Considering Pairwise Registrability in Structure Inspection with Underwater Robots (수중 로봇을 이용한 구조물 검사에서의 상호 정합도를 고려한 영상 모자이킹)

  • Hong, Seonghun
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.238-244 / 2021
  • Image mosaicking is a common and useful technique for visualizing a global map by stitching together a large number of local images obtained from visual surveys in underwater environments. In particular, visual inspection of underwater structures using underwater robots is a promising application of image mosaicking. Feature-based pairwise image registration is employed in most image mosaicking algorithms to estimate visual odometry between compared images. However, visual features are not always uniformly distributed on the surface of underwater structures, so the performance of image registration can vary significantly, resulting in unnecessary computation in image matching for poorly conditioned image pairs. This study proposes a pairwise registrability measure to select informative image pairs and improve the overall computational efficiency of underwater image mosaicking. The validity and effectiveness of the mosaicking algorithm that takes pairwise registrability into account are demonstrated on an experimental dataset obtained with a full-scale ship in a real sea environment.
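
The abstract does not define the registrability measure itself, so the sketch below uses a plausible stand-in: the RANSAC inlier ratio of ORB matches between a candidate pair, used as a gate before full registration.

```python
# Score a candidate image pair before running full registration, and skip
# pairs that are unlikely to register well.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def registrability(img_a, img_b, min_matches=20):
    kps_a, des_a = orb.detectAndCompute(img_a, None)
    kps_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matches = bf.match(des_a, des_b)
    if len(matches) < min_matches:
        return 0.0
    src = np.float32([kps_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kps_b[m.trainIdx].pt for m in matches])
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if inlier_mask is None:
        return 0.0
    return float(inlier_mask.sum()) / len(matches)

# Only spend full registration effort on promising pairs (threshold assumed):
# if registrability(img_a, img_b) > 0.4: run_full_registration(img_a, img_b)
```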

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min; Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.4 / pp.349-355 / 2015
  • Underwater robots generally perform tasks better than humans under certain underwater constraints such as high pressure and limited light. To properly perform diagnosis in an underwater environment using a remotely operated vehicle (ROV), it is important that the vehicle autonomously maintain its own position and orientation in order to avoid additional control effort. In this paper, we propose an efficient method to assist the operation of an ROV used for the diagnosis of underwater structures under various disturbances. The conventional AHRS-based bearing estimation system does not work well because of incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines the camera and the AHRS for estimating the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. Thus, we suggest an efficient method for fusing the vision sensor and the AHRS with a criterion based on the amount of blur in the image. To evaluate the amount of blur, we adopt two methods: one quantifies high-frequency components using power spectral density analysis of the 2D discrete-Fourier-transformed image, and the other identifies the blur parameter based on cepstrum analysis. We evaluate the robustness of the visual odometry and blur estimation methods under changes in lighting and distance, and the experiments verify that the cepstrum-based blur estimation method performs better.
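
A sketch of the first blur criterion named in the abstract: quantifying high-frequency content from the power spectral density of the image's 2D DFT. The cutoff radius and the fusion threshold are assumptions; the paper's cepstrum-based variant is not reproduced here.

```python
# Low high-frequency energy ratio -> strongly blurred image.
import numpy as np

def high_freq_ratio(gray, cutoff=0.25):
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    psd = np.abs(f) ** 2                     # power spectral density
    h, w = psd.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return psd[r > cutoff].sum() / psd.sum() # fraction of high-freq energy

# Fusion rule sketch: trust the vision sensor only when the image is sharp,
# otherwise fall back on the AHRS estimate (threshold is an assumption).
# pose = vision_pose if high_freq_ratio(img) > 0.05 else ahrs_pose
```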

Method to Improve Localization and Mapping Accuracy on the Urban Road Using GPS, Monocular Camera and HD Map (GPS와 단안카메라, HD Map을 이용한 도심 도로상에서의 위치측정 및 맵핑 정확도 향상 방안)

  • Kim, Young-Hun; Kim, Jae-Myeong; Kim, Gi-Chang; Choi, Yun-Soo
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1095-1109 / 2021
  • The technology used to recognize the location and surroundings of an autonomous vehicle is called SLAM. SLAM stands for Simultaneous Localization and Mapping and, having originated in robotics research, has recently been actively applied to autonomous vehicles. Expensive GPS, INS, LiDAR, RADAR, and wheel odometry sensors allow precise self-localization and mapping at the centimeter level. However, if similar accuracy can be achieved with cheaper cameras and GPS data, it will help advance the era of autonomous driving. In this paper, we present a method that fuses a monocular camera with RTK-enabled GPS data to perform localization and mapping on urban roads with an RMSE of 33.7 cm.
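
For concreteness, the accuracy figure quoted in the abstract is a root-mean-square error over the trajectory; a minimal sketch of that metric, with placeholder position arrays:

```python
# RMSE between estimated 2D positions and ground truth, in meters.
import numpy as np

def localization_rmse(estimated_xy, ground_truth_xy):
    err = np.linalg.norm(estimated_xy - ground_truth_xy, axis=1)
    return np.sqrt(np.mean(err ** 2))

# A return value of 0.337 corresponds to the 33.7 cm reported in the paper.
```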

Performance Evaluation of a Compressed-State Constraint Kalman Filter for a Visual/Inertial/GNSS Navigation System

  • Yu Dam Lee; Taek Geun Lee; Hyung Keun Lee
    • Journal of Positioning, Navigation, and Timing / v.12 no.2 / pp.129-140 / 2023
  • Autonomous driving systems are likely to be operated in various complex environments. However, the well-known integrated Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS), currently the major source of absolute position information, still has difficulty with accurate positioning in harsh signal environments such as urban canyons. To overcome these difficulties, integrated Visual/Inertial/GNSS (VIG) navigation systems have been studied extensively. Recently, a Compressed-State Constraint Kalman Filter (CSCKF)-based VIG navigation system (CSCKF-VIG) using a monocular camera, an Inertial Measurement Unit (IMU), and GNSS receivers has been studied with the aim of providing robust and accurate position information in urban areas. Based on time-propagation measurement fusion theory, this filter-based navigation system does not require camera states in the system state. This paper presents a performance evaluation of the CSCKF-VIG system against other conventional navigation systems. First, the CSCKF-VIG is introduced in detail and compared with the well-known Multi-State Constraint Kalman Filter (MSCKF). The CSCKF-VIG system is then evaluated in a field experiment under different GNSS availability conditions. The results show improved accuracy in GNSS-degraded environments compared with the conventional systems.
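
To make the filtering vocabulary concrete, here is a generic Kalman filter predict/update skeleton. It is not the CSCKF: the compressed time-propagation fusion and the VIG measurement models of the paper are not reproduced, and all matrices below are placeholders.

```python
import numpy as np

def predict(x, P, F, Q):
    # Time propagation of state and covariance (e.g., INS mechanization).
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # Measurement update (e.g., GNSS or visual constraints).
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```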

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa; Kim, Hyongjin; Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through the 3D-RANSAC (RANdom SAmple Consensus) algorithm with 2D image features and depth data. To speed up feature extraction, parallel computation is performed with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud-based map.
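
A sketch of the odometry step the abstract describes: match 2D features between consecutive RGB frames, lift the previous frame's keypoints to 3D using the depth image, and estimate 6-DOF motion with RANSAC (here via OpenCV's `solvePnPRansac`). The camera intrinsics `K` and the depth units are placeholders, and the paper's GPU feature extraction is omitted.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def rgbd_odometry(gray_prev, depth_prev, gray_cur, K):
    kp1, d1 = orb.detectAndCompute(gray_prev, None)
    kp2, d2 = orb.detectAndCompute(gray_cur, None)
    pts3d, pts2d = [], []
    for m in bf.match(d1, d2):
        u, v = kp1[m.queryIdx].pt
        z = depth_prev[int(v), int(u)]
        if z <= 0:
            continue  # no valid depth at this pixel
        # Back-project the previous keypoint to a 3D point.
        x = (u - K[0, 2]) * z / K[0, 0]
        y = (v - K[1, 2]) * z / K[1, 1]
        pts3d.append((x, y, z))
        pts2d.append(kp2[m.trainIdx].pt)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts2d), K, None)
    return rvec, tvec, inliers  # relative 6-DOF motion between frames
```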

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps (천장 조명의 위치와 방위 정보를 이용한 모노카메라와 오도메트리 정보 기반의 SLAM)

  • Hwang, Seo-Yeon; Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.164-170 / 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method that uses both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamps are usually placed some distance apart in indoor environments, they can be robustly detected and used as reliable landmarks. We use both the position and orientation of a lamp feature to accurately estimate the robot pose; the orientation is obtained by computing the principal axis from the pixel distribution of the lamp area. Both corner and lamp features are used as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
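
A sketch of the lamp-orientation step: the principal axis of a detected lamp region computed from its pixel distribution, here via the eigenvectors of the pixel covariance. The lamp detection itself (e.g., thresholding bright ceiling regions) is assumed to have already produced a binary mask.

```python
import numpy as np

def lamp_orientation(mask):
    # mask: boolean image, True on pixels belonging to one lamp blob.
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    pts -= pts.mean(axis=0)                  # center the pixel cloud
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]   # principal axis direction
    return np.arctan2(major[1], major[0])    # orientation in radians
```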

Real-Time Precision Vehicle Localization Using Numerical Maps

  • Han, Seung-Jun; Choi, Jeongdan
    • ETRI Journal / v.36 no.6 / pp.968-978 / 2014
  • Autonomous vehicle technology based on information technology and software will lead the automotive industry in the near future. Vehicle localization is a core technology for developing autonomous vehicles, providing location information for control and decision-making. This paper proposes an effective vision-based localization technology for autonomous vehicles. In particular, it makes use of numerical maps that are widely used in the field of geographic information systems and have already been built in advance. Optimal vehicle ego-motion estimation and road-marking feature extraction techniques are adopted and then combined by an extended Kalman filter and a particle filter to form the localization system. The implementation results are remarkable: an 18 ms mean processing time and a 10 cm location error. In addition, autonomous driving and parking were successfully completed with an unmanned vehicle within a 300 m × 500 m space.
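
A skeleton of the particle-filter half of such a pipeline: propagate particles with the ego-motion estimate, weight them by how well map road markings match the extracted image features, then resample. The noise scales and the likelihood function are placeholders, and the paper's EKF combination is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, ego_motion, measurement_likelihood):
    # particles: (N, 3) array of [x, y, heading]; ego_motion: [dx, dy, dtheta]
    noise = rng.normal(scale=[0.05, 0.05, 0.01], size=particles.shape)
    particles = particles + ego_motion + noise
    weights = weights * measurement_likelihood(particles)
    weights /= weights.sum()
    # Systematic resampling keeps the particle set focused on likely poses.
    idx = np.searchsorted(np.cumsum(weights),
                          (rng.random() + np.arange(len(weights))) / len(weights))
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```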

Robust Real-Time Visual Odometry Estimation from RGB-D Images (RGB-D 영상을 이용한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee; Kim, Hye-Suk; Kim, Dong-Ha; Kim, In-Cheol
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.825-828 / 2014
  • In this paper, we propose a visual odometry system that efficiently computes the real-time ego-motion of a camera from RGB-D input images, in order to track the pose of a camera moving with six degrees of freedom in 3D space. To exploit the rich information of the color and depth images while keeping the real-time computational load low, the proposed system uses a sparse, feature-based odometry computation. In addition, to improve accuracy, the system is designed to iterate an additional inlier-set refinement over the matched feature points together with an odometry refinement that uses them. Various performance experiments on the TUM benchmark dataset confirm the high performance of the proposed visual odometry system.
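
A sketch of the iterative refinement the abstract describes: estimate the rigid motion from matched 3D feature points (Kabsch/SVD), re-select inliers by residual, and repeat. The iteration count and residual threshold are assumptions.

```python
import numpy as np

def kabsch(P, Q):
    # Least-squares rotation R and translation t with Q ~ R @ P + t.
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cq - R @ cp

def refine_odometry(P, Q, iters=5, thresh=0.05):
    # P, Q: (N, 3) matched 3D points from consecutive RGB-D frames.
    inl = np.arange(len(P))
    for _ in range(iters):
        R, t = kabsch(P[inl], Q[inl])
        resid = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inl = np.nonzero(resid < thresh)[0]   # refined inlier set
        if len(inl) < 3:
            break                             # too few inliers to continue
    return R, t, inl
```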

RGB-VO: Visual Odometry using mono RGB (단일 RGB 영상을 이용한 비주얼 오도메트리)

  • Lee, Joosung; Hwang, Sangwon; Kim, Woo Jin; Lee, Sangyoun
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.454-456 / 2018
  • As autonomous driving and robotic systems advance, related vision algorithms are being actively studied. The proposed network is a system that predicts visual odometry from monocular images. The deep-learning network is trained and evaluated on the KITTI dataset; its input is two consecutive frames, and its output is the camera rotation and translation between the two frames. This makes it possible, for example, to recover a vehicle's driving path, and the method can be applied in various robotic systems.
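
A skeleton of the kind of network the abstract describes: two consecutive RGB frames stacked into a 6-channel input, a small CNN encoder, and a 6-D output (3 rotation and 3 translation parameters). The architecture and training setup are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class MonoVONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose = nn.Linear(128, 6)   # [rx, ry, rz, tx, ty, tz]

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)  # (B, 6, H, W)
        return self.pose(self.encoder(x))

# Training on KITTI would regress this output against ground-truth relative
# poses, e.g. with an L2 loss weighting rotation and translation terms.
```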