• Title/Summary/Keyword: Odometry

CALOS : Camera And Laser for Odometry Sensing (CALOS : 주행계 추정을 위한 카메라와 레이저 융합)

  • Bok, Yun-Su; Hwang, Young-Bae; Kweon, In-So
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.180-187 / 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. The 2D laser sensor provides accurate depth information along a plane, but not the whole 3D structure. In contrast, the CCD cameras provide a projected image of the whole 3D scene, but not its depth. To overcome these limitations, we combine the two types of sensors, the laser sensor and the CCD cameras, and develop a motion estimation scheme appropriate for this sensor system. In the proposed scheme, the motion between two frames is estimated using three points among the scan data and their corresponding image points, and then refined by non-linear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system can be a practical solution for motion estimation as well as 3D reconstruction.
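
As a rough illustration of the three-point scheme summarized above (not the authors' implementation), the sketch below refines a camera pose from a few laser points with known 3D coordinates and their observed image projections by minimizing reprojection error with nonlinear least squares; the intrinsics, point coordinates, and initial guess are invented placeholders.

```python
# Illustrative sketch: refine a camera pose from laser points with known 3D
# coordinates and their observed image projections (pinhole model assumed).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[700.0, 0.0, 320.0],      # placeholder pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(pose, points_3d):
    """Project 3D points into the image for pose = [rx, ry, rz, tx, ty, tz]."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = points_3d @ R.T + pose[3:]     # transform into the camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]        # perspective division

def reprojection_residual(pose, points_3d, observed_uv):
    return (project(pose, points_3d) - observed_uv).ravel()

# Three laser-scan points (reference frame) and their image observations in the
# current frame; the values here are made up for illustration only.
laser_pts = np.array([[0.5, 0.1, 3.0], [-0.4, 0.0, 2.5], [0.2, -0.2, 4.0]])
image_pts = np.array([[430.0, 255.0], [215.0, 238.0], [350.0, 212.0]])

pose0 = np.zeros(6)                      # initial guess (e.g., from odometry)
result = least_squares(reprojection_residual, pose0, args=(laser_pts, image_pts))
print("refined pose (rotvec, translation):", result.x)
```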

Outdoor Localization of a Mobile Robot Using Weighted GPS Data and Map Information (가중화된 GPS 정보와 지도정보를 활용한 실외 이동로봇의 위치추정)

  • Bae, Ji-Hun; Song, Jae-Bok; Choi, Ji-Hoon
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.292-300 / 2011
  • The global positioning system (GPS) is widely used to measure the position of a vehicle. However, GPS accuracy can be severely affected by the surrounding environment. To deal with this problem, GPS and odometry data can be combined using an extended Kalman filter. For stable navigation of an outdoor mobile robot using GPS, this paper proposes two methods to evaluate the reliability of the GPS data. The first method is to calculate the standard deviation of the GPS data and reflect it in the measurement uncertainty. The second method is to match the GPS data to a traversability map obtained by classifying outdoor terrain data; this matching determines whether or not to use the GPS data. The experimental results show that the proposed methods can enhance the performance of GPS-based outdoor localization.
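
A minimal sketch of the two reliability checks described in this abstract, not the authors' implementation: the scatter of recent GPS fixes inflates the EKF measurement covariance, and a fix is rejected when it falls on a non-traversable map cell. The grid resolution, window size, and noise values are assumed placeholders.

```python
# Illustrative sketch: weight or reject GPS fixes before an EKF update.
import numpy as np
from collections import deque

class GpsGate:
    def __init__(self, traversability_map, resolution, window=10):
        self.map = traversability_map          # 2D array, True = traversable
        self.res = resolution                  # meters per cell (placeholder)
        self.recent = deque(maxlen=window)     # recent GPS fixes (x, y)

    def measurement_noise(self, fix):
        """Scale the GPS measurement covariance by the recent scatter."""
        self.recent.append(fix)
        pts = np.array(self.recent)
        std = pts.std(axis=0) if len(pts) > 1 else np.array([1.0, 1.0])
        base_sigma = 2.0                       # nominal GPS sigma in meters
        return np.diag((base_sigma + std) ** 2)

    def is_consistent(self, fix):
        """Reject a fix that lands on a non-traversable map cell."""
        col, row = int(fix[0] / self.res), int(fix[1] / self.res)
        if not (0 <= row < self.map.shape[0] and 0 <= col < self.map.shape[1]):
            return False
        return bool(self.map[row, col])

# Usage: only run the EKF correction step when the fix looks trustworthy.
gate = GpsGate(np.ones((200, 200), dtype=bool), resolution=0.5)
fix = np.array([12.3, 45.6])
if gate.is_consistent(fix):
    R = gate.measurement_noise(fix)            # pass R to the EKF update
```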

Real-Time Precision Vehicle Localization Using Numerical Maps

  • Han, Seung-Jun; Choi, Jeongdan
    • ETRI Journal / v.36 no.6 / pp.968-978 / 2014
  • Autonomous vehicle technology based on information technology and software will lead the automotive industry in the near future. Vehicle localization is a core technology for developing autonomous vehicles and provides location information for control and decision making. This paper proposes an effective vision-based localization technology for autonomous vehicles. In particular, the proposed technology makes use of numerical maps that are widely used in the field of geographic information systems and have already been built in advance. Optimal vehicle ego-motion estimation and road-marking feature extraction techniques are adopted and then combined by an extended Kalman filter and a particle filter to form the localization system. The implementation shows remarkable results: an 18 ms mean processing time and a 10 cm location error. In addition, autonomous driving and parking are successfully completed with an unmanned vehicle within a 300 m × 500 m space.
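
The fusion of vision-based ego-motion with map-referenced road-marking observations can be sketched with a toy particle filter; this is only a schematic of the general approach, not the paper's EKF/particle-filter design, and the motion noise, map, and scoring function are invented for illustration.

```python
# Illustrative sketch: particle-filter localization that propagates particles
# with a visual ego-motion estimate and weights them by a road-marking score.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, ego_motion, noise=(0.05, 0.05, 0.01)):
    """Move every particle (x, y, yaw) by the ego-motion estimate plus noise."""
    dx, dy, dyaw = ego_motion
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * dx - s * dy + rng.normal(0, noise[0], len(particles))
    particles[:, 1] += s * dx + c * dy + rng.normal(0, noise[1], len(particles))
    particles[:, 2] += dyaw + rng.normal(0, noise[2], len(particles))
    return particles

def update(particles, marking_score):
    """Weight particles by how well extracted road markings match the map."""
    weights = np.array([marking_score(p) for p in particles])
    weights += 1e-12
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]                  # simple multinomial resampling

# Placeholder score: higher when the particle is near a hypothetical lane line.
lane_y = 3.5
score = lambda p: np.exp(-0.5 * ((p[1] - lane_y) / 0.3) ** 2)

particles = rng.normal([0.0, 3.0, 0.0], [1.0, 1.0, 0.1], size=(500, 3))
particles = predict(particles, ego_motion=(1.0, 0.0, 0.02))
particles = update(particles, score)
print("pose estimate:", particles.mean(axis=0))
```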

Robust Optical Odometry Using Three Optical Mice (3개의 광 마우스를 이용한 강건한 광학식 거리주행계)

  • Kim, Sung-Bok; Kim, Hyung-Gi
    • Journal of Institute of Control, Robotics and Systems / v.12 no.9 / pp.861-867 / 2006
  • This paper presents a robust mobile robot localization method that exploits the redundant motion information acquired from three optical mice installed at the bottom of a mobile robot in a regular triangular arrangement. First, we briefly introduce a low-cost optical motion sensor, the HDNS-2000, and a commercial device driver development tool, WinDriver, used in this research. Second, we explain the basic principle of mobile robot localization using the motion information from three optical mice, and propose a least-squares based localization algorithm that is robust to noisy measurements and partial malfunction of the optical mice. Third, we describe the development of an experimental optical odometer using three PC optical mice and a user-friendly graphic monitoring program. Fourth, simulations and experiments are performed to demonstrate the validity of the proposed localization method and the operation of the developed optical odometer. Finally, along with the conclusion, we suggest some future work, including calibration of the installation parameters, remodelling of the optical mice, and adoption of a high-performance motion sensor.
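
The least-squares idea behind this paper can be illustrated directly: each mouse measures the local velocity induced by the robot's planar motion, giving an overdetermined linear system in (vx, vy, ω). The sketch below assumes the mouse axes are aligned with the robot frame; the mounting positions and readings are placeholders.

```python
# Illustrative sketch: recover robot velocity (vx, vy, omega) from the readings
# of three optical mice by linear least squares.
import numpy as np

# Mouse mounting positions (x_i, y_i) in the robot frame, regular triangle.
mice = np.array([[0.10, 0.00], [-0.05, 0.0866], [-0.05, -0.0866]])

def build_system(mouse_positions):
    """Each mouse measures (vx - omega * y_i, vy + omega * x_i)."""
    rows = []
    for x, y in mouse_positions:
        rows.append([1.0, 0.0, -y])
        rows.append([0.0, 1.0,  x])
    return np.array(rows)

A = build_system(mice)                    # 6 x 3 measurement matrix
readings = np.array([0.20, 0.05, 0.19, 0.06, 0.21, 0.04])  # stacked (vx_i, vy_i)

# The least-squares solution tolerates noisy, redundant measurements.
velocity, residuals, rank, _ = np.linalg.lstsq(A, readings, rcond=None)
vx, vy, omega = velocity
print(f"vx={vx:.3f} m/s, vy={vy:.3f} m/s, omega={omega:.3f} rad/s")
```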

Artificial Landmark based Pose-Graph SLAM for AGVs in Factory Environments (공장환경에서 AGV를 위한 인공표식 기반의 포즈그래프 SLAM)

  • Heo, Hwan; Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.10 no.2 / pp.112-118 / 2015
  • This paper proposes a pose-graph based SLAM method using an upward-looking camera and artificial landmarks for AGVs in factory environments. The proposed method provides a way to acquire the camera extrinsic matrix and improves the accuracy of feature observation using a low-cost camera. SLAM is conducted by optimizing the AGV's explored path using artificial landmarks installed on the ceiling at various locations. As the AGV explores, pose nodes are added at fixed odometry-based distance intervals, and landmark nodes are registered when the AGV recognizes the fiducial markers. The resulting graph network is optimized with the g2o optimization tool so that the error accumulated due to wheel slip is minimized. The experiment shows that the proposed method is robust for SLAM in real factory environments.
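
A toy sketch of the pose-graph structure described above: pose nodes connected by odometry edges plus a loop-closure edge, optimized as a nonlinear least-squares problem. The paper uses the g2o tool; this illustration uses SciPy instead, and the edge measurements and initial guess are invented.

```python
# Toy pose-graph optimization (SciPy stands in for g2o in this sketch).
import numpy as np
from scipy.optimize import least_squares

def relative_pose(pi, pj):
    """Pose of j expressed in the frame of i, for poses (x, y, yaw)."""
    dx, dy = pj[0] - pi[0], pj[1] - pi[1]
    c, s = np.cos(pi[2]), np.sin(pi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, pj[2] - pi[2]])

def residuals(flat_poses, edges):
    poses = flat_poses.reshape(-1, 3)
    res = [poses[0]]                          # anchor the first pose at the origin
    for i, j, z in edges:
        r = relative_pose(poses[i], poses[j]) - z
        r[2] = np.arctan2(np.sin(r[2]), np.cos(r[2]))   # wrap the angle term
        res.append(r)
    return np.concatenate(res)

# Odometry edges around a square plus a loop-closure edge (placeholder values).
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (3, 0, np.array([1.0, 0.0, np.pi / 2])),  # loop closure back to the start
]
# Initial guess: rough square corners perturbed by noise (as if from odometry).
initial = (np.array([[0, 0, 0], [1, 0, np.pi / 2], [1, 1, np.pi], [0, 1, -np.pi / 2]],
                    dtype=float)
           + np.random.default_rng(1).normal(0, 0.1, (4, 3))).ravel()
solution = least_squares(residuals, initial, args=(edges,)).x.reshape(-1, 3)
print(solution.round(2))
```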

Robust Real-Time Visual Odometry Estimation from RGB-D Images (RGB-D 영상을 이용한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee; Kim, Hye-Suk; Kim, Dong-Ha; Kim, In-Cheol
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.825-828 / 2014
  • In this paper, we propose a visual odometry system that can efficiently compute the camera's trajectory in real time from RGB-D input images, in order to track the real-time pose of a camera moving with six degrees of freedom in 3D space. To make full use of the rich information in both the color and depth images while keeping the computation real-time, the proposed visual odometry uses a sparse, feature-based odometry computation. In addition, to improve accuracy, the system is designed to iterate an additional inlier-set refinement step over the matched feature points and an odometry refinement step that uses those inliers. Various performance evaluation experiments on the TUM benchmark datasets confirm the high performance of the proposed visual odometry system.
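
The alternating inlier-set and odometry refinement described above can be sketched as follows, assuming the RGB-D features have already been matched and back-projected to 3D; this is not the authors' code, and the data, thresholds, and iteration count are placeholders.

```python
# Illustrative sketch: estimate a rigid motion between two sets of matched 3D
# points, then alternately re-select inliers and re-estimate the motion.
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation/translation mapping points P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def refine(P, Q, iters=5):
    inliers = np.ones(len(P), dtype=bool)
    for _ in range(iters):
        R, t = rigid_transform(P[inliers], Q[inliers])   # odometry refinement
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < 3.0 * np.median(err) + 1e-6      # inlier-set refinement
    return R, t, inliers

# Synthetic matched features: a known motion plus noise and a few mismatches.
rng = np.random.default_rng(2)
P = rng.uniform(-1, 1, size=(100, 3))
R_true = np.array([[0.995, -0.0998, 0], [0.0998, 0.995, 0], [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, 0.02, 0.0]) + rng.normal(0, 0.005, P.shape)
Q[:5] += 1.0                                             # simulated outliers
R, t, inliers = refine(P, Q)
print("estimated translation:", t.round(3), "inliers:", inliers.sum())
```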

Control and Calibration for Robot Navigation based on Light's Panel Landmark (천장 전등패널 기반 로봇의 주행오차 보정과 제어)

  • Jin, Tae-Seok
    • Journal of the Korean Society of Industry Convergence / v.20 no.2 / pp.89-95 / 2017
  • In this paper, we suggest a method for a mobile robot to move safely from an initial position to a goal position in a large environment such as a building. Using only an odometry encoder to estimate the position of a mobile robot is problematic in such environments: because of wheel slip, the encoder measurement error accumulates over time. Therefore the error must be compensated using another sensor. A vision sensor is used to correct the position of the mobile robot by observing the light panels regularly attached to the building's ceiling. For global path planning, the building's map is modeled as a graph, so Floyd's shortest-path algorithm can be applied to find a path. The effectiveness of the method is verified through simulations and experiments.
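
A minimal sketch of the graph-based global path planning mentioned above, using Floyd's (Floyd-Warshall) all-pairs shortest-path algorithm on an invented building-corridor graph.

```python
# Floyd-Warshall with a next-hop table for path reconstruction.
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n adjacency matrix."""
    n = len(dist)
    nxt = [[j if dist[i][j] < INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def reconstruct(nxt, start, goal):
    """Follow the next-hop table from start to goal."""
    if nxt[start][goal] is None:
        return []
    path = [start]
    while start != goal:
        start = nxt[start][goal]
        path.append(start)
    return path

# Adjacency matrix of corridor waypoints (distances in meters; INF = no edge).
graph = [[0, 5, INF, 10],
         [5, 0, 3, INF],
         [INF, 3, 0, 1],
         [10, INF, 1, 0]]
dist, nxt = floyd_warshall([row[:] for row in graph])
print(reconstruct(nxt, 0, 3))   # shortest route 0 -> 1 -> 2 -> 3 (length 9)
```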

RGB-VO: Visual Odometry using mono RGB (단일 RGB 영상을 이용한 비주얼 오도메트리)

  • Lee, Joosung; Hwang, Sangwon; Kim, Woo Jin; Lee, Sangyoun
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.454-456 / 2018
  • As autonomous driving and robotic systems advance, research on the related vision algorithms is being actively conducted. The proposed network is a system that predicts visual odometry from monocular RGB images. The deep learning network is trained and evaluated on the KITTI dataset; it takes two consecutive frames as input and outputs the camera rotation and translation between the two frames. This makes it possible, for example, to recover a vehicle's driving trajectory, and the method can be applied in various robotic systems.
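
A schematic PyTorch sketch of the kind of network described above: two consecutive RGB frames stacked channel-wise into a small CNN that regresses a 6-DoF relative pose. The layer sizes and input resolution are arbitrary assumptions, not the authors' architecture.

```python
# Schematic two-frame pose regression network (not the paper's architecture).
import torch
import torch.nn as nn

class MonoVONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 6)      # [rx, ry, rz, tx, ty, tz]

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)    # stack the two frames
        return self.head(self.encoder(x).flatten(1))

# One image pair (batch of 1); the tensors here are random placeholders.
model = MonoVONet()
pose = model(torch.randn(1, 3, 128, 416), torch.randn(1, 3, 128, 416))
print(pose.shape)                                    # torch.Size([1, 6])
```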

Image Mosaicking Considering Pairwise Registrability in Structure Inspection with Underwater Robots (수중 로봇을 이용한 구조물 검사에서의 상호 정합도를 고려한 영상 모자이킹)

  • Hong, Seonghun
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.238-244 / 2021
  • Image mosaicking is a common and useful technique to visualize a global map by stitching a large number of local images obtained from visual surveys in underwater environments. In particular, visual inspection of underwater structures using underwater robots can be a potential application for image mosaicking. Feature-based pairwise image registration is a commonly employed process in most image mosaicking algorithms to estimate visual odometry information between compared images. However, visual features are not always uniformly distributed on the surface of underwater structures, and thus the performance of image registration can vary significantly, which results in unnecessary computations in image matching for poor-conditioned image pairs. This study proposes a pairwise registrability measure to select informative image pairs and to improve the overall computational efficiency of underwater image mosaicking algorithms. The validity and effectiveness of the image mosaicking algorithm considering the pairwise registrability are demonstrated using an experimental dataset obtained with a full-scale ship in a real sea environment.
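
One way to realize the pair-selection idea is sketched below; the registrability score used here (a feature count weighted by a crude odometry-based overlap estimate) is a stand-in for illustration, not the paper's measure.

```python
# Illustrative sketch: gate image pairs by a registrability score before
# attempting feature-based registration.
import numpy as np

def predicted_overlap(pose_i, pose_j, footprint=2.0):
    """Crude overlap estimate from odometry poses (x, y): 1 when coincident."""
    d = np.linalg.norm(np.asarray(pose_i) - np.asarray(pose_j))
    return max(0.0, 1.0 - d / footprint)

def registrability(features_i, features_j, pose_i, pose_j):
    return min(features_i, features_j) * predicted_overlap(pose_i, pose_j)

def select_pairs(images, threshold=50.0):
    """images: list of dicts with 'features' (count) and 'pose' (x, y)."""
    pairs = []
    for i in range(len(images)):
        for j in range(i + 1, len(images)):
            score = registrability(images[i]["features"], images[j]["features"],
                                   images[i]["pose"], images[j]["pose"])
            if score >= threshold:                 # skip poor-conditioned pairs
                pairs.append((i, j, score))
    return sorted(pairs, key=lambda p: -p[2])

survey = [{"features": 420, "pose": (0.0, 0.0)},
          {"features": 35,  "pose": (0.5, 0.0)},   # feature-poor hull section
          {"features": 380, "pose": (0.8, 0.1)}]
print(select_pairs(survey))
```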

Enhancing Single Thermal Image Depth Estimation via Multi-Channel Remapping for Thermal Images (열화상 이미지 다중 채널 재매핑을 통한 단일 열화상 이미지 깊이 추정 향상)

  • Kim, Jeongyun; Jeon, Myung-Hwan; Kim, Ayoung
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.314-321 / 2022
  • Depth information used in SLAM and visual odometry is essential in robotics. It is often obtained from sensors or learned by networks. While learning-based methods have gained popularity, they are mostly limited to RGB images, which fail in visually degraded environments. Thermal cameras are in the spotlight as a way to solve these problems. Unlike RGB images, thermal images reliably perceive the environment regardless of illumination variance, but they lack contrast and texture. This low contrast prevents an algorithm from effectively learning the underlying scene details. To tackle these challenges, we propose multi-channel remapping to enhance contrast. Our method allows a learning-based depth prediction model to produce accurate depth predictions even in low-light conditions. We validate the feasibility of the approach and show that our multi-channel remapping method outperforms existing methods both visually and quantitatively on our dataset.
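
The multi-channel remapping idea can be sketched as follows: a single 16-bit thermal frame is expanded into several channels, each produced by a different contrast enhancement, before being fed to a depth network. The specific channels chosen here (min-max scaling, histogram equalization, local-mean normalization) are illustrative assumptions, not the paper's exact remapping.

```python
# Illustrative sketch: remap one thermal frame into a multi-channel input.
import numpy as np

def minmax(img):
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def hist_equalize(img, bins=4096):
    """Map intensities through the empirical CDF to spread the contrast."""
    hist, edges = np.histogram(img, bins=bins)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(img, edges[:-1], cdf)

def local_normalize(img, k=15):
    """Subtract a box-filtered local mean to emphasize fine texture."""
    pad = np.pad(img.astype(np.float64), k // 2, mode="reflect")
    csum = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    csum = np.pad(csum, ((1, 0), (1, 0)))
    h, w = img.shape
    mean = (csum[k:k + h, k:k + w] - csum[:h, k:k + w]
            - csum[k:k + h, :w] + csum[:h, :w]) / (k * k)
    return minmax(img - mean)

def remap_multichannel(thermal16):
    """Stack several remappings into an H x W x 3 float image in [0, 1]."""
    t = thermal16.astype(np.float64)
    return np.stack([minmax(t), hist_equalize(t), local_normalize(t)], axis=-1)

# Placeholder 16-bit thermal frame with a narrow intensity range.
raw = np.random.randint(21000, 24000, size=(120, 160), dtype=np.uint16)
print(remap_multichannel(raw).shape)    # (120, 160, 3)
```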