• Title/Summary/Keyword: Odometry

Analysis of Applicability of Visual SLAM for Indoor Positioning in the Building Construction Site (Visual SLAM의 건설현장 실내 측위 활용성 분석)

  • Kim, Taejin;Park, Jiwon;Lee, Byoungmin;Bae, Kangmin;Yoon, Sebeen;Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.11a / pp.47-48 / 2022
  • Positioning technology that measures the position of a person or object is a key technology for handling locations in the real coordinate system and for converging the real and virtual worlds, as in digital twins, augmented reality, virtual reality, and autonomous driving. When estimating the location of a person or object at an indoor construction site, there are constraints: location information cannot be received from the outside, the communication infrastructure is insufficient, and it is difficult to install additional devices. Therefore, this study tested the direct sparse odometry algorithm, one of the visual Simultaneous Localization and Mapping (vSLAM) methods that estimate the current location and a surrounding map using only image information, at an indoor construction site and analyzed its applicability as an indoor positioning technology. The results show that the surrounding map and the current location can be properly estimated even at an indoor construction site, which has relatively few feature points. The results of this study can serve as reference data for future research on indoor positioning technology for construction sites.

Learning-based Inertial-wheel Odometry for a Mobile Robot (모바일 로봇을 위한 학습 기반 관성-바퀴 오도메트리)

  • Myeongsoo Kim;Keunwoo Jang;Jaeheung Park
    • The Journal of Korea Robotics Society / v.18 no.4 / pp.427-435 / 2023
  • This paper proposes a method of estimating the pose of a mobile robot using a learning model. When estimating the pose of a mobile robot, wheel encoder and inertial measurement unit (IMU) data are generally utilized. However, depending on the condition of the ground surface, slip occurs due to interaction between the wheels and the floor, and in this case it is hard to predict the pose accurately using only the encoder and IMU. Thus, in order to reduce pose error even in such conditions, this paper introduces a pose estimation method based on a learning model that uses wheel encoder and IMU data. As the learning model, a long short-term memory (LSTM) network is adopted. The inputs to the LSTM are velocity and acceleration data from the wheel encoder and IMU; the outputs are corrected linear and angular velocities. The estimated pose is calculated by numerically integrating the output velocities. The dataset used as ground truth for the learning model is collected under various ground conditions. Experimental results demonstrate that the proposed learning model achieves higher pose estimation accuracy than an extended Kalman filter (EKF) and other learning models using the same data under various ground conditions.
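The last step described above, turning the corrected linear and angular velocities into a pose, is plain dead reckoning. A minimal sketch of that integration, assuming planar motion and Euler integration (the function name is illustrative, not from the paper):

```python
import numpy as np

def integrate_pose(v, w, dt):
    """Dead-reckon a planar pose (x, y, theta) from corrected linear
    velocities v[k] and angular velocities w[k] sampled at interval dt."""
    x, y, th = 0.0, 0.0, 0.0
    poses = [(x, y, th)]
    for vk, wk in zip(v, w):
        x += vk * np.cos(th) * dt   # advance along current heading
        y += vk * np.sin(th) * dt
        th += wk * dt               # rotate by the angular velocity
        poses.append((x, y, th))
    return np.array(poses)

# Drive straight at 1 m/s for 1 s (dt = 0.1 s): final x ≈ 1.0 m.
trajectory = integrate_pose([1.0] * 10, [0.0] * 10, 0.1)
```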

Kalman Filter-based Sensor Fusion for Posture Stabilization of a Mobile Robot (모바일 로봇 자세 안정화를 위한 칼만 필터 기반 센서 퓨전)

  • Jang, Taeho;Kim, Youngshik;Kyoung, Minyoung;Yi, Hyunbean;Hwan, Yoondong
    • Transactions of the Korean Society of Mechanical Engineers A / v.40 no.8 / pp.703-710 / 2016
  • In robotics research, accurate estimation of the current robot position is important to achieve motion control of a robot. In this research, we focus on a sensor fusion method that provides improved position estimation for a wheeled mobile robot, considering two different sensor measurements. In this case, we fuse camera-based vision and encoder-based odometry data using Kalman filter techniques to improve the position estimation of the robot. An external camera-based vision system provides global position coordinates (x, y) for the mobile robot in an indoor environment, while internal encoder-based odometry provides the linear and angular velocities of the robot. We then use the position data estimated by the Kalman filter as input to the motion controller, which significantly improves its performance. Finally, we experimentally verify the performance of the proposed sensor-fused position estimation and motion control using an actual mobile robot system. In our experiments, we also compare the Kalman filter-based sensor-fused estimation with two single-sensor estimations (vision-based and odometry-based).
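The fusion scheme in this abstract can be sketched as a linear Kalman filter that dead-reckons with the encoder velocities in the predict step and corrects with the camera position fix in the update step. A minimal illustration with assumed noise variances, not the authors' implementation:

```python
import numpy as np

def kf_fuse(z_cam, u_odom, dt, q=0.01, r=0.05):
    """Fuse camera position fixes z_cam[k] = (x, y) with encoder-derived
    velocities u_odom[k] = (vx, vy). q and r are illustrative process and
    measurement noise variances."""
    x = np.zeros(2)        # state: position (x, y)
    P = np.eye(2)          # state covariance
    Q = q * np.eye(2)
    R = r * np.eye(2)
    for z, u in zip(z_cam, u_odom):
        # predict: dead-reckon one step with the encoder velocity
        x = x + np.asarray(u, dtype=float) * dt
        P = P + Q
        # update: correct the prediction with the camera position fix
        K = P @ np.linalg.inv(P + R)
        x = x + K @ (np.asarray(z, dtype=float) - x)
        P = (np.eye(2) - K) @ P
    return x, P
```

With consistent measurements the filtered position tracks the true trajectory while the covariance P settles to a steady value reflecting both noise sources.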

Position estimation and path-tracking for wheeled mobile robots with nonholonomic constraints (Nonholonomic 제약을 가지는 구륜 이동 로보트의 위치추정과 경로추적)

  • 정대경;문종우;박종국
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1996.10b / pp.932-935 / 1996
  • This paper proposes position estimation and path tracking for a wheeled mobile robot (WMR). Odometry and two distance-measuring sensors are used to measure the distance between the guide wall and the robot body and to locate the robot's own position, and an extended Kalman filter is introduced to fuse the sensors and reduce noise. A state feedback controller using the estimated position performs path tracking in the guidance control system. Computer simulation shows that the proposed algorithm coincides well with the theoretical approach.

Vision-based Autonomous Semantic Map Building and Robot Localization (영상 기반 자율적인 Semantic Map 제작과 로봇 위치 지정)

  • Lim, Joung-Hoon;Jeong, Seung-Do;Suh, Il-Hong;Choi, Byung-Uk
    • Proceedings of the KIEE Conference / 2005.10b / pp.86-88 / 2005
  • An autonomous semantic-map building method is proposed, with the robot localized in the semantic map. Our semantic map is organized around objects represented as SIFT features, and vision-based relative localization is employed as the process model of an extended Kalman filter. Thus, we expect robust SLAM performance even under poor conditions in which localization cannot be achieved by classical odometry-based SLAM.

Localization Performance Enhancement on GPS Interfering Spot (GPS 음영지역 극복을 위한 이동로봇의 실험적 위치추정)

  • Kim, Ji-Yong;Lee, Ji-Hong;Byun, Jae-Min
    • Proceedings of the IEEK Conference / 2009.05a / pp.115-117 / 2009
  • This paper presents localization performance enhancement for a mobile robot on GPS-interfered spots. The localization system applies an extended Kalman filter that fuses differential GPS, odometry, and inertial sensors. In this paper, a different noise covariance is applied to the extended Kalman filter according to the GPS signal quality. Experimental results show that the proposed localization system considerably improves the localization performance of mobile robots.
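The key idea here, switching the measurement noise covariance with GPS fix quality, can be sketched as follows. The quality codes and sigma values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gps_noise_cov(quality):
    """Pick a GPS measurement covariance from the fix quality
    (assumed mapping: 4 = RTK fix, 2 = DGPS, 1 = single fix)."""
    sigma = {4: 0.02, 2: 0.5, 1: 2.0}.get(quality)
    if sigma is None:          # GPS shadowed: no usable fix
        return None
    return sigma ** 2 * np.eye(2)

def fuse_fix(x, P, z, quality):
    """One Kalman position update; when the fix is unusable the update
    is skipped and the filter coasts on odometry/inertial prediction."""
    R = gps_noise_cov(quality)
    if R is None:
        return x, P
    K = P @ np.linalg.inv(P + R)
    x = x + K @ (z - x)
    P = (np.eye(2) - K) @ P
    return x, P
```

An RTK fix (small R) pulls the estimate strongly toward the measurement, while a poor fix (large R) barely moves it, which is exactly the quality-dependent weighting the abstract describes.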

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min;Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.4 / pp.349-355 / 2015
  • Underwater robots generally perform tasks better than humans under certain underwater constraints such as high pressure, limited light, etc. To properly carry out diagnosis in an underwater environment using a remotely operated vehicle (ROV), it is important for the vehicle to autonomously maintain its own position and orientation in order to avoid additional control effort. In this paper, we propose an efficient method to assist operation under the various disturbances acting on a remotely operated vehicle for the diagnosis of underwater structures. The conventional AHRS-based bearing estimation system does not work well because of incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines the camera and the AHRS to estimate the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. Thus, we suggest an efficient method for fusing the vision sensor and the AHRS using a criterion based on the amount of blur in the image. To evaluate the amount of blur, we adopt two methods: one quantifies the high-frequency components using power spectral density analysis of the 2D discrete-Fourier-transformed image, and the other identifies the blur parameter based on cepstrum analysis. We evaluate the robustness of the visual odometry and blur estimation methods with respect to changes in light and distance, and verify through experiments that the blur estimation method based on cepstrum analysis shows better performance.
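The first blur measure mentioned, the share of high-frequency power in the power spectral density of the 2D DFT, can be sketched as below; the radial cutoff is an assumed parameter, and sharp images score higher than blurred ones:

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Fraction of spectral power above a normalized radial cutoff,
    computed from the power spectral density of the 2D DFT. Blur
    attenuates high frequencies, so blurred images score lower."""
    F = np.fft.fftshift(np.fft.fft2(img))    # center the zero frequency
    psd = np.abs(F) ** 2                      # power spectral density
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return psd[r > cutoff].sum() / psd.sum()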

Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill;Lee Eung-Hyuk;Kim In-Young;Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.3 s.309 / pp.8-15 / 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision and optical flow are computationally expensive; therefore, this paper presents a vision-based obstacle detection method using only two view images. The method uses a single passive camera and odometry and runs in real time. The proposed method detects obstacles through 3D reconstruction from two views. Processing begins with feature extraction for each input image using Lowe's SIFT (Scale Invariant Feature Transform) and establishes correspondences between features across the input images. Using the extrinsic camera rotation and translation matrix provided by odometry, we calculate the 3D positions of these corresponding points by triangulation. The triangulation results form a partial 3D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and is able to detect obstacles in 75 ms.
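The triangulation step, recovering a 3D point from two views whose relative rotation and translation come from odometry, can be sketched with standard linear (DLT) triangulation. The projection-matrix setup below is an assumption for illustration, not the paper's code:

```python
import numpy as np

def triangulate(p1, p2, P1, P2):
    """Linear (DLT) triangulation of one correspondence. p1 and p2 are
    normalized image points; P1 and P2 are the 3x4 projection matrices
    built from the odometry-provided rotation and translation."""
    A = np.vstack([
        p1[0] * P1[2] - P1[0],   # two constraints from view 1
        p1[1] * P1[2] - P1[1],
        p2[0] * P2[2] - P2[0],   # two constraints from view 2
        p2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Camera 1 at the origin; camera 2 translated 1 m along x (from odometry).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate((0.0, 0.0), (-0.2, 0.0), P1, P2)  # recovers (0, 0, 5)
```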

Method to Improve Localization and Mapping Accuracy on the Urban Road Using GPS, Monocular Camera and HD Map (GPS와 단안카메라, HD Map을 이용한 도심 도로상에서의 위치측정 및 맵핑 정확도 향상 방안)

  • Kim, Young-Hun;Kim, Jae-Myeong;Kim, Gi-Chang;Choi, Yun-Soo
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1095-1109 / 2021
  • The technology used to recognize the location and surroundings of an autonomous vehicle is called SLAM. SLAM stands for Simultaneous Localization and Mapping and has recently been actively utilized in research on autonomous vehicles, starting from robotics research. Expensive GPS, INS, LiDAR, RADAR, and wheel odometry allow precise positioning and mapping at the centimeter level. However, if similar accuracy can be secured using cheaper cameras and GPS data, it will contribute to advancing the era of autonomous driving. In this paper, we present a method for fusing a monocular camera with RTK-enabled GPS data to perform localization and mapping with an RMSE of 33.7 cm on urban roads.

LiDAR Static Obstacle Map based Vehicle Dynamic State Estimation Algorithm for Urban Autonomous Driving (도심자율주행을 위한 라이다 정지 장애물 지도 기반 차량 동적 상태 추정 알고리즘)

  • Kim, Jongho;Lee, Hojoon;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.14-19 / 2021
  • This paper presents a LiDAR static obstacle map based vehicle dynamic state estimation algorithm for urban autonomous driving. In autonomous driving, state estimation of the host vehicle is important for accurate prediction of ego motion and of perceived objects. Therefore, in situations where noise exists in the control input of the vehicle, state estimation using sensors such as LiDAR and vision is required. However, it is difficult to obtain a measurement of the vehicle state because the perception sensors of an autonomous vehicle also observe dynamic objects. The proposed algorithm consists of two parts. First, a Bayesian rule-based static obstacle map is constructed from the continuous LiDAR point cloud input. Second, the vehicle odometry over each time interval is calculated by matching against the static obstacle map using the Normal Distributions Transform (NDT) method, and the velocity and yaw rate of the vehicle are estimated by an Extended Kalman Filter (EKF) that uses this odometry as the measurement. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and is verified with data obtained from actual driving on urban roads. The test results show a more robust and accurate dynamic state estimation when there is a bias in the chassis IMU sensor.
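A Bayesian rule-based obstacle map of the kind described in the first part is commonly maintained as a log-odds occupancy grid. A minimal sketch with illustrative increment and clamp values (log-odds is an assumed representation, not confirmed by the abstract):

```python
import numpy as np

def update_log_odds(grid, hits, misses, l_hit=0.85, l_miss=-0.4,
                    l_min=-5.0, l_max=5.0):
    """One Bayesian log-odds update of an occupancy grid from a scan:
    cells with a LiDAR return gain l_hit, traversed free cells gain
    l_miss; values are clamped to avoid saturation."""
    for idx in hits:
        grid[idx] = min(grid[idx] + l_hit, l_max)
    for idx in misses:
        grid[idx] = max(grid[idx] + l_miss, l_min)
    return grid

def occupancy(grid):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-grid))
```

After a few consistent scans, repeatedly hit cells approach probability 1 and repeatedly traversed cells approach 0, yielding the static map that NDT matching can then register odometry against.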