• Title/Abstract/Keywords: Simultaneous Localization And Mapping


이동 로봇 주행을 위한 이미지 매칭에 기반한 레이저 영상 SLAM (Laser Image SLAM based on Image Matching for Navigation of a Mobile Robot)

  • 최윤원;김경동;최정원;이석규
    • 한국정밀공학회지 / Vol. 30, No. 2 / pp. 177-184 / 2013
  • This paper proposes an enhanced Simultaneous Localization and Mapping (SLAM) algorithm based on laser image matching and an Extended Kalman Filter (EKF). In general, laser range information is among the most useful data for localizing mobile robots and is more accurate than encoder data. For localization, the moving distance of a robot is often obtained from encoders, while the distances from the robot to landmarks are estimated by various sensors. Although encoders have high resolution, it is difficult to estimate the robot's current position precisely because of encoder errors caused by wheel slip and backlash. In this paper, the position and heading of the robot are estimated by comparing laser images obtained from a high-accuracy laser scanner. Speeded Up Robust Features (SURF) is used to extract feature points from the previous and current laser images, and the two sets of feature points are matched. The moving distance and heading angle are then computed from the matched point pairs. Experimental results show the effectiveness of the proposed laser SLAM algorithm.
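The final step the abstract describes, recovering the moving distance and heading angle from matched feature points, amounts to a 2D rigid alignment. A minimal sketch (a closed-form least-squares solution on synthetic matches; the point sets and function name are invented, not the paper's implementation):

```python
import numpy as np

def rigid_transform_2d(prev_pts, curr_pts):
    """Estimate the rotation angle (rad) and translation that map matched
    feature points of the previous scan onto the current scan, using the
    closed-form least-squares (2D Kabsch / Procrustes) solution."""
    P = np.asarray(prev_pts, dtype=float)
    Q = np.asarray(curr_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t

# Synthetic check: rotate points by 10 degrees and shift by (0.5, -0.2).
ang = np.deg2rad(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
prev = np.random.default_rng(0).uniform(-5, 5, size=(30, 2))
curr = prev @ R_true.T + np.array([0.5, -0.2])
theta, t = rigid_transform_2d(prev, curr)
```

The rotation angle gives the heading change and the translation norm the moving distance between the two scans.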

대규모 개발지역의 공간정보 구축을 위한 드론 라이다의 특징 비교 (Comparison of Characteristics of Drone LiDAR for Construction of Geospatial Information in Large-scale Development Project Area)

  • 박준규;이근왕
    • 한국산학기술학회논문지 / Vol. 21, No. 1 / pp. 768-773 / 2020
  • The use of geospatial information is essential for efficient project management in large-scale land development, which aims at the rational use and management of national land resources. Recently, drone LiDAR (Light Detection And Ranging) has attracted attention as an effective means of constructing geospatial information for large-scale development areas such as housing sites and open-pit mines. Drone LiDAR systems can be broadly divided into those based on SLAM (Simultaneous Localization And Mapping) and those based on GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit), but analytical studies on the application of drone LiDAR and the characteristics of each approach are still lacking. In this study, data acquisition, processing, and analysis were performed with both SLAM-based and GNSS/IMU-based drone LiDAR, and the characteristics and usability of each approach were evaluated. The results show that the vertical accuracy of drone LiDAR ranged from -0.052 m to 0.044 m, satisfying the accuracy tolerance of geospatial information for mapping. In addition, the characteristics of each method were identified by comparing the data acquisition and processing procedures. Geospatial information constructed with drone LiDAR supports various applications such as measuring distance, area, and slope, and since it can be used to assess the safety of large-scale development areas, it is expected to serve as an effective means of constructing geospatial information at land development sites.
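The vertical-accuracy check reported above can be reproduced in miniature: compare LiDAR heights against surveyed checkpoints and test the signed errors against a mapping tolerance. All values below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical checkpoint heights (m): surveyed reference vs. drone LiDAR.
ref_z   = np.array([12.310, 15.874, 9.502, 20.118, 17.660])
lidar_z = np.array([12.290, 15.900, 9.480, 20.160, 17.650])

errors = lidar_z - ref_z                   # signed height errors
rmse = float(np.sqrt(np.mean(errors**2)))  # root-mean-square error

# Example tolerance for large-scale mapping (illustrative value).
tolerance = 0.30
within_spec = bool((np.abs(errors) <= tolerance).all())
```

Reporting both the signed error range (as the study does) and the RMSE makes systematic bias and random scatter separately visible.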

실내 정보 가시화에 의한 u-GIS 시스템을 위한 Markerless 증강현실 방법 (A Markerless Augmented Reality Approach for Indoor Information Visualization System)

  • 김희관;조현달
    • 한국공간정보시스템학회 논문지 / Vol. 11, No. 1 / pp. 195-199 / 2009
  • Augmented reality overlays computer-generated virtual data on the real environment in real time, and it has great potential for tasks such as visualizing geographic information. However, mobile augmented reality systems studied so far have located the user either with GPS (Global Positioning System) or with markers attached to the site. Recent research has moved toward markerless methods, but these still have many limitations. Indoors in particular, GPS is unavailable, so indoor localization requires new techniques that can solve more complex problems. Radio-frequency (RF) based indoor positioning has been actively studied recently, but it also requires installing many sensors and readers. In this study, we present a localization method based on a SLAM (Simultaneous Localization and Mapping) algorithm using a single camera, and we developed an information visualization program that uses the estimated position for augmented reality. In future work, it will be applied to a mobile u-GIS (Ubiquitous Geospatial Information System) that supports seamless indoor-outdoor operation.


사이드 스캔 소나 기반 Pose-graph SLAM (Side Scan Sonar based Pose-graph SLAM)

  • 권대현;김주완;김문환;박호규;김태영;김아영
    • 로봇학회논문지 / Vol. 12, No. 4 / pp. 385-394 / 2017
  • Side scan sonar (SSS) provides valuable information for robot navigation; however, the use of side scan sonar images in navigation has not been fully studied. In this paper, we use range data and side scan sonar images from the UnderWater Simulator (UWSim) and propose measurement models within a feature-based simultaneous localization and mapping (SLAM) framework. The range data are obtained from an echosounder, and the side scan sonar images from the side scan sonar module of UWSim. We use A-KAZE features for SSS image matching and adjust the relative robot pose by SSS bundle adjustment (BA) with the Ceres solver. BA provides the loop-closure constraints of the pose-graph SLAM, and the graph is optimized with incremental smoothing and mapping (iSAM). The optimized trajectory is compared against dead reckoning (DR).
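The pose-graph idea, odometry edges plus a loop-closure edge optimized jointly, can be sketched in one dimension with plain linear least squares (the paper uses iSAM and Ceres on real SSS constraints; this toy only shows the shape of the problem):

```python
import numpy as np

# Minimal 1D pose graph: 4 poses, odometry edges between consecutive
# poses, and one loop-closure edge from pose 0 to pose 3.
# Each edge (i, j, z) says: x_j - x_i should equal measurement z.
edges = [
    (0, 1, 1.05),   # odometry (slightly drifting)
    (1, 2, 1.02),
    (2, 3, 0.98),
    (0, 3, 3.00),   # loop closure: direct measurement pose 0 -> pose 3
]

n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for row, (i, j, z) in enumerate(edges):
    A[row, i], A[row, j] = -1.0, 1.0
    b[row] = z
A[-1, 0] = 1.0          # gauge constraint: anchor x_0 = 0
b[-1] = 0.0

# Least-squares solution spreads the 0.05 m of accumulated odometry
# drift across the trajectory instead of dumping it at the end.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Dead reckoning alone would place pose 3 at 3.05; the loop closure pulls the optimized estimate toward the directly measured 3.00.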

특징점 기반 단안 영상 SLAM의 최적화 기법 및 필터링 기법 성능 분석 (Performance Analysis of Optimization Method and Filtering Method for Feature-based Monocular Visual SLAM)

  • 전진석;김효중;심덕선
    • 전기학회논문지 / Vol. 68, No. 1 / pp. 182-188 / 2019
  • Autonomous mobile robots need SLAM (simultaneous localization and mapping) to determine their location and simultaneously build a map of the surroundings. Visual SLAM requires an algorithm that detects and extracts feature points from camera images and estimates the camera pose together with the 3D positions of the features. In this paper, we propose the MPROSAC algorithm, which combines MSAC and PROSAC, and compare the performance of an optimization method and a filtering method for feature-based monocular visual SLAM. Sparse Bundle Adjustment (SBA) is used as the optimization method and the Extended Kalman Filter as the filtering method.
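MPROSAC itself is not spelled out in the abstract; the sketch below only illustrates the two ingredients it combines, MSAC's truncated cost and PROSAC's quality-ordered sampling, on a toy line-fitting problem with invented data:

```python
import numpy as np

def msac_cost(residuals, thresh):
    """MSAC score: inlier residuals contribute their squared error,
    outliers contribute a constant penalty (thresh**2)."""
    r2 = residuals**2
    return np.where(r2 < thresh**2, r2, thresh**2).sum()

def fit_line_msac_prosac(pts, quality, thresh=0.1, iters=300, seed=1):
    """Toy MSAC + PROSAC-style sampler for y = m*x + c.  'quality'
    ranks correspondences; early iterations sample only from the
    top-ranked subset, which grows as iterations proceed (PROSAC)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(-quality)              # best matches first
    best_cost, best_model = np.inf, None
    for it in range(iters):
        # Progressively grow the sampling pool from top-ranked points.
        pool = order[:max(2, 2 + it * len(pts) // iters)]
        i, j = rng.choice(pool, size=2, replace=False)
        if pts[i, 0] == pts[j, 0]:
            continue
        m = (pts[j, 1] - pts[i, 1]) / (pts[j, 0] - pts[i, 0])
        c = pts[i, 1] - m * pts[i, 0]
        res = pts[:, 1] - (m * pts[:, 0] + c)
        cost = msac_cost(res, thresh)
        if cost < best_cost:
            best_cost, best_model = cost, (m, c)
    return best_model

# Toy data: 80 points on y = 2x + 1 with noise, plus 20 gross outliers.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.02, 100)
y[80:] += rng.uniform(5, 10, 20)             # outliers
quality = -np.abs(rng.normal(0, 1, 100))     # hypothetical match scores
m, c = fit_line_msac_prosac(np.column_stack([x, y]), quality)
```

In real visual SLAM the two-point line model would be replaced by a pose model estimated from minimal sets of feature correspondences.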

Ultrawideband coupled relative positioning algorithm applicable to flight controller for multidrone collaboration

  • Jeonggi Yang;Soojeon Lee
    • ETRI Journal / Vol. 45, No. 5 / pp. 758-767 / 2023
  • In this study, we introduce a loosely coupled relative position estimation method that utilizes a decentralized ultrawideband (UWB) system, a global navigation satellite system, and an inertial navigation system for flight controllers (FCs). Key obstacles to multidrone collaboration include relative position errors and the absence of communication devices. To address this, we provide an extended Kalman filter based algorithm and module that corrects distance errors by fusing UWB data acquired through random communications. Through simulations, we confirm the feasibility of the algorithm and verify how its distance-error correction performance varies with the amount of communication. Real-world tests confirm the algorithm's effectiveness on FCs and the potential for multidrone collaboration in real environments. This method can be used to correct relative multidrone positions during collaborative transportation and simultaneous localization and mapping applications.
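The EKF range-fusion idea can be sketched with a toy 2D version: iterated measurement updates from UWB ranges to fixed anchors. The anchor layout, noise level, and static-state simplification are all assumptions, not the paper's setup:

```python
import numpy as np

# Hypothetical setup: estimate a drone's 2D position from UWB ranges
# to three fixed anchors via iterated EKF measurement updates.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([4.0, 3.0])

x = np.array([6.0, 6.0])        # initial state estimate
P = np.eye(2) * 10.0            # state covariance
R = 0.05**2                     # UWB range-measurement variance

rng = np.random.default_rng(2)
for _ in range(20):             # repeated rounds of range updates
    for a in anchors:
        z = np.linalg.norm(true_pos - a) + rng.normal(0, 0.05)
        pred = np.linalg.norm(x - a)
        H = ((x - a) / pred).reshape(1, 2)   # Jacobian of range w.r.t. x
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T / S                      # Kalman gain (2x1)
        x = x + (K * (z - pred)).ravel()     # state update
        P = (np.eye(2) - K @ H) @ P          # covariance update
```

A full flight-controller implementation would add a motion-prediction step between updates and fuse GNSS/INS states in the same filter.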

GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM (Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration)

  • 이동화;김형진;명현
    • 제어로봇시스템학회논문지 / Vol. 19, No. 5 / pp. 457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through the 3D-RANSAC (RANdom SAmple Consensus) algorithm using 2D image features and depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud map.
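The first step of such a pipeline, turning a pixel plus its depth into a 3D point that RANSAC can align, is the pinhole back-projection below (the intrinsics are illustrative Kinect-like values, not calibrated ones):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into a 3D point in
    the camera frame using the pinhole camera model."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative Kinect-like intrinsics (focal lengths and principal
# point in pixels); real systems use calibrated values.
fx = fy = 525.0
cx, cy = 319.5, 239.5

p = backproject(419.5, 339.5, 2.0, fx, fy, cx, cy)
```

Matched 2D features back-projected this way give 3D-3D correspondences, from which RANSAC estimates the 6-DOF motion between frames.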

SLAM으로 작성한 지도 품질의 상대적/정량적 비교를 위한 방법 제안 (A New Method for Relative/Quantitative Comparison of Map Built by SLAM)

  • 권태범;장우석
    • 로봇학회논문지 / Vol. 9, No. 4 / pp. 242-249 / 2014
  • A SLAM (simultaneous localization and mapping) method produces a map of an environment for autonomous robot navigation. In this case, we want to know how accurate the map is, or which of several maps obtained by different SLAM methods is more accurate. Several map-comparison methods have been studied, but each has its own drawbacks. In this paper, we propose a new method that compares the accuracy, or error, of maps relatively and quantitatively. The method sets many corresponding points on both the reference map and the SLAM map, and computes translational and rotational values for all corresponding points using a least-squares solution. By analyzing the standard deviations of these translational and rotational values, we can assess the error of the two maps. The method accounts for both local and global errors, whereas existing methods handle only one of them; this is verified by a series of simulations and real-world experiments.
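One way to read the abstract's procedure: collect corresponding points on both maps, form per-correspondence translational values and pairwise rotational values, and summarize their spread. A simplified sketch (the paper's actual least-squares formulation may differ):

```python
import numpy as np

def map_error_stats(ref_pts, slam_pts):
    """Per-correspondence translation vectors and, for each consecutive
    pair of correspondences, the rotation relating the two maps.  The
    standard deviations summarize how consistent the SLAM map is with
    the reference map: a rigid offset yields zero spread."""
    ref = np.asarray(ref_pts, float)
    slam = np.asarray(slam_pts, float)
    trans = slam - ref                         # translational values
    d_ref = np.diff(ref, axis=0)
    d_slam = np.diff(slam, axis=0)
    ang_ref = np.arctan2(d_ref[:, 1], d_ref[:, 0])
    ang_slam = np.arctan2(d_slam[:, 1], d_slam[:, 0])
    rots = np.unwrap(ang_slam - ang_ref)       # rotational values
    return trans.std(axis=0), rots.std()

# A perfectly consistent map shifted by a constant offset has zero
# spread in both translation and rotation.
ref = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
slam = ref + np.array([0.3, -0.1])
t_std, r_std = map_error_stats(ref, slam)
```

Local distortions inflate the spread between nearby correspondences, while global drift inflates it across distant ones, which is why a spread statistic can capture both error types.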

천장 조명의 위치와 방위 정보를 이용한 모노카메라와 오도메트리 정보 기반의 SLAM (Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps)

  • 황서연;송재복
    • 제어로봇시스템학회논문지 / Vol. 17, No. 2 / pp. 164-170 / 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method using both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamp features are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. Both the position and orientation of a lamp feature are used to estimate the robot pose accurately; the orientation is obtained by computing the principal axis of the pixel distribution of the lamp area. Corner and lamp features are used together as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
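The orientation step, computing the principal axis of the pixel distribution of the lamp area, is a small principal component analysis. A sketch on a synthetic elongated blob (the blob generation is invented for illustration):

```python
import numpy as np

def lamp_orientation(pixels):
    """Orientation of a lamp blob: the angle of the principal axis of
    its pixel coordinate distribution, i.e. the eigenvector of the
    covariance matrix with the largest eigenvalue."""
    pts = np.asarray(pixels, float)
    centred = pts - pts.mean(axis=0)
    cov = centred.T @ centred / len(pts)
    vals, vecs = np.linalg.eigh(cov)     # eigh: ascending eigenvalues
    major = vecs[:, -1]                  # principal axis
    # An axis has a 180-degree ambiguity; normalize to [0, pi).
    return np.arctan2(major[1], major[0]) % np.pi

# Synthetic elongated blob: pixels spread along a 30-degree line.
ang = np.deg2rad(30.0)
rng = np.random.default_rng(3)
s = rng.uniform(-20, 20, 500)            # along-axis spread
n = rng.normal(0, 1.0, 500)              # across-axis spread
pix = np.column_stack([s * np.cos(ang) - n * np.sin(ang),
                       s * np.sin(ang) + n * np.cos(ang)])
theta = lamp_orientation(pix)
```

For roughly circular lamps the two eigenvalues are similar and the axis is unreliable, so elongated fixtures such as fluorescent tubes are the natural candidates for this cue.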

Deep Convolutional Auto-encoder를 이용한 환경 변화에 강인한 장소 인식 (Condition-invariant Place Recognition Using Deep Convolutional Auto-encoder)

  • 오정현;이범희
    • 로봇학회논문지 / Vol. 14, No. 1 / pp. 8-13 / 2019
  • Visual place recognition is a widely researched area in robotics, as it is one of the elemental requirements for autonomous navigation and simultaneous localization and mapping for mobile robots. However, place recognition in changing environments is a challenging problem, since the same place looks different depending on the time, weather, and season. This paper presents a feature extraction method using a deep convolutional auto-encoder to recognize places under severe appearance changes. Given database and query image sequences from different environments, the convolutional auto-encoder is trained to predict the images of the desired environment. Training minimizes the loss between the predicted image and the desired image. After training, the encoder part of the network transforms an input image into a low-dimensional latent representation, which can be used as a condition-invariant feature for recognizing places in changing environments. Experiments were conducted to prove the effectiveness of the proposed method, and the results show that it outperforms existing methods.
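Once the encoder yields latent vectors, place recognition reduces to nearest-neighbour matching between query and database descriptors. A sketch with invented 8-D latents (the paper's descriptor dimensionality and similarity measure may differ):

```python
import numpy as np

def match_places(db_feats, query_feats):
    """Nearest-neighbour place matching on encoder latent vectors
    using cosine similarity between L2-normalized descriptors."""
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sim = q @ db.T                 # (n_query, n_db) similarity matrix
    return sim.argmax(axis=1)      # best database index per query

# Hypothetical 8-D latent vectors: each query is a noisy copy of the
# database descriptor for the same place, in shuffled order.
rng = np.random.default_rng(4)
db = rng.normal(size=(10, 8))
order = rng.permutation(10)
queries = db[order] + rng.normal(0, 0.05, size=(10, 8))
matches = match_places(db, queries)
```

The condition-invariance claim amounts to the encoder mapping summer and winter views of the same place to nearby latent vectors, so this simple matcher still finds the right database entry.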