• Title/Abstract/Keywords: Simultaneous Localization And Mapping

Search results: 129

미지환경에서 무인이동체의 자율주행을 위한 확률기반 위치 인식과 추적 방법 (Approaches to Probabilistic Localization and Tracking for Autonomous Mobility Robot in Unknown Environment)

  • 진태석
    • 한국산업융합학회 논문집, Vol. 25, No. 3, pp. 341-347, 2022
  • This paper presents a comparison of two simultaneous localization and mapping (SLAM) algorithms for navigation that have been proposed in the literature. The performance of Extended Kalman Filter (EKF) SLAM under a Gaussian noise assumption and of FastSLAM, which uses Rao-Blackwellised particle filtering, is compared in terms of the accuracy of the state estimates for robot localization and for mapping of the environment. The algorithms were run with the same type of robot in an indoor environment. The results show that the particle-filter-based FastSLAM gives better accuracy in both localization and mapping. The experimental results are discussed and compared.
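
The abstract above reports an accuracy comparison without giving the metric; a minimal sketch, assuming position RMSE against a ground-truth trajectory is the score of interest, is shown below. The trajectories and noise levels are invented purely for illustration and are not the authors' data.

```python
# A minimal sketch (not the authors' code) of comparing two SLAM estimators
# by position RMSE, assuming each trajectory is an (N, 3) array of [x, y, theta].
import numpy as np

def position_rmse(estimate: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root-mean-square error of the (x, y) position over the whole run."""
    diff = estimate[:, :2] - ground_truth[:, :2]
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Hypothetical trajectories for illustration only.
gt = np.cumsum(np.random.randn(100, 3) * 0.05, axis=0)
ekf_est = gt + np.random.randn(100, 3) * 0.10       # noisier estimate
fastslam_est = gt + np.random.randn(100, 3) * 0.05  # tighter estimate

print("EKF-SLAM RMSE [m]:", position_rmse(ekf_est, gt))
print("FastSLAM RMSE [m]:", position_rmse(fastslam_est, gt))
```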

Mobile Robot Localization and Mapping using a Gaussian Sum Filter

  • Kwok, Ngai Ming;Ha, Quang Phuc;Huang, Shoudong;Dissanayake, Gamini;Fang, Gu
    • International Journal of Control, Automation, and Systems, Vol. 5, No. 3, pp. 251-268, 2007
  • A Gaussian sum filter (GSF) is proposed in this paper for simultaneous localization and mapping (SLAM) in mobile robot navigation. In particular, the SLAM problem is tackled here for the case when only bearing measurements are available. Within the stochastic mapping framework using an extended Kalman filter (EKF), a Gaussian probability density function (pdf) is assumed to describe the range-and-bearing sensor noise. In the case of a bearing-only sensor, a sum of weighted Gaussians is used to represent the non-Gaussian robot-landmark range uncertainty, resulting in a bank of EKFs for estimating the robot and landmark locations. In our approach, the Gaussian parameters are designed by minimizing the representation error. The computational complexity of the GSF is reduced by applying the sequential probability ratio test (SPRT) to remove under-performing EKFs. Extensive experimental results are included to demonstrate the effectiveness and efficiency of the proposed techniques.
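
As a rough illustration of the Gaussian-sum representation described above (not the paper's implementation), the sketch below evaluates a weighted-sum-of-Gaussians range prior and prunes low-weight components; the fixed 0.10 weight threshold is a crude stand-in for the SPRT-based pruning the authors use, and all numerical values are hypothetical.

```python
# A weighted sum of Gaussians approximating a non-Gaussian robot-landmark
# range prior, with under-performing components pruned and weights renormalised.
import numpy as np

# Hypothetical Gaussian-sum prior over the unknown range [m].
means = np.array([2.0, 5.0, 9.0, 14.0])
sigmas = np.array([0.8, 1.2, 1.5, 2.0])
weights = np.array([0.45, 0.35, 0.15, 0.05])

def gsf_pdf(r, means, sigmas, weights):
    """Evaluate the Gaussian-sum density at range r."""
    comps = np.exp(-0.5 * ((r - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return float(np.sum(weights * comps))

# Prune low-weight components (stand-in for SPRT) and renormalise.
keep = weights > 0.10
means, sigmas, weights = means[keep], sigmas[keep], weights[keep]
weights /= weights.sum()

print("density at r = 4.0 m:", gsf_pdf(4.0, means, sigmas, weights))
print("surviving components:", len(weights))
```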

실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구 (A Study on Fisheye Lens based Features on the Ceiling for Self-Localization)

  • 최철희;최병재
    • 한국지능시스템학회논문지, Vol. 21, No. 4, pp. 442-448, 2011
  • Much research on SLAM (Simultaneous Localization and Mapping) has been carried out for mobile robot localization. This paper presents a self-localization method that uses ceiling feature points captured by a single camera fitted with a wide-angle fisheye lens. We describe the correction of the distorted images produced by the fisheye-lens vision system, the extraction of robust SIFT (Scale Invariant Feature Transform) features and their matching between the previous image and the image after motion to derive an optimized region function, and the design of a geometric fitting model. The usefulness of the proposed method is verified in laboratory and corridor environments.
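
A minimal OpenCV sketch of the two image-processing steps mentioned in the abstract, fisheye distortion correction and SIFT feature matching between consecutive ceiling images. The intrinsic matrix K, the distortion coefficients D, and the 0.75 ratio-test threshold are assumptions for illustration, not values from the paper.

```python
# Undistort two fisheye ceiling frames and match SIFT features between them.
import cv2
import numpy as np

# Hypothetical fisheye intrinsics and distortion coefficients (k1..k4).
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])

def undistort_and_match(prev_img, curr_img):
    """Undistort two fisheye frames and return ratio-test-filtered SIFT matches."""
    prev_u = cv2.fisheye.undistortImage(prev_img, K, D, Knew=K)
    curr_u = cv2.fisheye.undistortImage(curr_img, K, D, Knew=K)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_u, None)
    kp2, des2 = sift.detectAndCompute(curr_u, None)

    good = []
    if des1 is not None and des2 is not None:
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        for pair in matcher.knnMatch(des1, des2, k=2):
            # Lowe's ratio test to keep only distinctive matches.
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
    return kp1, kp2, good
```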

다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM (Omni-directional Visual-LiDAR SLAM for Multi-Camera System)

  • 지샨 자비드;김곤우
    • 로봇학회논문지, Vol. 17, No. 3, pp. 353-358, 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multiple-camera setups and large field-of-view cameras are used to address these issues. However, a multiple-camera system increases the computational complexity of the algorithm. Therefore, for multiple-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale problem, a 3D LiDAR is fused with the omnidirectional camera setup. Depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
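
The abstract measures accuracy with the absolute trajectory error (ATE); below is a minimal sketch of that metric under common assumptions: the estimated positions are rigidly aligned to ground truth (Umeyama-style, without scale) and the RMSE of the remaining translational error is reported. The helix trajectory is hypothetical test data, not the authors' dataset.

```python
# ATE RMSE after rigid alignment of time-aligned (N, 3) position arrays.
import numpy as np

def align_rigid(est: np.ndarray, gt: np.ndarray):
    """Least-squares rotation R and translation t mapping est onto gt (no scale)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(G.T @ E)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:       # guard against a reflection solution
        S[2, 2] = -1.0
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return R, t

def absolute_trajectory_error(est: np.ndarray, gt: np.ndarray) -> float:
    """ATE RMSE of the translational error after rigid alignment."""
    R, t = align_rigid(est, gt)
    err = gt - (est @ R.T + t)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Hypothetical helix trajectory perturbed by noise, for illustration only.
s = np.linspace(0.0, 2 * np.pi, 500)
gt = np.stack([10 * np.cos(s), 10 * np.sin(s), 0.1 * s], axis=1)
est = gt + np.random.randn(500, 3) * 0.05
print("ATE RMSE [m]:", absolute_trajectory_error(est, gt))
```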

기준 평면의 설정에 의한 확장 칼만 필터 SLAM 기반 카메라 추적 방법 (EKF SLAM-based Camera Tracking Method by Establishing the Reference Planes)

  • 남보담;홍현기
    • 한국게임학회 논문지, Vol. 12, No. 3, pp. 87-96, 2012
  • This paper proposes a stable camera tracking and re-localization method for an Extended Kalman Filter based SLAM (Simultaneous Localization And Mapping) system operating on an image sequence. Reference planes are established by applying Delaunay triangulation to the 3D feature points obtained by SLAM, and BRISK (Binary Robust Invariant Scalable Keypoints) descriptors are generated for the feature points lying on each plane. When the accumulated error of the extended Kalman filter is judged to be large, the camera information is recovered from the homography of a reference plane. In addition, when feature tracking fails because of abrupt camera motion, the camera position is re-estimated by matching the stored robust descriptors.
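
A minimal OpenCV sketch, under stated assumptions rather than the paper's actual code, of the re-localization idea: stored BRISK descriptors of feature points on a reference plane are matched against the current frame and a plane-to-image homography is recovered with RANSAC. The function name relocalize_from_plane and the 5.0 px RANSAC threshold are hypothetical.

```python
# Match stored reference-plane BRISK descriptors against the current frame
# and estimate the homography of the reference plane by RANSAC.
import cv2
import numpy as np

def relocalize_from_plane(stored_pts, stored_des, curr_img):
    """Estimate the homography mapping stored plane feature points into curr_img.

    stored_pts: (N, 2) pixel coordinates of plane features in the reference view.
    stored_des: (N, 64) BRISK descriptors saved for those features.
    """
    brisk = cv2.BRISK_create()
    kp, des = brisk.detectAndCompute(curr_img, None)
    if des is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(stored_des, des)
    if len(matches) < 4:        # a homography needs at least 4 correspondences
        return None
    src = np.float32([stored_pts[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

Given the camera intrinsics, the recovered H could then be decomposed into rotation and translation candidates, for example with cv2.decomposeHomographyMat.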

이동로봇을 위한 Sonar Salient 형상과 선 형상을 이용한 EKF 기반의 SLAM (EKF-based SLAM Using Sonar Salient Feature and Line Feature for Mobile Robots)

  • 허영진;임종환;이세진
    • 한국정밀공학회지, Vol. 28, No. 10, pp. 1174-1180, 2011
  • Not all of the line or point features that sonar sensors can extract from a cluttered home environment are useful for simultaneous localization and mapping (SLAM), because their ambiguity makes it difficult to associate them with previously registered features. Confused line and point features in cluttered environments lead to poor SLAM performance. We introduce a sonar feature structure suitable for cluttered environments and an extended Kalman filter (EKF)-based SLAM scheme. A reliable line feature is expressed by its end points and incorporated into the EKF SLAM to overcome geometric limits and maintain map consistency. Experimental results demonstrate the validity and robustness of the proposed method.
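
As a small geometric illustration (an assumption, not taken from the paper), the sketch below converts a line feature stored by its two end points into the (rho, theta) normal form often used as the line measurement in EKF-based SLAM.

```python
# Convert an end-point line representation into (rho, theta) normal form.
import numpy as np

def endpoints_to_rho_theta(p1, p2):
    """Normal form (rho, theta) of the 2D line through end points p1 and p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    theta = np.arctan2(d[0], -d[1])                    # angle of the line normal
    rho = p1[0] * np.cos(theta) + p1[1] * np.sin(theta)
    if rho < 0:                                        # keep rho non-negative
        rho, theta = -rho, theta + np.pi
    theta = np.arctan2(np.sin(theta), np.cos(theta))   # wrap theta to (-pi, pi]
    return rho, theta

# The vertical wall segment x = 1 m should map to approximately rho = 1, theta = 0.
print(endpoints_to_rho_theta([1.0, 0.0], [1.0, 2.0]))
```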

동적 도시 환경에서 의미론적 시각적 장소 인식 (Semantic Visual Place Recognition in Dynamic Urban Environment)

  • 사바 아르샤드;김곤우
    • 로봇학회논문지, Vol. 17, No. 3, pp. 334-338, 2022
  • In visual simultaneous localization and mapping (vSLAM), correct recognition of a place benefits relocalization and improves map accuracy. However, its performance is significantly affected by environmental conditions such as variations in illumination, viewpoint, and season, and by the presence of dynamic objects. This research addresses the problem of feature occlusion caused by the interference of dynamic objects, which leads to poor performance of visual place recognition algorithms. To overcome this problem, this research analyzes the role of scene semantics in the correct detection of a place in challenging environments and presents a semantics-aided visual place recognition method. Semantics, being invariant to viewpoint changes and dynamic environments, can improve the overall performance of the place matching method. The proposed method is evaluated on two benchmark datasets with dynamic environments and seasonal changes. Experimental results show the improved performance of the visual place recognition method for vSLAM.
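
A minimal sketch of the semantics-aided idea described above, under the assumption that a per-pixel semantic label map is available: local features falling on dynamic classes are discarded before place matching. The class ids, image size, and feature data are hypothetical.

```python
# Discard local features that fall on dynamic semantic classes so that moving
# objects do not corrupt the place descriptor used for place matching.
import numpy as np

DYNAMIC_CLASS_IDS = {11, 12, 13}  # hypothetical label ids, e.g. person, rider, car

def filter_static_features(keypoints_xy, descriptors, label_map):
    """Keep only features whose pixel lands on a static semantic class."""
    keep = []
    for i, (x, y) in enumerate(keypoints_xy):
        if label_map[int(y), int(x)] not in DYNAMIC_CLASS_IDS:
            keep.append(i)
    keep = np.array(keep, dtype=int)
    return keypoints_xy[keep], descriptors[keep]

# Hypothetical inputs: 5 features on a 480x640 semantic label map.
labels = np.zeros((480, 640), dtype=np.uint8)
labels[200:300, 300:400] = 13                       # a "car" region
kps = np.array([[320, 250], [10, 10], [350, 220], [600, 400], [100, 300]], float)
desc = np.random.rand(5, 128).astype(np.float32)
static_kps, static_desc = filter_static_features(kps, desc, labels)
print(len(static_kps), "static features kept of", len(kps))
```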

무인 구조물 검사를 위한 자율 비행 시스템 (Autonomous Navigation System of an Unmanned Aerial Vehicle for Structural Inspection)

  • 정성욱;최덕규;송승원;명현
    • 로봇학회논문지, Vol. 16, No. 3, pp. 216-222, 2021
  • Recently, various robots have been used for structural inspection and safety diagnosis, and demand for them is rising rapidly. Within robotic structural inspection, much research has recently been conducted on inspecting various facilities and structures with an unmanned aerial vehicle (UAV). However, since GNSS (Global Navigation Satellite System) signals cannot be received near or below structures, UAVs have had to be operated manually. Stable autonomous flight without GNSS signals requires additional technologies. This paper proposes an autonomous flight system for structural inspection consisting of simultaneous localization and mapping (SLAM), path planning, and control. Experiments were conducted on an actual large bridge to verify the feasibility of the system, and in particular the performance of the proposed SLAM algorithm was compared against state-of-the-art algorithms.

구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식 (Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment)

  • 김동훈;이동화;명현;최현택
    • 제어로봇시스템학회논문지, Vol. 19, No. 8, pp. 667-675, 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among the available sensors, a vision sensor is very useful for performing short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
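
The abstract names a weighted correlation coefficient for template matching; below is a minimal sketch of one plausible form, a weighted Pearson correlation between a template and an equally sized image patch, where the weight mask would emphasise reliable template pixels. The formula and the test data are assumptions, not the paper's exact definition.

```python
# Weighted correlation coefficient between a template and an image patch.
import numpy as np

def weighted_correlation(patch, template, weights):
    """Weighted Pearson correlation of two equally shaped image regions."""
    p = patch.astype(float).ravel()
    t = template.astype(float).ravel()
    w = weights.astype(float).ravel()
    w = w / w.sum()                                  # normalise the weight mask
    mp, mt = np.sum(w * p), np.sum(w * t)            # weighted means
    cov = np.sum(w * (p - mp) * (t - mt))
    var_p = np.sum(w * (p - mp) ** 2)
    var_t = np.sum(w * (t - mt) ** 2)
    return cov / np.sqrt(var_p * var_t + 1e-12)

# Hypothetical 8x8 template and noisy patch with a uniform weight mask.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
patch = template + rng.normal(0, 0.05, (8, 8))
print("matching score:", weighted_correlation(patch, template, np.ones((8, 8))))
```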

동시적 위치 추정 및 지도 작성에서 Variational Autoencoder 를 이용한 루프 폐쇄 검출 (Loop Closure Detection Using Variational Autoencoder in Simultaneous Localization and Mapping)

  • 신동원;호요성
    • 한국방송∙미디어공학회 학술대회논문집, 2017년도 하계학술대회, pp. 250-253, 2017
  • This paper examines a method of performing loop closure detection in simultaneous localization and mapping using a variational autoencoder, a kind of deep learning model. An autoencoder is an unsupervised learning model in which the network is trained so that the output image obtained by passing the input image through the network is identical to the input. Because the input must be reproduced even though it passes through a bottleneck in the middle of the autoencoder, the model is widely used for dimensionality reduction and data abstraction. The more advanced variational autoencoder addresses a shortcoming of the conventional autoencoder, namely that the distribution of the input variables and the distribution of the latent variables are unrelated, by defining a loss function based on the Kullback-Leibler divergence. In the experiments, the City-Centre and New College datasets, which are widely used for loop closure detection, were used for evaluation, and the loop closure detection results are reported in terms of precision and recall.
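
A minimal PyTorch sketch (not the authors' network) of the loss described in the abstract: an image reconstruction term plus the Kullback-Leibler divergence between the approximate posterior N(mu, sigma^2) and a standard normal prior. The tensor shapes and the latent dimension of 32 are hypothetical.

```python
# Variational autoencoder loss: reconstruction error + KL(q(z|x) || N(0, I)).
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    """Reconstruction term plus KL divergence, summed over the batch."""
    recon = F.mse_loss(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Hypothetical tensors standing in for a batch of 4 encoded/decoded images.
x = torch.rand(4, 1, 64, 64)
recon_x = torch.rand(4, 1, 64, 64)
mu, logvar = torch.zeros(4, 32), torch.zeros(4, 32)
print(vae_loss(recon_x, x, mu, logvar))
```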
