• Title/Abstract/Keyword: vision slam

Search results: 40

Augmented Feature Point Initialization Method for Vision/Lidar Aided 6-DoF Bearing-Only Inertial SLAM

  • Yun, Sukchang;Lee, Byoungjin;Kim, Yeon-Jo;Lee, Young Jae;Sung, Sangkyung
    • Journal of Electrical Engineering and Technology / Vol. 11, No. 6 / pp.1846-1856 / 2016
  • This study proposes a novel feature point initialization method that improves the accuracy of feature point positions by fusing a vision sensor and a lidar. Initialization is the process of determining the three-dimensional positions of feature points from two-dimensional image data, and it directly influences the performance of 6-DoF bearing-only SLAM. Prior to initialization, an extrinsic calibration method that estimates the rotational and translational relationship between the vision sensor and the lidar using multiple calibration tools is employed; the feature point initialization method is then built on the estimated extrinsic calibration parameters. To further improve the accuracy of the initialized feature points, an iterative automatic scaling parameter tuning technique is presented. The validity of the proposed method is verified in a 6-DoF bearing-only SLAM framework through indoor and outdoor tests that compare its estimation performance with that of the previous initialization method.
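
The core idea here, borrowing depth from nearby lidar returns to fix the unknown scale of image features, can be sketched as below. This is a minimal illustration assuming a pinhole intrinsic matrix K and known lidar-to-camera extrinsics (R, t); the nearest-projection depth lookup is a simplification, not the paper's actual initialization or its iterative scaling-parameter tuning.

```python
import numpy as np

def init_feature_points(features_px, lidar_pts, K, R, t, max_px_dist=5.0):
    """Assign 3D positions to 2D image features using nearby lidar returns.

    features_px: (N, 2) pixel coordinates of tracked features.
    lidar_pts:   (M, 3) lidar points in the lidar frame.
    K:           (3, 3) pinhole intrinsic matrix.
    R, t:        lidar-to-camera extrinsic rotation and translation.
    """
    # Move lidar points into the camera frame; keep points in front of it.
    pts_cam = lidar_pts @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Project the lidar points onto the image plane.
    proj = pts_cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]

    K_inv = np.linalg.inv(K)
    initialized = []
    for u, v in features_px:
        # The nearest projected lidar return supplies a depth hypothesis.
        d2 = np.sum((proj - np.array([u, v])) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] > max_px_dist ** 2:
            initialized.append(None)  # no lidar support for this feature
            continue
        # Back-project the feature ray and scale it to the lidar depth.
        ray = K_inv @ np.array([u, v, 1.0])
        initialized.append(ray / ray[2] * pts_cam[j, 2])
    return initialized
```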

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36, No. 6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm in which multiple robots build a map using an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, based on object extraction with Lucas-Kanade optical flow motion detection from images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data gathered by the individual robots. Global mapping is time-consuming because map data must be exchanged among the robots while they search the whole area. An omnidirectional image sensor has many advantages for object detection and mapping because it captures all the information around a robot simultaneously. The computational load of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created by each robot using omnidirectional-vision SLAM; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing the maps it produces with real maps.
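
The merging step of such a pipeline might look like the following occupancy-grid sketch, which fuses per-robot grids in log-odds space given known relative poses. Both the grid representation and the fusion rule are assumptions for illustration; the paper does not specify its merging at this level of detail.

```python
import numpy as np

def merge_local_maps(local_maps, poses, global_shape, resolution=0.05):
    """Fuse per-robot occupancy grids into one global log-odds map.

    local_maps: list of (H, W) arrays of occupancy probabilities in [0, 1].
    poses:      list of (x, y, theta) poses of each local-map origin in the
                global frame (metres / radians).
    """
    global_log = np.zeros(global_shape)
    for grid, (x, y, th) in zip(local_maps, poses):
        h, w = grid.shape
        # Cell centres of the local map, in metres.
        ys, xs = np.mgrid[0:h, 0:w] * resolution
        # Rigid transform of local cells into the global frame.
        gx = x + xs * np.cos(th) - ys * np.sin(th)
        gy = y + xs * np.sin(th) + ys * np.cos(th)
        gi = (gy / resolution).astype(int)
        gj = (gx / resolution).astype(int)
        ok = (gi >= 0) & (gi < global_shape[0]) & \
             (gj >= 0) & (gj < global_shape[1])
        # Accumulate evidence in log-odds so overlapping maps combine cleanly.
        p = np.clip(grid[ok], 1e-3, 1 - 1e-3)
        np.add.at(global_log, (gi[ok], gj[ok]), np.log(p / (1 - p)))
    return 1.0 / (1.0 + np.exp(-global_log))  # back to probabilities
```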

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image

  • 최윤원;최정원;대염염;이석규
    • Journal of Institute of Control, Robotics and Systems / Vol. 20, No. 8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on extracting obstacle features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fisheye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fisheye lens or mirror, but they enable real-time image processing for mobile robots because all the information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points in fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which makes processing faster than in previous systems. The core of the proposed algorithm can be summarized as follows. First, instantaneous 360° panoramic images around the robot are captured through fisheye lenses mounted pointing downward. Second, feature points on the floor surface are removed using a histogram filter, and the extracted obstacle candidates are labeled. Third, the locations of obstacles are estimated from motion vectors obtained by LKOF. Finally, the robot position is estimated with an extended Kalman filter based on the obstacle positions from LKOF, and a map is created. The reliability of the mapping algorithm is confirmed by comparing maps obtained with the proposed algorithm against real maps.
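
A sketch of the tracking step, using OpenCV's pyramidal Lucas-Kanade implementation (cv2.calcOpticalFlowPyrLK) on two consecutive fisheye frames. The file names and parameter values are placeholders, and the histogram-based floor filtering is only noted in a comment.

```python
import cv2
import numpy as np

# Hypothetical consecutive fisheye frames (grayscale).
prev = cv2.imread("fisheye_prev.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("fisheye_curr.png", cv2.IMREAD_GRAYSCALE)

# Detect corners; in the paper's pipeline, floor points would first be
# removed with a histogram filter and obstacle candidates labeled.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)

# Pyramidal Lucas-Kanade optical flow between the two frames.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=3)

# Motion vectors of successfully tracked points; their length and direction
# feed the obstacle position estimate and, later, the EKF update.
good_old = p0[status.flatten() == 1].reshape(-1, 2)
good_new = p1[status.flatten() == 1].reshape(-1, 2)
motion_vectors = good_new - good_old
```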

Vision-based Autonomous Semantic Map Building and Robot Localization

  • 임정훈;정승도;서일홍;최병욱
    • The Korean Institute of Electrical Engineers / Proceedings of the 2005 KIEE Conference, Information and Control Division / pp.86-88 / 2005
  • An autonomous semantic-map building method is proposed, with the robot localized within the semantic map. The semantic map is organized around objects represented as SIFT features, and vision-based relative localization is employed as the process model of an extended Kalman filter. Robust SLAM performance can therefore be expected even under poor conditions in which localization cannot be achieved by classical odometry-based SLAM.
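
A minimal sketch of representing a semantic-map object by SIFT descriptors and re-detecting it in a new frame, using OpenCV's SIFT (cv2.SIFT_create) with Lowe's ratio test. The file names and the match-count threshold are illustrative assumptions, not the paper's values.

```python
import cv2

# Hypothetical images: a stored semantic-map object and a current frame.
obj_img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
scene_img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Each mapped object is represented by its SIFT keypoints and descriptors.
sift = cv2.SIFT_create()
kp_obj, des_obj = sift.detectAndCompute(obj_img, None)
kp_scene, des_scene = sift.detectAndCompute(scene_img, None)

# Match descriptors and keep the unambiguous ones (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_obj, des_scene, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Enough good matches -> the object is visible; the matched keypoint
# locations can then serve as the observation for an EKF update.
print(f"object recognized: {len(good) >= 10} ({len(good)} matches)")
```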

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment

  • 김동훈;이동화;명현;최현택
    • Journal of Institute of Control, Robotics and Systems / Vol. 19, No. 8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (simultaneous localization and mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to navigate autonomously, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks despite harsh underwater conditions, including low visibility, noise, and large areas of featureless topography. To overcome these problems and utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo localization) and EKF (extended Kalman filter)-based SLAM algorithms. In the image processing step, weighted correlation-coefficient-based template matching and color-based image segmentation are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
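
Weighted template matching of this kind can be illustrated with a hand-rolled weighted normalized cross-correlation, where per-pixel weights emphasize reliable parts of the landmark template. This is a naive O(HW·hw) sketch for clarity, not the paper's exact correlation coefficient or an efficient implementation.

```python
import numpy as np

def weighted_ncc(patch, template, weights):
    """Weighted normalized cross-correlation between an image patch and a
    template, with per-pixel weights emphasizing reliable regions."""
    w = weights / weights.sum()
    pm = (w * patch).sum()                      # weighted means
    tm = (w * template).sum()
    pv = (w * (patch - pm) ** 2).sum()          # weighted variances
    tv = (w * (template - tm) ** 2).sum()
    cov = (w * (patch - pm) * (template - tm)).sum()
    return cov / np.sqrt(pv * tv + 1e-12)

def match_template(image, template, weights):
    """Slide the template over the image; return the best match location."""
    H, W = image.shape
    h, w = template.shape
    best, best_ij = -2.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            s = weighted_ncc(image[i:i + h, j:j + w], template, weights)
            if s > best:
                best, best_ij = s, (i, j)
    return best_ij, best
```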

Simultaneous Localization & Map-building of Mobile Robot in the Outdoor Environments by Vision-based Compressed Extended Kalman Filter

  • 윤석준;최현도;박성기;김수현;곽윤근
    • Journal of Institute of Control, Robotics and Systems / Vol. 12, No. 6 / pp.585-593 / 2006
  • In this paper, we propose a vision-based simultaneous localization and map-building (SLAM) algorithm. The SLAM problem asks for the location of a mobile robot in an unknown environment, which makes it one of the most important processes for mobile robots operating outdoors. The extended Kalman filter (EKF) is widely used to solve this problem, but it is computationally demanding (~O(N), where N is the dimension of the state vector). To reduce the computational complexity, we apply a compressed extended Kalman filter (CEKF) to a stereo image sequence. Moreover, because the mobile robot operates in outdoor environments, the full degrees of freedom of the robot must be estimated. To evaluate the proposed SLAM algorithm, we performed outdoor experiments using a new wheeled mobile robot, Robhaz-6W. The performance results of CEKF SLAM are presented.
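
The compression idea can be sketched as follows, in the spirit of Guivant and Nebot's compressed EKF: only the active block (here, the first na states) is updated at every measurement, while cheap factors are accumulated and flushed to the rest of the map once the robot leaves the active region. A simplified illustration, not the paper's implementation.

```python
import numpy as np

class CompressedEKF:
    """Minimal sketch of the compressed EKF (CEKF) idea for cutting SLAM
    cost: the active state block gets full updates, the inactive block is
    corrected in one deferred step."""

    def __init__(self, x, P, na):
        self.x, self.P, self.na = x.astype(float), P.astype(float), na
        self.Phi = np.eye(na)            # product of (I - K H) factors
        self.acc_v = np.zeros(na)        # accumulated mean-correction factor
        self.acc_M = np.zeros((na, na))  # accumulated covariance factor

    def update_local(self, z, h, Ha, Rm):
        """EKF update for a measurement z = h(x_active) involving only the
        active block, with Jacobian Ha and measurement noise Rm."""
        na = self.na
        Paa = self.P[:na, :na]
        S = Ha @ Paa @ Ha.T + Rm
        Sinv = np.linalg.inv(S)
        K = Paa @ Ha.T @ Sinv
        nu = z - h(self.x[:na])
        # Cheap O(na^2) bookkeeping for the inactive part, applied later.
        self.acc_v += self.Phi.T @ Ha.T @ Sinv @ nu
        self.acc_M += self.Phi.T @ Ha.T @ Sinv @ Ha @ self.Phi
        self.Phi = (np.eye(na) - K @ Ha) @ self.Phi
        # Immediate update of the active block only.
        self.x[:na] = self.x[:na] + K @ nu
        self.P[:na, :na] = Paa - K @ Ha @ Paa

    def flush(self):
        """Apply the accumulated corrections to the inactive block once."""
        na = self.na
        Pab0 = self.P[:na, na:].copy()   # cross-covariance, still original
        self.x[na:] += Pab0.T @ self.acc_v
        self.P[na:, na:] -= Pab0.T @ self.acc_M @ Pab0
        self.P[:na, na:] = self.Phi @ Pab0
        self.P[na:, :na] = self.P[:na, na:].T
        self.Phi = np.eye(na)
        self.acc_v[:] = 0.0
        self.acc_M[:] = 0.0
```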

A Simple Framework for Indoor Monocular SLAM

  • Nguyen, Xuan-Dao;You, Bum-Jae;Oh, Sang-Rok
    • International Journal of Control, Automation, and Systems / Vol. 6, No. 1 / pp.62-75 / 2008
  • Vision-based simultaneous localization and map building using a single camera, while compelling in theory, has until recently seen little extensive use in the practical realm of the real world. In this paper, we propose a simple framework for monocular SLAM of an indoor mobile robot using natural line features. Our focus is on presenting a novel approach for modeling the landmark before integrating it into monocular SLAM. We also discuss improving data association in a particle filter approach through a feature management scheme. In addition, we take constraints between features in the environment into account to reduce estimation errors and thereby improve performance. Our experimental results demonstrate the feasibility of the proposed SLAM algorithm in real time.
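
Extracting natural line features of the kind used as landmarks here can be sketched with a Canny edge map followed by a probabilistic Hough transform (cv2.HoughLinesP). The thresholds and the (rho, theta) parameterization at the end are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)

# Parameterize each segment, e.g. as (rho, theta) of its supporting line,
# before handing it to the particle-filter landmark model.
landmarks = []
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        theta = np.arctan2(y2 - y1, x2 - x1) + np.pi / 2  # line normal
        rho = x1 * np.cos(theta) + y1 * np.sin(theta)
        landmarks.append((rho, theta))
```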

Robust Mobile-Robot Localization for Indoor SLAM

  • 모세현;유동현;박종호;정길도
    • Journal of Institute of Control, Robotics and Systems / Vol. 22, No. 4 / pp.301-306 / 2016
  • This paper presents the results of a study on robust self-localization and indoor SLAM using external cameras (such as CCTV) together with the odometry of a mobile robot. First, the position of the mobile robot is estimated using a marker and odometry. This estimate is then fused with the camera data and odometry data using an extended Kalman filter. Finally, indoor SLAM is realized by applying the proposed method, which is demonstrated in an actual experiment.
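
A minimal sketch of the fusion structure described above: a planar EKF whose prediction step integrates wheel odometry and whose update step consumes an absolute (x, y) fix derived from a marker seen by the external camera. All noise values are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

x = np.zeros(3)                  # robot pose [x, y, theta]
P = np.eye(3) * 0.1
Q = np.diag([0.02, 0.02, 0.01])  # odometry (process) noise
R = np.diag([0.05, 0.05])        # external-camera fix noise

def predict(x, P, v, w, dt):
    """Propagate the pose with a unicycle odometry model."""
    th = x[2]
    x = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0,  1]])
    return x, F @ P @ F.T + Q

def update(x, P, z_xy):
    """Correct the pose with an absolute (x, y) fix from a marker."""
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])  # camera observes (x, y)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z_xy - H @ x)
    return x, (np.eye(3) - K @ H) @ P

x, P = predict(x, P, v=0.3, w=0.05, dt=0.1)        # odometry step
x, P = update(x, P, z_xy=np.array([0.04, 0.01]))   # marker fix step
```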

3D Map Construction from Spherical Video using Fisheye ORB-SLAM Algorithm

  • 김기식;박종승
    • Proceedings of the Korea Information Processing Society Conference / KIPS 2020 Fall Conference / pp.1080-1083 / 2020
  • This paper proposes a SLAM system based on spherical panoramas. In vision SLAM, a wider field of view lets the system grasp its surroundings quickly from fewer frames, and the larger amount of surrounding data enables more stable estimation. Spherical panoramic video has the widest possible field of view; because every direction can be used, the 3D map can be extended faster than with fisheye images. Existing fisheye-based systems accept only a wide frontal view, so their coverage is narrower than when spherical panoramas are used as input. This paper proposes a method for extending an existing fisheye-video-based SLAM system to the domain of spherical panoramas. The proposed method accurately computes the parameters required by the camera projection model and uses a Dual Fisheye Model to exploit the full field of view without loss.
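
The projection model behind a spherical-panorama front end reduces to mapping each equirectangular pixel to a unit viewing ray on the full sphere, which is why no direction is lost. A minimal sketch; the axis conventions are an assumption for illustration, and the paper's dual fisheye stitching is not reproduced here.

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing ray.

    Every pixel of a spherical panorama corresponds to a direction on the
    full sphere (y up, z forward here), so no field of view is discarded.
    """
    lon = (u / width - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi]
    lat = (0.5 - v / height) * np.pi        # latitude  in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

# Example: the image centre looks straight down the +z (forward) axis.
print(equirect_to_ray(1920, 960, 3840, 1920))  # -> approx. [0, 0, 1]
```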