• Title/Summary/Keyword: 3D SLAM


Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa;Kim, Hyongjin;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through a 3D-RANSAC (RANdom SAmple Consensus) algorithm using 2D image features and depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud map.
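The RANSAC step above estimates a 6-DOF rigid motion from matched features whose depth is known. The following is a minimal sketch of that idea in NumPy, assuming matched 3D point pairs have already been back-projected from the two RGB-D frames; function names and thresholds are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): RANSAC estimation of a 6-DOF rigid
# transform between two sets of matched 3D points back-projected from RGB-D frames.
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ R @ P + t (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_pose(P, Q, iters=500, thresh=0.02):
    """P, Q: (N, 3) matched 3D points. Returns (R, t) refit on the best inlier set."""
    rng = np.random.default_rng(0)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)        # minimal sample
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)     # per-point residual
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(P[best], Q[best])
```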

Identifying Considerations for Developing SLAM-based Mobile Scan Backpack System for Rapid Building Scanning (신속한 건축물 스캔을 위한 SLAM기반 이동형 스캔백팩 시스템 개발 고려사항 도출)

  • Kang, Tae-Wook
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.3 / pp.312-320 / 2020
  • 3D scanning began in the field of manufacturing. In the construction field, a BIM (Building Information Modeling)-based 3D modeling environment was developed and is used across construction work, for example in factory prefabrication, structural construction inspection, and the inspection of plant facilities, bridges, and tunnel structures using 3D scanning technology. LiDAR scanners have higher accuracy and density than mobile scanners but require longer registration and data-processing times. On the other hand, interior building space management does not require especially high accuracy, and a mobile scan system lets the user move around conveniently. This study derives considerations for the development of a Simultaneous Localization and Mapping (SLAM)-based scan backpack system that moves freely and supports real-time point cloud registration. To derive these considerations and improve scan productivity, the paper proposes the mobile scan system, its framework, and its component structure. Prototype development was carried out in two stages, SLAM and ScanBackpack, to derive the considerations and analyze the results.

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, or sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is larger and is slow in computing depth information for omni-directional images. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculate fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
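As a rough illustration of how laser and fisheye data can be associated, the sketch below projects 2D laser points into a fisheye image using a generic equidistant model (r = f·θ). The focal length, principal point, projection model, and camera-laser mounting offset are placeholder assumptions, not the paper's calibration or fusion procedure.

```python
# Illustration only: associate 2D laser points with a downward-facing fisheye
# image via an equidistant projection (r = f * theta). f, cx, cy and the
# camera-laser extrinsics are assumed values, not the paper's calibration.
import numpy as np

def laser_to_xyz(ranges, angles):
    """Polar 2D laser scan -> 3D points in the laser frame (z = 0)."""
    return np.stack([ranges * np.cos(angles),
                     ranges * np.sin(angles),
                     np.zeros_like(ranges)], axis=1)

def project_equidistant_fisheye(pts_cam, f=300.0, cx=640.0, cy=480.0):
    """3D camera-frame points -> fisheye pixel coordinates."""
    x, y, z = pts_cam.T
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant fisheye model
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)
```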

Considerations for Developing a SLAM System for Real-time Remote Scanning of Building Facilities (건축물 실시간 원격 스캔을 위한 SLAM 시스템 개발 시 고려사항)

  • Kang, Tae-Wook
    • Journal of KIBIM / v.10 no.1 / pp.1-8 / 2020
  • In managing building facilities, spatial information is the basic data for decision making. However, acquiring spatial information is not easy. The site and the drawings often differ because of changes to the facilities over time after construction. In this case, the site must be scanned to obtain spatial information. The scan data contains the actual spatial information, which is a great help in making space-related decisions. However, to obtain scan data, an expensive LiDAR (Light Detection and Ranging) device must be purchased, and special software for processing the data obtained from the device must be available. Recently, SLAM (Simultaneous Localization and Mapping), an advanced map-generation technology, has been spreading in the field of robotics. Using SLAM, 3D spatial information can be obtained quickly in real time without a separate matching process. This study develops a prototype and tests whether SLAM technology can be used to obtain the spatial information needed for facility management, and from this derives considerations for developing a SLAM device for real-time remote scanning. The study focuses on a system development method that acquires the spatial information necessary for facility management through SLAM technology. To this end, we develop a prototype, analyze its pros and cons, and then suggest considerations for developing a SLAM system.

3D Map Construction from Spherical Video using Fisheye ORB-SLAM Algorithm (어안 ORB-SLAM 알고리즘을 사용한 구면 비디오로부터의 3D 맵 생성)

  • Kim, Ki-Sik;Park, Jong-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.1080-1083 / 2020
  • This paper proposes a SLAM system based on spherical panoramas. In visual SLAM, the wider the field of view, the more quickly the surroundings can be grasped with fewer frames, and the larger amount of surrounding data enables more stable estimation. Spherical panorama video has the widest field of view; because every direction can be used, the 3D map can be extended more quickly than with fisheye images. Existing systems based on fisheye images can accommodate only a frontal wide angle, so their coverage is smaller than when spherical panoramas are used as input. This paper proposes a method that extends an existing fisheye-video-based SLAM system to the domain of spherical panoramas. The proposed method accurately computes the parameters required by the camera projection model and uses the full field of view without loss through a Dual Fisheye Model.
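For intuition about feeding spherical panoramas to a feature-based SLAM front end, the sketch below maps equirectangular pixels to unit bearing vectors on the viewing sphere and back. This is a generic unit-sphere camera model under an assumed axis convention, not the paper's Dual Fisheye Model.

```python
# A minimal sketch, not the authors' implementation: a unit-sphere camera model
# for spherical (equirectangular) panorama frames. The pixel/axis convention is
# an assumption made for illustration.
import numpy as np

def pixel_to_bearing(u, v, W, H):
    """Equirectangular pixel -> unit bearing vector on the viewing sphere."""
    lon = (u / W - 0.5) * 2.0 * np.pi          # longitude in [-pi, pi)
    lat = (0.5 - v / H) * np.pi                # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     -np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def bearing_to_pixel(d, W, H):
    """Unit bearing vector -> equirectangular pixel (inverse of the above)."""
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(-d[1])
    return (lon / (2.0 * np.pi) + 0.5) * W, (0.5 - lat / np.pi) * H
```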

A Study on 3D Indoor mapping for as-built BIM creation by using Graph-based SLAM (준공 BIM 구축을 위한 Graph-based SLAM 기반의 실내공간 3차원 지도화 연구)

  • Jung, Jaehoon;Yoon, Sanghyun;Stachniss, Cyrill;Heo, Joon
    • Korean Journal of Construction Engineering and Management / v.17 no.3 / pp.32-42 / 2016
  • In Korea, the absence of BIM use in existing civil structures and buildings is driving a demand for as-built BIM. As-built BIMs are often created using laser scanners that provide dense 3D point cloud data. Conventional static laser scanning approaches often suffer from limited operability due to the difficulty of moving the equipment, the selection of scanning locations, and the need to place targets or extract tie points to register each scanned point cloud. This paper aims at reducing that manual effort using a kinematic 3D laser scanning system based on graph-based simultaneous localization and mapping (SLAM) for continuous indoor mapping. The robotic platform carries three 2D laser scanners: the front scanner is mounted horizontally to compute the robot's trajectory and to build the SLAM graph; the other two scanners are mounted vertically to scan the profiles of the surrounding environment. To reduce the accumulated error in the trajectory of the platform through loop closures, the graph-based SLAM system incorporates an AdaBoost loop-closure approach, which is particularly suitable for the developed multi-scanner system because it provides more features for training than a single-scanner system. We implemented the proposed method and evaluated it in two indoor test sites. Our experimental results show that the false positive rate was reduced by 13.6% and 7.9% for the two datasets. Finally, the 2D and 3D mapping results of the two test sites confirm the effectiveness of the proposed graph-based SLAM.
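The core of graph-based SLAM is a least-squares problem over poses connected by odometry and loop-closure edges. The following deliberately simplified sketch optimizes 2D positions only (rotations omitted) to illustrate how a single loop-closure edge corrects accumulated drift; it is not the paper's optimizer.

```python
# Deliberately simplified pose-graph sketch: poses are nodes, odometry and
# loop-closure measurements are relative-position edges, and the trajectory is
# the least-squares solution. Rotations are omitted, so this only illustrates
# how a loop closure corrects drift.
import numpy as np

def optimize_positions(n_poses, edges, anchor=0):
    """edges: list of (i, j, dx, dy) meaning x_j - x_i = (dx, dy)."""
    A = np.zeros((2 * len(edges) + 2, 2 * n_poses))
    b = np.zeros(2 * len(edges) + 2)
    for k, (i, j, dx, dy) in enumerate(edges):
        for d, delta in enumerate((dx, dy)):
            A[2 * k + d, 2 * j + d] = 1.0
            A[2 * k + d, 2 * i + d] = -1.0
            b[2 * k + d] = delta
    A[-2, 2 * anchor] = A[-1, 2 * anchor + 1] = 1.0   # pin the first pose at the origin
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(-1, 2)

# Nine odometry edges with a small lateral drift, plus one loop-closure edge
# from the last pose back to the first, which pulls the estimate into consistency.
edges = [(i, i + 1, 1.0, 0.05) for i in range(9)] + [(9, 0, -9.0, 0.0)]
print(optimize_positions(10, edges))
```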

3D Simultaneous Localization and Map Building (SLAM) using a 2D Laser Range Finder based on Vertical/Horizontal Planar Polygons (2차원 레이저 거리계를 이용한 수직/수평 다각평면 기반의 위치인식 및 3차원 지도제작)

  • Lee, Seungeun;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems / v.20 no.11 / pp.1153-1163 / 2014
  • An efficient 3D SLAM (Simultaneous Localization and Map Building) method is developed for urban building environments using a tilted 2D LRF (Laser Range Finder), in which the 3D map is composed of vertical/horizontal planar polygons. While the mobile robot is moving, line segments on the scan plane are successively extracted from the LRF distance data in each scan period. We propose an "expected line segment" concept for matching: each scan line segment is added to the most suitable line segment group of a vertical/horizontal planar polygon in the 3D map. After performing 2D localization to determine the pose of the mobile robot, we construct updated vertical/horizontal infinite planes and then determine their boundaries to obtain the vertical/horizontal planar polygons that constitute the 3D map. Finally, the proposed SLAM algorithm is validated via extensive simulations and experiments.
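Line-segment extraction from each 2D scan is the first step described above. A common way to do this is split-and-merge, sketched below on ordered scan points; the paper's own extraction and "expected line segment" matching are more elaborate, so treat this only as an illustration.

```python
# A hedged sketch of one standard extraction method (split-and-merge) for line
# segments in an ordered 2D LRF scan; not the paper's method.
import numpy as np

def split_and_merge(pts, dist_thresh=0.05):
    """pts: (N, 2) ordered scan points. Returns (start_idx, end_idx) index pairs."""
    def point_line_dist(p, a, b):
        ab, ap = b - a, p - a
        return abs(ab[0] * ap[1] - ab[1] * ap[0]) / (np.linalg.norm(ab) + 1e-12)

    def split(lo, hi):
        d = np.array([point_line_dist(pts[k], pts[lo], pts[hi])
                      for k in range(lo + 1, hi)])
        if d.size == 0 or d.max() < dist_thresh:
            return [(lo, hi)]                       # points fit one segment
        k = lo + 1 + int(d.argmax())                # split at the farthest point
        return split(lo, k) + split(k, hi)

    return split(0, len(pts) - 1)
```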

Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan;Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.353-358 / 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM can lack stability and accuracy. Nowadays, multi-camera setups and large field-of-view cameras are used to address these issues, but a multi-camera system increases the computational complexity of the algorithm. Therefore, for multi-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset from an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
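The evaluation metric named above, absolute trajectory error (ATE), aligns the estimated trajectory to ground truth with a rigid transform and reports the RMSE of the remaining translational error. A minimal sketch, assuming the two trajectories are already time-associated:

```python
# Sketch of the absolute trajectory error (ATE): rigidly align the estimated
# positions to ground truth (Kabsch/SVD), then report the translational RMSE.
# Not the paper's exact evaluation script.
import numpy as np

def ate_rmse(est, gt):
    """est, gt: (N, 3) time-associated positions. Returns the RMSE after alignment."""
    ce, cg = est.mean(axis=0), gt.mean(axis=0)
    H = (est - ce).T @ (gt - cg)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # rotation aligning est to gt
    aligned = (est - ce) @ R.T + cg
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```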

Probabilistic Object Recognition in a Sequence of 3D Images (연속된 3차원 영상에서의 통계적 물체인식)

  • Jang Dae-Sik;Rhee Yang-Won;Sheng Guo-Rui
    • KSCI Review / v.14 no.1 / pp.241-248 / 2006
  • The recognition of relatively big and rarely moved objects, such as a refrigerator or an air conditioner, is necessary because these objects can serve as crucial, globally stable features for Simultaneous Localization and Map Building (SLAM) in indoor environments. In this paper, we propose a novel method to recognize such big objects using a sequence of 3D scenes. Particles representing the object to be recognized are scattered into the environment, and the probability of each particle is calculated by a matching test against the 3D lines of the environment. Based on the probability and the degree of convergence of the particles, we can recognize the object in the environment, and the pose of the object is also estimated. The experimental results show the feasibility of incremental object recognition based on particle filtering and its application to SLAM.
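The particle-based recognition loop described above can be summarized as: perturb pose hypotheses, weight them by a matching score against observed 3D lines, and resample. A minimal sketch with a placeholder scoring function (the paper's matching test is not reproduced):

```python
# Minimal particle-filter sketch: pose hypotheses are perturbed, weighted by a
# matching score (placeholder here), and resampled so the cloud converges on
# the object pose. Not the paper's implementation.
import numpy as np

def systematic_resample(particles, weights, rng):
    """Draw len(particles) new particles proportional to their weights."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx]

def particle_filter_step(particles, match_score, rng, noise=0.05):
    """particles: (N, d) pose hypotheses; match_score: pose -> non-negative likelihood."""
    particles = particles + rng.normal(scale=noise, size=particles.shape)  # diffuse
    w = np.array([match_score(p) for p in particles], dtype=float)
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))      # normalize
    return systematic_resample(particles, w, rng)
```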


EKF SLAM-based Camera Tracking Method by Establishing the Reference Planes (기준 평면의 설정에 의한 확장 칼만 필터 SLAM 기반 카메라 추적 방법)

  • Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of Korea Game Society / v.12 no.3 / pp.87-96 / 2012
  • This paper presents a novel EKF (Extended Kalman Filter)-based SLAM (Simultaneous Localization And Mapping) system for stable camera tracking and re-localization. The 3D points obtained by SLAM are triangulated using Delaunay triangulation to establish a reference plane, and features are described by BRISK (Binary Robust Invariant Scalable Keypoints). The proposed method estimates the camera parameters from the homography of the reference plane when the tracking errors of EKF SLAM have accumulated significantly. Using robust descriptors across the sequence enables the camera position to be re-localized through matching over the sequence even when the camera moves abruptly.
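Recovering camera pose from the homography of a planar reference is a standard computation: with the plane at z = 0, inv(K) @ H is proportional to [r1 r2 t]. A hedged sketch of that recovery, assuming the intrinsics K and the plane-to-image homography H are given, is shown below; it is not the paper's code.

```python
# Hedged sketch (not the paper's code): camera pose from the homography of a
# reference plane at z = 0, using inv(K) @ H ~ [r1 r2 t]. K and H are assumed
# to come from calibration and feature matching on the plane.
import numpy as np

def pose_from_plane_homography(H, K):
    """H: 3x3 plane-to-image homography, K: 3x3 intrinsics. Returns (R, t)."""
    M = np.linalg.inv(K) @ H
    M = M / np.linalg.norm(M[:, 0])          # scale so the first column is unit length
    if M[2, 2] < 0:                          # keep the plane in front of the camera
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.stack([r1, r2, np.cross(r1, r2)], axis=1)
    U, _, Vt = np.linalg.svd(R)              # project onto the nearest true rotation
    return U @ Vt, t
```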