• Title/Summary/Keyword: SLAM Technology


Geographical Group-based FastSLAM Algorithm for Maintenance of the Diversity of Particles (파티클 다양성 유지를 위한 지역적 그룹 기반 FastSLAM 알고리즘)

  • Jang, June-Young; Ji, Sang-Hoon; Park, Hong Seong
    • Journal of Institute of Control, Robotics and Systems, v.19 no.10, pp.907-914, 2013
  • FastSLAM is an algorithm for SLAM (Simultaneous Localization and Mapping) that uses a Rao-Blackwellized particle filter, and its performance is known to degenerate over time due to the loss of particle diversity, mainly caused by the particle depletion problem in the resampling phase. In this paper, the GeSPIR (Geographically Stratified Particle Information-based Resampling) technique is proposed to solve the particle depletion problem. The proposed algorithm consists of four steps: the first groups the particles into K regions; the second obtains the normalized weight of each region; the third specifies the protected areas; and the fourth resamples using regional equalization weights. Simulations show that the proposed algorithm obtains lower RMS errors in both robot and feature positions than the conventional FastSLAM algorithm.
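  The four-step scheme described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the quantile-based grouping along x, the protection threshold, and the per-region quota are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def regional_resample(positions, weights, k=4, protect_thresh=0.05):
    """Sketch of geographically stratified resampling:
    (1) group particles into k regions, (2) compute each region's
    normalized weight, (3) protect low-weight regions so they keep
    at least one particle, (4) resample within each region."""
    n = len(weights)
    # Step 1: group particles into k regions by x-coordinate quantiles.
    edges = np.quantile(positions[:, 0], np.linspace(0, 1, k + 1))
    groups = np.clip(np.searchsorted(edges, positions[:, 0]) - 1, 0, k - 1)
    # Step 2: normalized weight of each region.
    region_w = np.array([weights[groups == g].sum() for g in range(k)])
    region_w /= region_w.sum()
    # Step 3: protected (low-weight) regions still get at least one slot.
    quota = np.maximum((region_w * n).astype(int),
                       (region_w < protect_thresh).astype(int))
    # Step 4: resample within each region with locally normalized weights.
    out = []
    for g in range(k):
        idx = np.flatnonzero(groups == g)
        if len(idx) == 0 or quota[g] == 0:
            continue
        w = weights[idx] / weights[idx].sum()
        out.append(rng.choice(idx, size=quota[g], p=w))
    return np.concatenate(out)

positions = rng.uniform(0, 10, size=(100, 2))
weights = rng.uniform(size=100)
weights /= weights.sum()
survivors = regional_resample(positions, weights)
print(len(survivors))
```

  Because resampling is stratified per region, a geographically isolated group of particles cannot be wiped out by a few dominant weights elsewhere, which is the diversity-preserving effect the paper targets.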

Line-Based SLAM Using Vanishing Point Measurements Loss Function (소실점 정보의 Loss 함수를 이용한 특징선 기반 SLAM)

  • Hyunjun Lim; Hyun Myung
    • The Journal of Korea Robotics Society, v.18 no.3, pp.330-336, 2023
  • In this paper, a novel line-based simultaneous localization and mapping (SLAM) method using a loss function for vanishing point measurements is proposed. In feature-based SLAM, the Huber norm is generally used as the loss function for point and line features: because point and line measurements define the reprojection error in the image plane as the residual, linear loss functions such as the Huber norm are suitable. However, such loss functions are not suitable for vanishing point measurements, whose residuals are unbounded. To tackle this problem, we propose a loss function for vanishing point measurements based on the unit sphere model. Finally, we prove the validity of the proposed loss function through experiments on a public dataset.
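  The motivation can be illustrated numerically. The abstract does not give the exact loss, so the sketch below only contrasts the two residual types it mentions: a Huber loss on an image-plane residual (which grows without bound as a vanishing point moves toward infinity) versus an angular residual between direction vectors on the unit sphere (bounded by pi).

```python
import numpy as np

def huber(r, delta=1.0):
    """Standard Huber loss: quadratic near zero, linear for large residuals."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def sphere_residual(vp_meas, vp_pred):
    """Angular residual between two vanishing-point directions on the
    unit sphere; bounded even when the image-plane VP is near infinity."""
    u = vp_meas / np.linalg.norm(vp_meas)
    v = vp_pred / np.linalg.norm(vp_pred)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

# A vanishing point far out in the image plane: the pixel residual explodes,
# but the spherical residual saturates near pi/2.
vp_far = np.array([1e6, 0.0, 1.0])   # homogeneous image coordinates
vp_pred = np.array([0.0, 0.0, 1.0])
print(sphere_residual(vp_far, vp_pred))
print(huber(np.array([1e6]))[0])
```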

Experiments of Unmanned Underwater Vehicle's 3 Degrees of Freedom Motion Applied the SLAM based on the Unscented Kalman Filter (무인 잠수정 3자유도 운동 실험에 대한 무향 칼만 필터 기반 SLAM기법 적용)

  • Hwang, A-Rom; Seong, Woo-Jae; Jun, Bong-Huan; Lee, Pan-Mook
    • Journal of Ocean Engineering and Technology, v.23 no.2, pp.58-68, 2009
  • The increased use of unmanned underwater vehicles (UUVs) has led to the development of alternative navigational methods that do not employ acoustic beacons and dead-reckoning sensors. This paper describes a simultaneous localization and mapping (SLAM) scheme that uses range sonars mounted on a small UUV. SLAM is an alternative navigation method that measures the environment through which the vehicle is passing and provides the relative position of the UUV. A SLAM algorithm that uses several ranging sonars is presented. This technique utilizes an unscented Kalman filter to estimate the locations of the UUV and the surrounding objects. For efficiency, the nearest neighbor standard filter is introduced as the data association algorithm in the SLAM, associating the sonar returns at each time step with the stored targets. The proposed SLAM algorithm was tested in experiments under various three-degrees-of-freedom motion conditions. The results showed that the algorithm was capable of estimating the positions of the UUV and the surrounding objects, and demonstrated that it will perform well in various environments.
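  The nearest-neighbor association step can be sketched as below. This is a simplified Euclidean-distance version with a hypothetical gate value; in a full UKF SLAM the gating would normally use the Mahalanobis distance under the innovation covariance.

```python
import numpy as np

def nearest_neighbor_associate(measurements, landmarks, gate=1.0):
    """Nearest-neighbor standard filter sketch: each sonar return is
    paired with the closest stored landmark; returns farther than the
    gate distance are treated as new landmarks (index -1)."""
    assoc = []
    for z in measurements:
        d = np.linalg.norm(landmarks - z, axis=1)
        j = int(np.argmin(d))
        assoc.append(j if d[j] <= gate else -1)
    return assoc

landmarks = np.array([[0.0, 0.0], [5.0, 5.0]])
measurements = np.array([[0.2, -0.1], [4.8, 5.3], [10.0, 10.0]])
print(nearest_neighbor_associate(measurements, landmarks))  # [0, 1, -1]
```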

Three-dimensional Map Construction of Indoor Environment Based on RGB-D SLAM Scheme

  • Huang, He; Weng, FuZhou; Hu, Bo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.2, pp.45-53, 2019
  • RGB-D SLAM (Simultaneous Localization and Mapping) refers to the technique of using a depth camera as the visual sensor for SLAM. In view of the high cost of laser sensors and the indefinite scale of maps built with traditional monocular and binocular cameras, a method for creating a three-dimensional map of an indoor environment from depth data combined with an RGB-D SLAM scheme is studied. The method uses a mobile robot system equipped with a consumer-grade RGB-D sensor (Kinect) to acquire depth data, and then creates indoor three-dimensional point cloud maps in real time through key steps such as positioning point generation, closed-loop detection, and map construction. Field experiment results show that the average error of the point cloud map created by the algorithm is 0.0045 m, which ensures the stability of map construction from depth data and enables accurate real-time three-dimensional mapping of unknown indoor environments.
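  The core operation behind such point cloud maps is back-projecting a depth image through the pinhole model. A minimal sketch, with typical Kinect-style intrinsics assumed for illustration (the paper does not state its calibration values):

```python
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (meters) into a 3D point cloud in the
    camera frame using the pinhole model: X = (u - cx) * Z / fx, etc.
    The intrinsics are illustrative assumptions."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid zero-depth pixels

depth = np.full((480, 640), 2.0)  # a flat wall 2 m away
cloud = depth_to_points(depth)
print(cloud.shape)  # (307200, 3)
```

  In a full pipeline, each frame's cloud is transformed by the estimated camera pose before being merged into the global map.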

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won; Kwon, Kee-Koo; Lee, Soo-In; Choi, Jeong-Won; Lee, Suk-Gyu
    • ETRI Journal, v.36 no.6, pp.913-923, 2014
  • This paper proposes a global mapping algorithm for multiple robots using an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, based on an object extraction method that uses Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by all of the individual robots. Global mapping takes a long time to process because map data are exchanged among the individual robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computation of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on an omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm and real maps.
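  The merging step in the second stage amounts to transforming each robot's local map into a common global frame. A minimal sketch, assuming known robot poses and 2D point maps (the paper's actual merging also handles map-data exchange and correction):

```python
import numpy as np

def se2(x, y, theta):
    """2D rigid transform (a robot's pose in the global frame)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def merge_maps(local_maps, poses):
    """Merge each robot's local map (Nx2 points in its own frame)
    into one global map using the robots' global poses."""
    merged = []
    for pts, pose in zip(local_maps, poses):
        hom = np.c_[pts, np.ones(len(pts))]     # homogeneous coordinates
        merged.append((pose @ hom.T).T[:, :2])  # into the global frame
    return np.vstack(merged)

map_a = np.array([[1.0, 0.0], [2.0, 0.0]])  # robot A sees points ahead of it
map_b = np.array([[1.0, 0.0]])              # robot B sees one point ahead
poses = [se2(0, 0, 0), se2(5, 0, np.pi)]    # B sits at (5, 0) facing back
print(merge_maps([map_a, map_b], poses))
```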

Analysis of Applicability of Visual SLAM for Indoor Positioning in the Building Construction Site (Visual SLAM의 건설현장 실내 측위 활용성 분석)

  • Kim, Taejin; Park, Jiwon; Lee, Byoungmin; Bae, Kangmin; Yoon, Sebeen; Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference, 2022.11a, pp.47-48, 2022
  • Positioning technology, which measures the position of a person or object, is a key technology for handling locations in a real coordinate system or converging the real and virtual worlds, as in digital twins, augmented reality, virtual reality, and autonomous driving. In estimating the location of a person or object at an indoor construction site, there are restrictions: location information cannot be received from outside, the communication infrastructure is insufficient, and it is difficult to install additional devices. Therefore, this study tested the direct sparse odometry algorithm, one of the visual Simultaneous Localization and Mapping (vSLAM) methods that estimate the current location and a surrounding map using only image information, at an indoor construction site, and analyzed its applicability as an indoor positioning technology. As a result, it was found that the surrounding map and the current location can be properly estimated even at an indoor construction site, which has relatively few feature points. The results of this study can serve as reference data for researchers working on indoor positioning technology for construction sites.


Laser Image SLAM based on Image Matching for Navigation of a Mobile Robot (이동 로봇 주행을 위한 이미지 매칭에 기반한 레이저 영상 SLAM)

  • Choi, Yun Won; Kim, Kyung Dong; Choi, Jung Won; Lee, Suk Gyu
    • Journal of the Korean Society for Precision Engineering, v.30 no.2, pp.177-184, 2013
  • This paper proposes an enhanced Simultaneous Localization and Mapping (SLAM) algorithm based on laser image matching and an Extended Kalman Filter (EKF). In general, laser data are among the most useful for the localization of mobile robots and are more accurate than encoder data. For localization of a mobile robot, the moving distance is often obtained from encoders, and the distances from the robot to landmarks are estimated by various sensors. Though an encoder has high resolution, it is difficult to estimate the robot's current position precisely because of encoder errors caused by wheel slip and backlash. In this paper, the position and angle of the robot are estimated by comparing laser images obtained from a high-accuracy laser scanner. In addition, Speeded-Up Robust Features (SURF) are used to extract feature points from the previous and current laser images and match them. As a result, the moving distance and heading angle are obtained from the matched points. The experimental results using the proposed laser SLAM algorithm show its effectiveness for robot SLAM.
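  Once SURF correspondences between consecutive laser images are available, the displacement and heading change can be recovered from the matched points. A sketch of that last step using the standard 2D Kabsch/Procrustes solution (the paper's exact estimator is not stated in the abstract, so this is an assumed realization):

```python
import numpy as np

def estimate_rigid_2d(prev_pts, curr_pts):
    """Estimate the rotation and translation that map matched feature
    points of the previous laser image onto the current one (2D Kabsch)."""
    p0 = prev_pts - prev_pts.mean(axis=0)
    p1 = curr_pts - curr_pts.mean(axis=0)
    u, _, vt = np.linalg.svd(p0.T @ p1)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:  # guard against a reflection solution
        vt[-1] *= -1
        r = (u @ vt).T
    t = curr_pts.mean(axis=0) - r @ prev_pts.mean(axis=0)
    return r, t

# Synthetic check: rotate matched points by 10 degrees and shift them.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
prev_pts = np.random.default_rng(1).uniform(-5, 5, size=(20, 2))
curr_pts = prev_pts @ R.T + np.array([0.3, -0.1])
r, t = estimate_rigid_2d(prev_pts, curr_pts)
print(np.rad2deg(np.arctan2(r[1, 0], r[0, 0])))  # recovered heading change
```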

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.21 no.7, pp.634-640, 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are larger and slow to compute depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which can acquire a surround view at the same time. The effectiveness of the proposed method is confirmed through comparison between maps obtained using the proposed algorithm and real maps.
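  The fusion-point idea can be sketched in simplified form: the 2D laser gives an obstacle's plane position, the image gives the elevation angle of the obstacle's top edge, and combining them yields a 3D point. The pinhole-style geometry below is an assumption for illustration; the paper uses a fisheye projection model.

```python
import numpy as np

def fuse(laser_xy, outline_elevation):
    """Hypothetical fusion-point sketch: combine a 2D laser point (x, y)
    with the elevation angle of the obstacle outline seen in the image
    to recover the obstacle height z = range * tan(elevation)."""
    r = np.linalg.norm(laser_xy)
    z = r * np.tan(outline_elevation)
    return np.array([laser_xy[0], laser_xy[1], z])

# An obstacle 5 m away whose top edge appears at 45 degrees elevation.
print(fuse(np.array([3.0, 4.0]), np.deg2rad(45)))
```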

Depth-hybrid speeded-up robust features (DH-SURF) for real-time RGB-D SLAM

  • Lee, Donghwa; Kim, Hyungjin; Jung, Sungwook; Myung, Hyun
    • Advances in Robotics Research, v.2 no.1, pp.33-44, 2018
  • This paper presents a novel feature detection algorithm called depth-hybrid speeded-up robust features (DH-SURF), which augments the speeded-up robust features (SURF) algorithm with depth information. In the keypoint detection part of classical SURF, the standard deviation of the Gaussian kernel is varied for its scale-invariance property, resulting in increased computational complexity. We propose a keypoint detection method with less variation of the standard deviation by using depth data from a red-green-blue depth (RGB-D) sensor. Our approach maintains the scale-invariance property while reducing computation time. An RGB-D simultaneous localization and mapping (SLAM) system uses a feature extraction method and depth data concurrently; thus, such a system is well-suited for demonstrating the performance of the DH-SURF method. DH-SURF was implemented on both a central processing unit (CPU) and a graphics processing unit (GPU) and was validated through real-time RGB-D SLAM.
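  The underlying idea, selecting the detection scale from depth rather than searching all scales, follows from the pinhole model: a feature of fixed physical size projects to about s = f * size / depth pixels. A minimal sketch with illustrative (assumed) focal length and feature size; the paper's actual kernel schedule is not given in the abstract:

```python
def scale_from_depth(depth_m, f_pixels=525.0, feature_size_m=0.05):
    """DH-SURF-style sketch: instead of varying the Gaussian kernel
    across every octave, predict the expected pixel scale of a feature
    directly from its measured depth (pinhole projection). f_pixels and
    feature_size_m are illustrative assumptions."""
    return f_pixels * feature_size_m / depth_m

# Scale shrinks inversely with depth, so only a narrow band of kernel
# sizes needs to be evaluated at each pixel.
for d in (0.5, 1.0, 2.0, 4.0):
    print(d, scale_from_depth(d))
```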

UAV and LiDAR SLAM Combination Effectiveness Review for Indoor and Outdoor Reverse Engineering of Multi-Story Building (복층 건물 실내외 역설계를 위한 UAV 및 LiDAR SLAM 조합 효용성 검토)

  • Kang, Joon-Oh; Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX, v.50 no.2, pp.69-79, 2020
  • Recently, smart cities that solve various urban problems based on IoT technology are in the spotlight. In particular, cases of BIM application for smooth management of construction and maintenance are increasing, and spatial information is converted into 3D data through convergence technologies and used for safety diagnosis. The purpose of this study is to create and combine point clouds of a multi-story building using UAV and LiDAR equipment, namely a terrestrial laser scanner and a handheld LiDAR SLAM device, supplementing the occluded areas and the disadvantages of each technology, and to examine the effectiveness of indoor and outdoor reverse engineering by evaluating shape reproduction and accuracy. As a result of the review, it was confirmed that the coordinate accuracy of the data was improved by creating and combining the indoor and outdoor point clouds of the multi-story building using the three technologies. In particular, by supplementing the shortcomings of each technology, the completeness of the building's shape reproduction was improved, the occluded areas and boundaries were clearly distinguished, and the effectiveness of reverse engineering was verified.