• Title/Summary/Keyword: SLAM algorithm


A Practical FastSLAM Implementation Method using an Infrared Camera for Indoor Environments (실내 환경에서 Infrared 카메라를 이용한 실용적 FastSLAM 구현 방법)

  • Zhang, Hairong; Lee, Heon-Cheol; Lee, Beom-Hee
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.305-311 / 2009
  • FastSLAM is a factored solution to the SLAM problem using a Rao-Blackwellized particle filter. In this paper, we propose a practical FastSLAM implementation method using an infrared camera for indoor environments. The infrared camera is mounted on a Pioneer3 robot and points upward at the ceiling, where infrared tags are installed at a uniform height. The infrared tags detected by the camera serve as measurements, and the Nearest Neighbor method is used to solve the unknown data association problem. The global map is successfully built and the robot pose is estimated in real time by the FastSLAM 2.0 algorithm. The experimental results show the accuracy and robustness of the proposed method in a practical indoor environment.

  • PDF
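The unknown data association step described above, matching each infrared-tag detection to the nearest mapped landmark, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 2-D landmark representation and the chi-square gate value are assumptions.

```python
import numpy as np

def nn_associate(z, landmarks, covs, gate=9.21):
    """Associate one 2-D measurement z with the nearest known landmark.

    landmarks: (N, 2) array of landmark position estimates
    covs:      (N, 2, 2) array of innovation covariances
    Returns the index of the best landmark, or -1 if every Mahalanobis
    distance exceeds the chi-square gate (9.21 ~ 99% for 2 DOF),
    in which case a new landmark would be initialized.
    """
    best_i, best_d2 = -1, gate
    for i, (m, S) in enumerate(zip(landmarks, covs)):
        v = z - m                       # innovation
        d2 = v @ np.linalg.inv(S) @ v   # squared Mahalanobis distance
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i
```

In FastSLAM this gating would run per particle, against that particle's own landmark estimates.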

An Improved FastSLAM Algorithm using Fitness Sharing Technique (적합도 공유 기법을 적용한 향상된 FastSLAM 알고리즘)

  • Kwon, Oh-Sung; Hyeon, Byeong-Yong; Seo, Ki-Sung
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.4 / pp.487-493 / 2012
  • SLAM (Simultaneous Localization And Mapping) is a technique used by robots and autonomous vehicles to build a map of an unknown environment and simultaneously estimate the robot's position within it. FastSLAM (A Factored Solution to SLAM) is one of the representative SLAM methods, based on a particle filter and extended Kalman filters. However, it suffers from loss of particle diversity. In this paper, a new approach using fitness sharing is proposed to counter the loss of particle diversity, and it is compared with and analyzed against existing methods.
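The fitness-sharing idea, down-weighting particles that crowd the same region of the state space so that resampling preserves diversity, can be sketched as follows. The triangular sharing kernel and the sharing radius are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def shared_weights(poses, weights, sigma=1.0):
    """Fitness sharing: divide each particle weight by its niche count.

    poses:   (N, 2) particle positions
    weights: (N,) importance weights
    Particles crowded within radius sigma share fitness, so dense
    clusters are down-weighted and sparse ones keep more weight.
    """
    poses = np.asarray(poses, float)
    w = np.asarray(weights, float)
    # pairwise distances and a triangular sharing kernel sh(d)
    d = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=2)
    sh = np.clip(1.0 - d / sigma, 0.0, None)
    niche = sh.sum(axis=1)          # >= 1 because of self-sharing
    w = w / niche
    return w / w.sum()              # renormalize for resampling
```

Two coincident particles each receive half the effective weight of an isolated one, which is exactly the pressure that keeps resampling from collapsing onto a single cluster.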

Unmanned Aerial Vehicle Recovery Using a Simultaneous Localization and Mapping Algorithm without the Aid of Global Positioning System

  • Lee, Chang-Hun; Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences / v.11 no.2 / pp.98-109 / 2010
  • This paper deals with a new method of unmanned aerial vehicle (UAV) recovery when a UAV fails to receive a global positioning system (GPS) signal at an unprepared site. The proposed method is based on the simultaneous localization and mapping (SLAM) algorithm, a process by which a vehicle builds a map of an unknown environment and simultaneously uses this map to determine its position. Extensive research on SLAM algorithms shows that the map error converges to a lower bound that is a function of the error at the time of the first observation. For this reason, the proposed method can keep the position error of an inertial navigation system from diverging. In other words, a UAV can navigate with reasonable positional accuracy in an unknown environment without the aid of GPS; this is the main idea of the present paper. In particular, this paper focuses on path planning that maximizes this property of the SLAM algorithm. In this work, a SLAM algorithm based on the extended Kalman filter is used. For simplicity, a blimp-type UAV model is considered with three-dimensional point-shaped landmarks. Finally, the proposed method is evaluated through a number of simulations.
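The EKF-SLAM update behind the bounded-map-error property can be illustrated with a deliberately simplified state. Here the paper's blimp model and 3-D landmarks are reduced, as an assumption for brevity, to a planar state [rx, ry, lx, ly] with a linear relative-position measurement, so the EKF update collapses to a standard Kalman update.

```python
import numpy as np

def slam_update(x, P, z, R):
    """One SLAM measurement update for state x = [rx, ry, lx, ly].

    z is the landmark position measured relative to the vehicle, so
    the (here linear) measurement model is h(x) = [lx - rx, ly - ry].
    """
    H = np.array([[-1., 0., 1., 0.],
                  [0., -1., 0., 1.]])
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Repeated updates shrink the joint covariance toward a floor set by the uncertainty at the first observation, which is the convergence result the abstract exploits to bound INS drift.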

Integrated Navigation Algorithm using Velocity Incremental Vector Approach with ORB-SLAM and Inertial Measurement (속도증분벡터를 활용한 ORB-SLAM 및 관성항법 결합 알고리즘 연구)

  • Kim, Yeonjo; Son, Hyunjin; Lee, Young Jae; Sung, Sangkyung
    • The Transactions of The Korean Institute of Electrical Engineers / v.68 no.1 / pp.189-198 / 2019
  • In recent years, visual-inertial odometry (VIO) algorithms have been extensively studied for indoor and urban environments because they are more robust to dynamic scenes and environmental changes. In this paper, we propose a loosely coupled (LC) VIO algorithm that uses the velocity vectors from both visual odometry (VO) and an inertial measurement unit (IMU) as the measurement of an extended Kalman filter. Our approach improves the estimation performance of the filter without adding extra sensors while maintaining a simple integration framework that treats the VO as a black box. For the VO algorithm, we employed the fundamental part of ORB-SLAM, which uses ORB features. We performed an outdoor experiment with an RGB-D camera to evaluate the accuracy of the presented algorithm, and also evaluated it on a public dataset to compare with other visual navigation systems.
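The measurement construction, comparing a velocity increment integrated from IMU samples against the average velocity implied by successive VO poses, can be sketched as below. The function names, and the assumption that gravity has already been removed from the accelerometer samples, are illustrative rather than taken from the paper.

```python
import numpy as np

def velocity_increment(accels, dt):
    """Velocity-increment vector over one camera interval: the sum of
    IMU specific-force samples times the sample period (gravity
    assumed already compensated)."""
    return np.sum(np.asarray(accels, float) * dt, axis=0)

def vo_velocity(p_prev, p_curr, dT):
    """Average velocity over the camera interval from successive
    black-box VO (e.g. ORB-SLAM) position outputs."""
    return (np.asarray(p_curr, float) - np.asarray(p_prev, float)) / dT
```

In a loosely coupled filter, the innovation would then be the difference between the IMU-propagated velocity (previous estimate plus `velocity_increment`) and `vo_velocity`, fed to an EKF update as in the abstract.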

Laser Image SLAM based on Image Matching for Navigation of a Mobile Robot (이동 로봇 주행을 위한 이미지 매칭에 기반한 레이저 영상 SLAM)

  • Choi, Yun Won; Kim, Kyung Dong; Choi, Jung Won; Lee, Suk Gyu
    • Journal of the Korean Society for Precision Engineering / v.30 no.2 / pp.177-184 / 2013
  • This paper proposes an enhanced simultaneous localization and mapping (SLAM) algorithm based on laser image matching and an extended Kalman filter (EKF). In general, laser data are among the most efficient sources for localization of mobile robots and are more accurate than encoder data. For localization, the moving distance of a robot is often obtained from encoders, and the distance from the robot to landmarks is estimated by various sensors. Although encoders have high resolution, precisely estimating the current position of a robot is difficult because of encoder errors caused by wheel slip and backlash. In this paper, the position and heading of the robot are estimated by matching high-accuracy laser images obtained from a laser scanner. Speeded-Up Robust Features (SURF) is used to extract feature points from the previous and current laser images, and the feature points are then matched. From the matched points, the moving distance and heading angle are obtained. Experimental results using the proposed laser SLAM algorithm show its effectiveness.
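Recovering the moving distance and heading angle from matched feature points is a rigid-registration problem; a common least-squares solution (Kabsch/Procrustes via SVD) is sketched below as one plausible realization, not necessarily the authors' exact method.

```python
import numpy as np

def rigid_transform_2d(prev_pts, curr_pts):
    """Least-squares rotation R and translation t mapping matched
    feature points of the previous laser image onto the current one.
    The heading change is atan2(R[1, 0], R[0, 0]); the moving
    distance is the norm of t."""
    P = np.asarray(prev_pts, float)
    C = np.asarray(curr_pts, float)
    Pc = P - P.mean(axis=0)                 # center both point sets
    Cc = C - C.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Cc)     # cross-covariance SVD
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    t = C.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```

In practice the SURF matches would first be filtered (e.g. by a robust scheme such as RANSAC) before this closed-form fit.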

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are larger and compute depth for omni-directional images slowly. In this paper, we used a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculated fusion points from the planar coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained by the omni-directional image sensor, which captures a surround view in a single shot. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.

Implementation of Camera-Based Autonomous Driving Vehicle for Indoor Delivery using SLAM (SLAM을 이용한 카메라 기반의 실내 배송용 자율주행 차량 구현)

  • Kim, Yu-Jung; Kang, Jun-Woo; Yoon, Jung-Bin; Lee, Yu-Bin; Baek, Soo-Whang
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.687-694 / 2022
  • In this paper, we propose an autonomous vehicle platform that delivers goods to a designated destination based on a SLAM (Simultaneous Localization and Mapping) map generated indoors using Visual SLAM technology. To generate the indoor SLAM map, a depth camera was installed on top of a small autonomous vehicle platform, and a tracking camera was added for accurate localization within the map. In addition, a convolutional neural network (CNN) was used to recognize the destination label, and a driving algorithm was applied to arrive accurately at the destination. A prototype indoor delivery vehicle was manufactured, the accuracy of the SLAM map was verified, and a destination-label recognition experiment was performed with the CNN. The resulting high label recognition success rate confirmed the suitability of the implemented vehicle for indoor delivery.

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Dai, Yanyan; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fisheye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fisheye lens or mirror, but real-time image processing on mobile robots is feasible because they capture all information around the robot at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the obstacle feature points, yielding faster processing than previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fisheye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors computed by LKOF. Finally, the robot position is estimated with an extended Kalman filter based on the obstacle positions obtained by LKOF, and a map is created. The reliability of the proposed mapping algorithm is confirmed by comparing maps obtained with the proposed algorithm against real maps.
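The Lucas-Kanade step, solving the brightness-constancy equations in least squares for a motion vector, can be sketched in its simplest single-window form. Real trackers use pyramidal, per-feature windows; this whole-image version is only meant to show the core computation.

```python
import numpy as np

def lk_flow(I0, I1):
    """Single-window Lucas-Kanade: solve [Ix Iy] d = -It in least
    squares over all pixels for one motion vector d = (dx, dy)."""
    I0 = np.asarray(I0, float)
    I1 = np.asarray(I1, float)
    Ix = np.gradient(I0, axis=1)        # spatial gradients
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0                        # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                            # flow in pixels per frame
```

For a smooth pattern shifted by about a pixel, the least-squares solution recovers the shift; larger motions require the pyramidal variant.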

A Practical Solution toward SLAM in Indoor environment Based on Visual Objects and Robust Sonar Features (가정환경을 위한 실용적인 SLAM 기법 개발 : 비전 센서와 초음파 센서의 통합)

  • Ahn, Sung-Hwan; Choi, Jin-Woo; Choi, Min-Yong; Chung, Wan-Kyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.25-35 / 2006
  • Improving the practicality of SLAM requires various sensors to be fused effectively in order to cope with uncertainty induced by both the environment and the sensors. Combining sonar and vision sensors offers economical efficiency and complementary cooperation: it can remedy the false data association and divergence problems of sonar sensors, and overcome the low SLAM update frequency caused by the computational burden of vision sensors and their weakness to illumination changes. In this paper, we propose a SLAM method that fuses sonar sensors with a stereo camera. It consists of two schemes: extracting robust point and line features from sonar data, and recognizing planar visual objects using a multi-scale Harris corner detector and its SIFT descriptor against a pre-constructed object database. Fusing sonar features and visual objects through EKF-SLAM gives correct data association via object recognition and high-frequency updates via the sonar features. As a result, it increases the robustness and accuracy of SLAM in indoor environments. The performance of the proposed algorithm was verified by experiments in a home-like environment.

  • PDF