• Title/Summary/Keyword: vision slam


ARVisualizer: A Markerless Augmented Reality Approach for Indoor Building Information Visualization System

  • Kim, Albert Hee-Kwan; Cho, Hyeon-Dal
    • Spatial Information Research, v.16 no.4, pp.455-465, 2008
  • Augmented reality (AR) has tremendous potential for visualizing geospatial information, especially on actual physical scenes. However, to use augmented reality in mobile systems, much research has relied on GPS or ubiquitous marker-based approaches. Although several papers address vision-based markerless tracking, previous approaches provide fairly good results only in largely controlled environments. Localization and tracking of the current position become a more complex problem in indoor environments. Many have proposed radio frequency (RF) based tracking and localization; however, this raises deployment problems for large numbers of RF sensors and readers. In this paper, we present a novel markerless AR approach for an indoor (and possibly outdoor) navigation system using only the monoSLAM (monocular simultaneous localization and map building) algorithm, as part of our larger effort to develop a mobile seamless indoor/outdoor u-GIS system. The paper briefly explains the basic SLAM algorithm and then the implementation of our system.
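
A minimal sketch (not the paper's implementation) of the constant-velocity prediction step typical of MonoSLAM-style EKF tracking; the 2D state layout and noise value are illustrative assumptions.

```python
import numpy as np

def predict(x, P, dt, accel_noise=0.5):
    """x = [px, py, theta, vx, vy, omega]; constant-velocity motion model."""
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt                  # pose integrates velocity
    Q = np.zeros((6, 6))
    Q[3:, 3:] = (accel_noise * dt) ** 2 * np.eye(3)   # process noise on velocities
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

x0 = np.zeros(6)
P0 = np.eye(6) * 0.01
x1, P1 = predict(x0, P0, dt=1 / 30)   # one prediction per 30 Hz video frame
```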


Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon; Lim, Jongwoo; Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A, v.39 no.5, pp.483-489, 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) for large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input. It successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor dataset without using GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
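
An illustrative sketch of binary-descriptor matching with Hamming distance (ORB in OpenCV), the kind of inexpensive matching that makes CPU-only real-time operation feasible; the image paths are placeholders, not data from the paper.

```python
import cv2

img1 = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
img2 = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching of 256-bit binary descriptors with Hamming distance
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("matches:", len(matches))
```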

Loop Closure in a Line-based SLAM (직선기반 SLAM에서의 루프결합)

  • Zhang, Guoxuan; Suh, Il-Hong
    • The Journal of Korea Robotics Society, v.7 no.2, pp.120-128, 2012
  • The loop closure problem is one of the most challenging issues in the vision-based simultaneous localization and mapping community. It requires the robot to recognize a previously visited place from current camera measurements. Whereas loop closure in previous work often relies on a visual bag-of-words built from point features, in this paper we propose a line-based method to solve loop closure in corridor environments. We use both the floor line and the anchored vanishing point as loop closing features, and a two-step loop closure algorithm is devised to detect a known place and perform global pose correction. We propose the anchored vanishing point as a novel loop closure feature, as it includes position information and represents vanishing points in both directions. In our system, the accumulated heading error is first reduced using an observation of previously registered anchored vanishing points, and the observation of known floor lines then allows further pose correction. Experimental results show that our method is an efficient and suitable loop closure solution in structured indoor environments.
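
A minimal sketch (not the authors' formulation) of the geometry behind corridor vanishing points: two image lines intersect at the cross product of their homogeneous representations, and parallel 3D floor lines intersect at the vanishing point. The pixel endpoints below are made up for illustration.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two pixel points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    v = np.cross(l1, l2)
    return v[:2] / v[2]            # back to inhomogeneous pixel coordinates

l_left  = line_through((100, 480), (300, 240))   # left floor line (made-up pixels)
l_right = line_through((540, 480), (340, 240))   # right floor line
print(vanishing_point(l_left, l_right))          # corridor vanishing point
```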

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik; Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications, v.16 no.3, pp.107-111, 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor has limitations due to the cost of expensive sensors. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we use a commercial visual-inertial odometry sensor to estimate the current position and attitude states. Based on these state values, the 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, we confirmed that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper.
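
A rough sketch of the fusion idea under simplified assumptions: each planar LiDAR scan is lifted into the world frame using the pose (rotation and translation) reported by the visual-inertial odometry sensor and accumulated into one cloud. The scan values, pose, and field of view below are synthetic.

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert a planar LiDAR scan to 3D points in the sensor frame (z = 0)."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    return np.stack([ranges * np.cos(angles),
                     ranges * np.sin(angles),
                     np.zeros(len(ranges))], axis=1)

def transform_to_world(points, R_wb, t_wb):
    """Apply the VIO pose (rotation R_wb, translation t_wb) to sensor-frame points."""
    return points @ R_wb.T + t_wb

scan = np.full(181, 2.0)                        # synthetic scan: 2 m in all directions
R = np.eye(3)                                   # example VIO attitude
t = np.array([1.0, 0.0, 0.5])                   # example VIO position
pts_world = transform_to_world(scan_to_points(scan, -np.pi / 2, np.pi / 180), R, t)
print(pts_world.shape)                          # points appended to the global cloud
```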

Map Error Measuring Mechanism Design and Algorithm Robust to Lidar Sparsity (라이다 점군 밀도에 강인한 맵 오차 측정 기구 설계 및 알고리즘)

  • Jung, Sangwoo; Jung, Minwoo; Kim, Ayoung
    • The Journal of Korea Robotics Society, v.16 no.3, pp.189-198, 2021
  • In this paper, we introduce a software/hardware system that can reliably calculate the distance from the sensor to the model regardless of point cloud density. As 3D point cloud maps are widely adopted for SLAM and computer vision, the accuracy of the point cloud map is of great importance. However, the 3D point cloud map obtained from LiDAR may exhibit different point cloud densities depending on the choice of sensor, the measurement distance, and the object shape. Currently, when measuring map accuracy, highly reflective bands are used to generate specific points in the point cloud map, where distances are measured manually. This manual process is time- and labor-consuming and is highly affected by the LiDAR sparsity level. To overcome these problems, this paper presents a hardware design that leverages high-intensity points from three planar surfaces. Furthermore, by calculating the distance from the sensor to the device, we verified with both an RGB-D camera and a LiDAR that the automated method is much faster than the manual procedure and robust to sparsity. As will be shown, the system is not limited to indoor environments, as experiments were also conducted with a LiDAR sensor outdoors.
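
A minimal sketch, not the paper's device or calibration procedure: fit a plane to high-intensity points with an SVD and measure the sensor-to-plane distance, which is the kind of quantity the proposed target automates. The sample points are synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns unit normal n and centroid c."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c                      # smallest singular vector = plane normal

def sensor_to_plane_distance(normal, centroid, sensor_origin=np.zeros(3)):
    return abs(np.dot(normal, sensor_origin - centroid))

pts = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.0],
                [1.0, 1.0, 2.0], [0.5, 0.5, 2.01]])   # synthetic high-intensity returns
n, c = fit_plane(pts)
print(sensor_to_plane_distance(n, c))                 # ~2.0 for this z = 2 plane
```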

Obstacle Detection and Safe Landing Site Selection for Delivery Drones at Delivery Destinations without Prior Information (사전 정보가 없는 배송지에서 장애물 탐지 및 배송 드론의 안전 착륙 지점 선정 기법)

  • Min Chol Seo; Sang Ik Han
    • Journal of Auto-vehicle Safety Association, v.16 no.2, pp.20-26, 2024
  • Delivery using drones has been attracting attention because it can dramatically reduce the time from order to completion of delivery compared to the current delivery system, and pilot projects have been conducted for safe drone delivery. However, the current drone delivery system limits the operational efficiency offered by fully autonomous delivery drones in that drones mainly deliver goods to pre-set landing sites or delivery bases, and the final delivery is still made by humans. In this paper, to overcome these limitations, we propose an obstacle detection and landing site selection algorithm based on a vision sensor that enables safe drone landing at the orderer's delivery location, and we experimentally demonstrate the feasibility of station-to-door delivery. The proposed algorithm builds a 3D point cloud map based on simultaneous localization and mapping (SLAM) technology and applies a grid segmentation technique, allowing drones to reliably find a landing site even in places without prior information. We aim to verify the performance of the proposed algorithm using streaming data received from the drone.
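
A hedged sketch of one possible grid-segmentation criterion (the paper's exact rules are not reproduced here): bin the SLAM point cloud into an XY grid and pick the cell with the lowest height variance as a flat landing candidate. Cell size, minimum point count, and the synthetic terrain are assumptions.

```python
import numpy as np

def flattest_cell(points, cell=1.0, min_pts=20):
    """points: (N, 3) map points; returns the (x, y) center of the flattest cell."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    best, best_var = None, np.inf
    for key in set(map(tuple, ij)):
        z = points[np.all(ij == key, axis=1), 2]
        if len(z) >= min_pts and z.var() < best_var:
            best, best_var = key, z.var()
    return (np.array(best) + 0.5) * cell, best_var

cloud = np.random.rand(5000, 3) * [10, 10, 0.2]   # synthetic near-flat terrain patch
center, var = flattest_cell(cloud)
print(center, var)                                 # candidate landing cell center
```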

Observability Analysis of a Vision-INS Integrated Navigation System Using Landmark (비전센서와 INS 기반의 항법 시스템 구현 시 랜드마크 사용에 따른 가관측성 분석)

  • Won, Dae-Hee; Chun, Se-Bum; Sung, Sang-Kyung; Cho, Jin-Soo; Lee, Young-Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.38 no.3, pp.236-242, 2010
  • A GNSS/INS integrated system cannot provide navigation solutions if no satellites are available. To overcome this problem, a vision sensor is integrated into the system. Since a vision-aided integrated system generally uses only feature points to compute navigation solutions, it has an observability problem. In this case, additional landmarks, which are a priori known points, can improve the observability. In this paper, the observability is evaluated using the TOM/SOM matrix and its eigenvalues. There are always observability problems in the feature-point-only case, but the landmark case is fully observable after the 2nd update. Consequently, the landmarks ensure full observability, so the system performance can be improved.
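
Illustrative only (the paper applies a TOM/SOM analysis to the full vision/INS error model): for a linear system, observability can be checked from the rank of the stacked observability matrix. The F and H below are toy matrices, not the navigation model.

```python
import numpy as np

def observability_matrix(F, H):
    """Stack H, HF, HF^2, ... for an n-state discrete linear system."""
    n = F.shape[0]
    return np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])

F = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # toy position/velocity dynamics
H = np.array([[1.0, 0.0]])            # measuring position only
O = observability_matrix(F, H)
print(np.linalg.matrix_rank(O))       # 2 -> this toy system is fully observable
```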

Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min; Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC, v.48 no.6, pp.91-98, 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach this problem with a two-phase algorithm: a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors that are used to classify objects into predetermined targets of interest. The matching result is then further refined by a minimization technique. In the tracking phase, we resort to the mean-shift tracking algorithm based on the Bhattacharyya coefficient. In summary, the contributions of our method to underwater robot vision are fourfold: 1) our method can deal with camera motion and scale changes of objects in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) the shape-based method has advantages over a distinct feature-point-based method (SIFT) in underwater environments with possible turbidity variation; 4) we provide a quantitative comparison of our method with a few other well-known methods. The results are quite promising for the map-based underwater SLAM task, which is the goal of our research.
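
A small sketch of the Bhattacharyya coefficient used as the similarity score in mean-shift style tracking; the histograms here are synthetic, not the paper's shape-context pipeline.

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalized histograms; 1.0 means identical."""
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(np.sqrt(p * q))

target    = np.array([10.0, 30.0, 40.0, 20.0])   # model histogram (synthetic)
candidate = np.array([12.0, 28.0, 41.0, 19.0])   # candidate window histogram
print(bhattacharyya(target, candidate))          # close to 1.0 -> good match
```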

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won; Choi, Kyung Sik; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.20 no.1, pp.70-77, 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is obtained by a camera with a reflecting mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-facing fish-eye lenses. Second, we extract motion vectors using Lucas-Kanade optical flow on the preprocessed image. Third, we estimate the robot position and angle using an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing its experimental results (position and angle) with those measured by a global vision localization system.
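
A minimal sketch of the Lucas-Kanade step using OpenCV, independent of the fisheye warping and RANSAC stages described above; the file names and tracker parameters are placeholders, not the paper's settings.

```python
import cv2

prev = cv2.imread("pano_prev.png", cv2.IMREAD_GRAYSCALE)   # placeholder panoramas
curr = cv2.imread("pano_curr.png", cv2.IMREAD_GRAYSCALE)

# Corners to track in the previous frame, then pyramidal Lucas-Kanade flow
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=3)

flow = (p1 - p0)[status.flatten() == 1]    # motion vectors of successfully tracked points
print("tracked:", len(flow))
```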

Two Feature Points Based Laser Scanner for Mobile Robot Navigation (레이저 센서에서 두 개의 특징점을 이용한 이동로봇의 항법)

  • Kim, Joo-Wan; Shim, Duk-Sun
    • Journal of Advanced Navigation Technology, v.18 no.2, pp.134-141, 2014
  • Mobile robots use various sensors for navigation, such as wheel encoders, vision sensors, sonar, and laser sensors. Dead reckoning with wheel encoders results in the accumulation of positioning errors, so wheel encoders cannot be used alone. The large amount of information from vision sensors increases the number of features and the complexity of the perception scheme. Sonar sensors are not suitable for positioning because of their poor accuracy. On the other hand, laser sensors provide relatively accurate distance information. In this paper, we propose to extract angular information from the distance measurements of a laser range finder and to use a Kalman filter that matches the heading and distance of the laser range finder with those of the wheel encoder. For a laser scanner with one feature point, the error may increase greatly when the feature point varies or jumps to a new feature point. To solve this problem, we propose to use two feature points and show that the positioning error can be greatly reduced.
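
A hedged sketch of the fusion idea (the two-state model and noise values are assumptions, not the paper's filter): a Kalman update that combines the per-step travelled distance and heading change from the wheel encoder (prediction) with the same quantities derived from laser-scanner feature points (measurement).

```python
import numpy as np

def fuse_step(x_enc, P_enc, z_laser, R_laser):
    """x_enc, z_laser: [distance, heading_change]; H = I since both observe the same pair."""
    H = np.eye(2)
    S = H @ P_enc @ H.T + R_laser
    K = P_enc @ H.T @ np.linalg.inv(S)
    x = x_enc + K @ (z_laser - H @ x_enc)
    P = (np.eye(2) - K @ H) @ P_enc
    return x, P

x_enc   = np.array([0.100, 0.020])    # encoder: 10 cm travelled, 0.02 rad turn
P_enc   = np.diag([0.004, 0.002])     # encoder uncertainty grows with travel
z_laser = np.array([0.094, 0.018])    # same step derived from two laser feature points
R_laser = np.diag([0.001, 0.0005])    # laser-derived values assumed more precise
x_fused, P_fused = fuse_step(x_enc, P_enc, z_laser, R_laser)
print(x_fused)
```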