Title/Summary/Keyword: 3D Vision SLAM

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omnidirectional vision SLAM based on a fisheye image and laser scanner data. SLAM performance has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not well suited to mobile robot applications because multi-camera RGB-D systems are bulky and slow at computing depth for omnidirectional images. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a fixed distance from the camera. We compute fusion points from the planar obstacle coordinates measured by the laser scanner and the obstacle outlines extracted from the omnidirectional image, which captures the surrounding view in a single shot. The effectiveness of the proposed method is confirmed by comparing maps built with the proposed algorithm against the real environment.
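
The fusion step can be illustrated with a short sketch. The following is a minimal example, assuming an idealized equidistant fisheye model (r = f·θ) for the downward-facing camera; all function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def laser_to_plane_points(ranges, angles):
    """Convert 2D laser scanner (range, bearing) pairs to x-y obstacle
    coordinates in the scanner's horizontal plane."""
    ranges, angles = np.asarray(ranges), np.asarray(angles)
    return np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)

def fuse_with_outline(plane_pts, outline_radius_px, f_px, cam_height):
    """Hypothetical fusion step: combine each obstacle's (x, y) from the
    laser with the radial image coordinate of its top outline in the
    downward-facing fisheye image to recover a 3D point (x, y, z).
    Assumes an idealized equidistant fisheye model r = f * theta."""
    fused = []
    for (x, y), r_px in zip(plane_pts, outline_radius_px):
        theta = r_px / f_px                  # ray angle from the optical axis
        d = np.hypot(x, y)                   # horizontal distance to obstacle
        z = cam_height - d / np.tan(theta)   # height of the outline point
        fused.append((x, y, z))
    return np.array(fused)
```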

Implementation of the SLAM System Using a Single Vision and Distance Sensors (단일 영상과 거리센서를 이용한 SLAM시스템 구현)

  • Yoo, Sung-Goo; Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.149-156 / 2008
  • SLAM (Simultaneous Localization and Mapping) finds a robot's global position and builds a map from sensing data while an unmanned robot navigates an unknown environment. Two kinds of systems have been developed: one uses distance-measurement sensors such as ultrasonic and laser sensors, the other uses a stereo vision system. Distance-sensor SLAM has low computing time and low cost, but its precision can suffer from measurement error or sensor non-linearity. In contrast, a stereo vision system can measure 3D space accurately, but it requires a high-end system for the complex calculations and is expensive. In this paper, we implement a SLAM system using a single camera image and a PSD sensor. It detects obstacles with the front PSD sensor and then perceives their size and features by image processing. Probabilistic SLAM was implemented using the sensor and image data, and we verify the system's performance in real experiments.
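
The core idea, combining a camera bearing with a PSD range to place a landmark, can be sketched as follows under a simple pinhole-camera assumption; the helper names are hypothetical, not from the paper.

```python
import numpy as np

def bearing_from_pixel(u, cx, focal_px):
    """Pinhole-model bearing of image column u relative to the optical
    axis (positive to the left of the principal point cx)."""
    return np.arctan2(cx - u, focal_px)

def landmark_from_bearing_and_range(robot_pose, bearing, psd_range):
    """Place a 2D landmark: the camera supplies the bearing of a feature,
    the PSD sensor supplies its range. robot_pose = (x, y, yaw)."""
    x, y, yaw = robot_pose
    a = yaw + bearing
    return np.array([x + psd_range * np.cos(a), y + psd_range * np.sin(a)])

def obstacle_width_m(pixel_width, distance_m, focal_px):
    """Similar triangles: with depth from the PSD sensor, a pixel extent
    in the image maps to a metric obstacle size."""
    return pixel_width * distance_m / focal_px
```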

3D Map Construction from Spherical Video using Fisheye ORB-SLAM Algorithm (어안 ORB-SLAM 알고리즘을 사용한 구면 비디오로부터의 3D 맵 생성)

  • Kim, Ki-Sik; Park, Jong-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.1080-1083 / 2020
  • This paper proposes a SLAM system based on spherical panoramas. In vision SLAM, a wider field of view lets the system grasp its surroundings quickly with fewer frames, and the larger amount of surrounding data enables more stable estimation. Spherical panoramic video has the widest possible field of view; because every direction can be used, the 3D map can be extended faster than with fisheye video. Existing fisheye-based systems accept only the front wide-angle view, so their coverage is smaller than when spherical panoramas are used as input. This paper proposes a method that extends an existing fisheye-video SLAM system to the domain of spherical panoramas. The proposed method accurately computes the parameters required by the camera's projection model and exploits the full field of view without loss through a Dual Fisheye Model.
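
As a rough illustration of a dual fisheye model, the sketch below maps a unit 3D ray into a side-by-side dual-fisheye frame under an idealized equidistant projection; the layout and parameters are my assumptions, not the paper's calibrated model.

```python
import numpy as np

def dual_fisheye_project(ray, width, height, f_px):
    """Map a unit 3D ray to pixel coordinates in a side-by-side dual-fisheye
    frame. Each half is an idealized equidistant fisheye (r = f * theta)
    covering one hemisphere; a real camera needs calibrated intrinsics."""
    x, y, z = ray / np.linalg.norm(ray)
    half = width // 2
    if z >= 0:                         # front hemisphere -> left half
        theta = np.arccos(z)           # angle from the front optical axis
        phi = np.arctan2(y, x)
        cx = half / 2
    else:                              # back hemisphere -> right half
        theta = np.arccos(-z)          # angle from the back optical axis
        phi = np.arctan2(y, -x)        # mirror so the image is not flipped
        cx = half + half / 2
    r = f_px * theta
    return cx + r * np.cos(phi), height / 2 + r * np.sin(phi)
```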

The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.555-563 / 2009
  • When a mobile mapping system or a robot is equipped with only a GPS (Global Positioning System) and a multi-stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data produced by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, and to remove the accumulated error of those camera poses. To satisfy these requirements, this paper proposes a novel method that calculates the camera rotation in the GPS coordinate system from three pairs of camera positions given by GPS and V-SLAM, respectively. The proposed method consists of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by one triplet of camera positions, parallel; 2) transfer the three V-SLAM camera positions with the calculated quaternion; 3) calculate an additional quaternion that maps the second or third transferred position onto the corresponding GPS camera position; and 4) determine the final quaternion by multiplying the two quaternions. The final quaternion directly transforms from the local camera coordinate system to the GPS coordinate system. Additionally, an update of the 3D data of captured objects based on the view angles from object to cameras is proposed. The proposed method is demonstrated through a simulation and an experiment.
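
Since the abstract spells out the four steps, here is a minimal numpy sketch of them; the helper names are mine, and degenerate configurations (collinear positions, exactly opposite normals) are not handled.

```python
import numpy as np

def quat_between(a, b):
    """Shortest-arc quaternion (w, x, y, z) rotating vector a onto vector b.
    Undefined when a and b are exactly antiparallel."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    q = np.array([1.0 + np.dot(a, b), *np.cross(a, b)])
    return q / np.linalg.norm(q)

def quat_mul(q1, q2):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * q^-1."""
    qv = np.array([0.0, *v])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate = inverse for unit q
    return quat_mul(quat_mul(q, qv), qc)[1:]

def geo_rotation(v_pts, g_pts):
    """Four-step alignment: v_pts, g_pts are 3x3 arrays of camera positions
    from V-SLAM and GPS, centered here on their first point."""
    v, g = v_pts - v_pts[0], g_pts - g_pts[0]
    # 1) quaternion making the two triangles' plane normals parallel
    q1 = quat_between(np.cross(v[1], v[2]), np.cross(g[1], g[2]))
    # 2) transfer the V-SLAM positions with q1
    v_rot = np.array([rotate(q1, p) for p in v])
    # 3) in-plane quaternion mapping the transferred second point onto GPS
    q2 = quat_between(v_rot[1], g[1])
    # 4) the final quaternion is the product of the two
    return quat_mul(q2, q1)
```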

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik; Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.107-111 / 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation. However, generating a 3D point cloud map with a single sensor is limited by the price of expensive sensors. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we use a commercial visual-inertial odometry sensor to estimate the current position and attitude. Based on those state values, the 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze performance, we compared the proposed algorithm with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. The comparison confirmed that a precise 3D point cloud map can be generated with the low-cost sensor-fusion system proposed in this paper.
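
The mapping step, lifting each 2D scan into the world frame with the VIO pose, can be sketched as follows; the names and the time-synchronization assumption are illustrative.

```python
import numpy as np

def scan_to_cloud(ranges, angles, T_world_sensor):
    """Lift one 2D LiDAR scan into the world frame using the 4x4 pose
    T_world_sensor reported by the visual-inertial odometry at scan time."""
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges),
                    np.ones_like(ranges)])      # homogeneous, 4xN
    return (T_world_sensor @ pts)[:3].T         # Nx3 world-frame points

def build_map(scans, poses):
    """Accumulate scans into one point cloud map; scans and poses must be
    time-synchronized (pose interpolation is omitted in this sketch)."""
    return np.vstack([scan_to_cloud(r, a, T)
                      for (r, a), T in zip(scans, poses)])
```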

SLAM with Visually Salient Line Features in Indoor Hallway Environments (실내 복도 환경에서 선분 특징점을 이용한 비전 기반의 지도 작성 및 위치 인식)

  • An, Su-Yong; Kang, Jeong-Gwan; Lee, Lae-Kyeong; Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems / v.16 no.1 / pp.40-47 / 2010
  • This paper presents simultaneous localization and mapping (SLAM) of an indoor hallway environment using a Rao-Blackwellized particle filter (RBPF) with line segments as landmarks. Exploiting the fact that abundant line features can be extracted around the ceiling and side walls of a hallway with a vision sensor, a horizontal line segment is extracted from an edge image using the Hough transform and tracked continuously by an optical-flow method. Successive observations of a line segment initialize its state in 3D space. For data association, registered and observed features are matched in image space by degree of overlap, line orientation, and the distance between the two lines. Experiments show that a compact environment map can be constructed in real time from a small number of horizontal line features.
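
The extraction and tracking steps map naturally onto standard OpenCV calls; this sketch uses Canny edges, the probabilistic Hough transform, and pyramidal Lucas-Kanade flow, with placeholder thresholds rather than the paper's values.

```python
import cv2
import numpy as np

def extract_horizontal_lines(gray, max_slope_deg=10):
    """Extract near-horizontal line segments (ceiling/wall boundaries in a
    hallway) with Canny edges and the probabilistic Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return []
    keep = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < max_slope_deg or abs(abs(angle) - 180) < max_slope_deg:
            keep.append((x1, y1, x2, y2))
    return keep

def track_endpoints(prev_gray, gray, segments):
    """Track segment endpoints into the next frame with pyramidal
    Lucas-Kanade optical flow; returns new points and a validity mask."""
    pts = np.float32([p for s in segments for p in (s[:2], s[2:])])
    pts = pts.reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    return nxt.reshape(-1, 2), status.ravel().astype(bool)
```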

Map Error Measuring Mechanism Design and Algorithm Robust to Lidar Sparsity (라이다 점군 밀도에 강인한 맵 오차 측정 기구 설계 및 알고리즘)

  • Jung, Sangwoo; Jung, Minwoo; Kim, Ayoung
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.189-198 / 2021
  • In this paper, we introduce a software/hardware system that reliably calculates the distance from a sensor to a reference model regardless of point cloud density. As 3D point cloud maps are widely adopted for SLAM and computer vision, their accuracy is of great importance. However, a 3D point cloud map obtained from LiDAR can show different point densities depending on the choice of sensor, the measurement distance, and the object shape. Currently, when measuring map accuracy, highly reflective bands are used to generate distinctive points in the point cloud map, at which distances are measured manually. This manual process is time- and labor-consuming and is strongly affected by LiDAR sparsity. To overcome these problems, this paper presents a hardware design that leverages high-intensity points from three planar surfaces. By calculating the distance from the sensor to the device, we verified with an RGB-D camera and a LiDAR that the automated method is much faster than the manual procedure and robust to sparsity. The system is not limited to indoor environments, as shown by an outdoor experiment with the LiDAR sensor.
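
A minimal sketch of the automated measurement idea follows: fit a plane to the high-intensity returns and compute the sensor-to-plane distance, which is independent of point density. The paper's actual three-plane device geometry is not modeled here.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points via SVD; returns (normal, d)
    for the plane normal . x + d = 0, with a unit normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def sensor_to_plane_distance(points):
    """Distance from the sensor origin (0, 0, 0) to the plane fitted
    through the high-intensity returns; density only affects the fit,
    not the distance formula."""
    normal, d = fit_plane(points)
    return abs(d) / np.linalg.norm(normal)
```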

Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon; Lim, Jongwoo; Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.5 / pp.483-489 / 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) for large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input, and it successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system achieves real-time performance on a large-scale outdoor dataset without using GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
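
As an illustration of why binary descriptors keep the front end cheap, here is a standard ORB matching sketch with Hamming-distance brute force and a ratio test; this is generic OpenCV usage, not the paper's pipeline.

```python
import cv2

def match_orb(img1, img2, n_features=1000):
    """Detect ORB keypoints in two frames and match their 256-bit binary
    descriptors; Hamming distance reduces to XOR + popcount, which is why
    binary descriptors run in real time on a CPU."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Lowe-style ratio test to discard ambiguous matches
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    return kp1, kp2, good
```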