• Title/Summary/Keyword: vision slam

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM (사각형 특징 기반 Visual SLAM을 위한 자세 추정 방법)

  • Lee, Jae-Min; Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.11 no.1 / pp.33-40 / 2016
  • In this paper, we propose a method for estimating the camera pose using rectangle features for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by perspective transformation, is reconstructed by the Coupled Line Camera algorithm. To fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured with a stereo camera. Using the properties of the line camera, the physical size of the rectangle feature can be derived from this distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates then recovers the relative pose between the camera and the feature by estimating the homography. To evaluate the performance, we compared the results of the proposed method against reference poses in the Gazebo robot simulator.
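
The final step of this pipeline, recovering the camera pose from the correspondence between a rectangle of known physical size and its warped image quadrilateral, can be sketched with OpenCV's planar PnP solver, which is mathematically equivalent to estimating and decomposing the homography. This is a minimal illustration, not the paper's Coupled Line Camera implementation; the corner ordering, intrinsics `K`, and rectangle size are assumed inputs.

```python
import numpy as np
import cv2

def rectangle_pose(img_corners, width, height, K, dist=None):
    """Recover the camera pose relative to a rectangle of known physical
    size from its four (warped) corner points in the image.

    img_corners : (4, 2) array of corner pixels, ordered consistently
                  (top-left, top-right, bottom-right, bottom-left).
    width, height : physical rectangle size (e.g. in metres).
    K : (3, 3) camera intrinsic matrix; dist: distortion coeffs or None.
    """
    # 3D corners of the rectangle in its own plane (Z = 0).
    obj = np.array([[0,     0,      0],
                    [width, 0,      0],
                    [width, height, 0],
                    [0,     height, 0]], dtype=np.float64)
    img = np.asarray(img_corners, dtype=np.float64)
    # Planar PnP (IPPE): equivalent to homography estimation + decomposition.
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix: rectangle -> camera frame
    return R, tvec
```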

The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.555-563 / 2009
  • When a mobile mapping system or a robot is equipped with only a GPS (Global Positioning System) and a multiple stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data produced by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, or to remove the accumulated error of those camera poses. To satisfy these requirements, this paper proposes a novel method that calculates the camera rotation in the GPS coordinate system from three pairs of camera positions given by GPS and V-SLAM, respectively. The proposed method is composed of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by three camera positions, parallel; 2) transform the three V-SLAM camera positions with the calculated quaternion; 3) calculate an additional quaternion that maps the second or third transferred position onto the corresponding GPS camera position; and 4) determine the final quaternion by multiplying the two quaternions. The final quaternion directly transforms the local camera coordinate system into the GPS coordinate system. Additionally, an update of the 3D data of captured objects based on the view angles from the objects to the cameras is proposed. The paper demonstrates the proposed method through a simulation and an experiment.
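
The four-step rotation computation can be sketched with numpy and scipy rotations. This is a hedged reconstruction from the abstract, assuming both position triples are expressed relative to their first point (a shared origin) and already share a metric scale:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def _unit(v):
    return v / np.linalg.norm(v)

def _rot_between(a, b):
    """Rotation taking unit vector a onto unit vector b (axis-angle)."""
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)
    c = np.clip(np.dot(a, b), -1.0, 1.0)
    if s < 1e-12:  # already parallel; the opposite case (180 deg) is omitted
        return R.identity()
    return R.from_rotvec(_unit(axis) * np.arctan2(s, c))

def align_vslam_to_gps(v_pts, g_pts):
    """v_pts, g_pts: (3, 3) arrays of three camera positions from V-SLAM
    and GPS. Returns the rotation mapping V-SLAM axes to GPS axes."""
    v = v_pts - v_pts[0]          # common origin at the first position
    g = g_pts - g_pts[0]
    # 1) quaternion that makes the two triangle planes parallel
    n_v = _unit(np.cross(v[1], v[2]))
    n_g = _unit(np.cross(g[1], g[2]))
    q1 = _rot_between(n_v, n_g)
    # 2) transfer the V-SLAM positions with that quaternion
    v2 = q1.apply(v)
    # 3) in-plane rotation about the shared normal mapping v2[1] -> g[1]
    q2 = _rot_between(_unit(v2[1]), _unit(g[1]))
    # 4) final rotation = composition of the two quaternions
    return q2 * q1
```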

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin; Seo, Hoseong; Kim, Pyojin; Lee, Chung-Keun
    • Journal of Advanced Navigation Technology / v.19 no.2 / pp.133-139 / 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides the mobile system to a desired pose; this velocity is calculated from the feature difference between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment and determine the mobile system's location simultaneously, which is essential for operating unmanned systems in unknown environments. Trends in visual navigation are identified by examining international research on visual navigation technology.
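
The visual servoing idea summarized above (velocity from the feature error) is classically written as v = -λ·L⁺(s − s*), where L is the interaction matrix (image Jacobian) and s, s* are the current and desired feature vectors. A minimal sketch of this standard control law, with all inputs assumed given:

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classic image-based visual servoing law: camera velocity command
    from the error between current features s and desired features s_star,
    using the interaction matrix L.  v = -lam * pinv(L) @ (s - s_star)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```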

SLAM with Visually Salient Line Features in Indoor Hallway Environments (실내 복도 환경에서 선분 특징점을 이용한 비전 기반의 지도 작성 및 위치 인식)

  • An, Su-Yong; Kang, Jeong-Gwan; Lee, Lae-Kyeong; Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems / v.16 no.1 / pp.40-47 / 2010
  • This paper presents simultaneous localization and mapping (SLAM) in an indoor hallway environment using a Rao-Blackwellized particle filter (RBPF) with line segments as landmarks. Based on the fact that abundant line features can be extracted around the ceiling and side walls of a hallway with a vision sensor, a horizontal line segment is extracted from an edge image using the Hough transform and then tracked continuously by an optical flow method. Successive observations of a line segment give the initial state of the line in 3D space. For data association, registered and observed features are matched in image space using the degree of overlap, the line orientation, and the distance between the two lines. Experiments show that a compact environmental map can be constructed in real time with a small number of horizontal line features.
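
The front end described here (edge image, Hough transform, horizontal-segment filtering) maps naturally onto OpenCV primitives. A minimal sketch with hypothetical thresholds; the returned endpoints could then be tracked across frames with cv2.calcOpticalFlowPyrLK, matching the paper's optical-flow step:

```python
import cv2
import numpy as np

def horizontal_segments(gray, max_angle_deg=10):
    """Extract roughly horizontal line segments from a grayscale hallway
    image: Canny edges followed by the probabilistic Hough transform.
    All thresholds are illustrative tuning values."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    segments = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if abs(angle) < max_angle_deg:   # keep near-horizontal lines
                segments.append((x1, y1, x2, y2))
    return segments
```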

The Design of Indoor Navigation using AR (AR을 활용한 실내 내비게이션의 설계)

  • Kim, Myung Seong; Kim, Seong Jo; Kim, Dong Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.129-132 / 2019
  • As technology has advanced, indoor spaces have grown larger and more complex, making it harder to find a desired location. Driven by the Fourth Industrial Revolution, there are active attempts to introduce indoor navigation to solve these problems. Indoor navigation technologies include Wi-Fi, Bluetooth, Beacon, RFID, and UWB, but due to the structure of indoor buildings, the signals suffer large errors from various obstacles, which makes these technologies difficult to use. To solve this problem, we implement indoor navigation with a SLAM algorithm that performs simultaneous localization and mapping using the IMU and camera sensors built into a smartphone, and we design an application that combines the highly accessible smartphone with AR so that users can find their way more easily.

Obstacle Avoidance for Unmanned Air Vehicles Using Monocular-SLAM with Chain-Based Path Planning in GPS Denied Environments

  • Bharadwaja, Yathirajam; Vaitheeswaran, S.M; Ananda, C.M
    • Journal of Aerospace System Engineering / v.14 no.2 / pp.1-11 / 2020
  • Detecting obstacles and generating a suitable path to avoid them in real time is a prime mission requirement for UAVs. In areas close to buildings and people, detecting obstacles in the path and estimating the vehicle's own position (egomotion) in GPS-degraded/denied environments are usually addressed with vision-based Simultaneous Localization and Mapping (SLAM) techniques. This presents possibilities and challenges for feasible path generation under vehicle-dynamics constraints in the configuration space. In this paper, a near real-time feasible path is generated in the ORB-SLAM framework using a chain-based path planning approach in a force field, with dynamic constraints on path length and minimum turn radius. The chain-based approach generates a set of nodes that move in a force field, permitting rapid path modification in real time as the reward function changes. This differs from the usual approach of generating potentials over the entire search space around the UAV; instead, it uses a set of connected waypoints in a simulated chain. The popular ORB-SLAM, well suited to real-time operation, is used to build the map of the environment and estimate the UAV position, and the UAV path is then generated continuously in the shortest time to navigate to the goal position. The principal contributions are (a) a chain-based path planning approach with built-in obstacle avoidance in conjunction with ORB-SLAM, for the first time; (b) path generation with minimal overhead; and (c) implementation in near real time.
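
To make the chain idea concrete, here is an illustrative relaxation step for a chain of waypoints in a potential field. The force terms and gains are hypothetical stand-ins, not the authors' formulation, and the dynamic constraints (path length, turn radius) are omitted:

```python
import numpy as np

def update_chain(chain, goal, obstacles, k_att=0.5, k_rep=2.0,
                 k_smooth=0.3, d0=2.0, step=0.1):
    """One force-field update of a chain of waypoints, shape (N, 3).
    Interior nodes feel goal attraction, obstacle repulsion within
    radius d0, and a smoothing force from their chain neighbours."""
    new = chain.copy()
    for i in range(1, len(chain) - 1):        # endpoints stay fixed
        p = chain[i]
        # attraction toward the goal (unit direction scaled by k_att)
        f = k_att * (goal - p) / np.linalg.norm(goal - p)
        # repulsion from nearby obstacles (classic potential-field form)
        for ob in obstacles:
            d = np.linalg.norm(p - ob)
            if d < d0:
                f += k_rep * (1.0 / d - 1.0 / d0) * (p - ob) / d**2
        # smoothing force keeps the chain from kinking
        f += k_smooth * (chain[i - 1] + chain[i + 1] - 2 * p)
        new[i] = p + step * f
    return new
```

Iterating this update as the SLAM map (and hence the obstacle set) changes is what allows the path to be reshaped rapidly without searching the whole configuration space.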

A Study on Fisheye Lens based Features on the Ceiling for Self-Localization (실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구)

  • Choi, Chul-Hee; Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.442-448 / 2011
  • There are many research results on self-localization techniques for mobile robots. In this paper, we present a self-localization technique based on ceiling-vision features using a fisheye lens. Features obtained by SIFT (Scale-Invariant Feature Transform) are matched between the previous image and the current image, and the optimal transformation between them is then derived. A fisheye lens naturally introduces distortion into its images, so it must be calibrated. We propose methods for calibrating the distorted images and for designing a geometric fitness model. The proposed method is applied to laboratory and aisle environments, and we show its feasibility in indoor environments.
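
The frame-to-frame matching step can be sketched with OpenCV's fisheye module and SIFT. A minimal sketch, assuming intrinsics `K` and distortion `D` were calibrated offline (e.g. with `cv2.fisheye.calibrate` on checkerboard images); the Lowe ratio test is a common filtering choice, not necessarily the paper's:

```python
import cv2

def match_ceiling_frames(prev_frame, frame, K, D):
    """Undistort two fisheye ceiling frames, then match SIFT features
    between them. Returns good matches and both keypoint sets."""
    prev_u = cv2.fisheye.undistortImage(prev_frame, K, D, Knew=K)
    cur_u = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_u, None)
    kp2, des2 = sift.detectAndCompute(cur_u, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    # Lowe ratio test keeps only distinctive correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return good, kp1, kp2
```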

Real-Time Mapping of Mobile Robot on Stereo Vision (스테레오 비전 기반 이동 로봇의 실시간 지도 작성 기법)

  • Han, Cheol-Hun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.1 / pp.60-65 / 2010
  • This paper describes 2D mapping, feature detection, and matching to reconstruct the surrounding environment with a stereo camera mounted on a mobile robot. For fast real-time operation, image features are extracted using edge detection, and stereo matching is performed with the Sum of Absolute Differences (SAD), whose correspondence quality is checked through the correlation coefficient. The location of the mobile robot is estimated by a Kalman filter using a ZigBee beacon and the encoders mounted on the robot. In addition, a gyroscope is fused in to measure heading, so the map can be generated while the mobile robot is moving. This mobile-robot SLAM (Simultaneous Localization and Mapping) technology can serve as a basis for applying intelligent robots efficiently in human life.
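
A brute-force version of the SAD block-matching step looks as follows. The window size and disparity range are hypothetical, and a real-time implementation would vectorize this or use cv2.StereoBM instead:

```python
import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    """SAD block matching for a rectified grayscale stereo pair; returns
    a per-pixel disparity map. A minimal, unoptimized sketch."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()  # sum of absolute diffs
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```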

Visual-Attention Using Corner Feature Based SLAM in Indoor Environment (실내 환경에서 모서리 특징을 이용한 시각 집중 기반의 SLAM)

  • Shin, Yong-Min; Yi, Chu-Ho; Suh, Il-Hong; Choi, Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.90-101 / 2012
  • Landmark selection is crucial to successful SLAM (Simultaneous Localization and Mapping) with a monocular camera. In unknown environments in particular, automatic landmark selection is needed since there is no prior information about landmarks. In this paper, a visual attention system modeled on the human vision system is used to select landmarks automatically. The edge feature is one of the most important elements for attention in previous visual attention systems. However, when the edge feature is used in complicated indoor areas, the response in the complicated regions disappears while the response between flat surfaces grows. Also, computation cost increases due to the growth in dimensionality, since responses for four directions are used. This paper suggests using a corner feature to solve these problems. Using a corner feature can also increase the accuracy of data association by concentrating on areas that are more complicated and informative in indoor environments. Finally, experiments show that a corner-feature-based visual attention system can be more effective for SLAM than the previous method.
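
As a rough illustration of corner-based saliency (not the authors' attention model), the strongest Harris corner responses in a frame can be taken as candidate landmarks:

```python
import cv2

def corner_landmark_candidates(gray, max_landmarks=20):
    """Pick the strongest Harris corner responses as candidate landmarks
    for data association; parameters are illustrative."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_landmarks,
                                  qualityLevel=0.01, minDistance=15,
                                  useHarrisDetector=True, k=0.04)
    return [] if pts is None else pts.reshape(-1, 2)
```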

Robust Vision-Based Autonomous Navigation Against Environment Changes (환경 변화에 강인한 비전 기반 로봇 자율 주행)

  • Kim, Jungho; Kweon, In So
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.2 / pp.57-65 / 2008
  • Recently, many studies on intelligent robots have been conducted. An intelligent robot is capable of recognizing environments or objects in order to autonomously perform specific tasks using sensor readings. One of the fundamental problems in vision-based robot applications is recognizing where the robot is and deciding on a safe path for autonomous navigation. However, previous approaches only consider well-organized environments in which there are no moving objects or environmental changes. In this paper, we introduce a novel navigation strategy that handles occlusions caused by moving objects using various computer vision techniques. Experimental results demonstrate the capability to overcome such difficulties in autonomous navigation.