• Title/Abstract/Keywords: vision slam

40 search results (processing time: 0.022 s)

SLAM 기반 GPS/INS/영상센서를 결합한 헬리콥터 항법시스템의 구성 (SLAM Aided GPS/INS/Vision Navigation System for Helicopter)

  • 김재형;유준;곽휘권
    • 제어로봇시스템학회논문지 / Vol. 14, No. 8 / pp. 745-751 / 2008
  • This paper presents a framework for a GPS/INS/Vision-based navigation system for helicopters. A coupled GPS/INS algorithm is vulnerable to GPS blockage and jamming, and a helicopter is a fast, highly dynamic vehicle that can easily lose the GPS signal. A vision sensor, by contrast, is unaffected by signal jamming and does not accumulate navigation error. We therefore implemented a GPS/INS/Vision-aided navigation system that provides robust localization for helicopters operating in various environments. The core algorithm is a vision-based simultaneous localization and mapping (SLAM) technique. To verify the SLAM algorithm, we performed flight tests, which confirmed that the developed system remains robust under GPS blockage. The system design, software algorithm, and flight test results are described.

저조도 환경에서 Visual SLAM을 위한 이미지 개선 방법 (Image Enhancement for Visual SLAM in Low Illumination)

  • 유동길;정지훈;전형준;한창완;박일우;오정현
    • 로봇학회논문지 / Vol. 18, No. 1 / pp. 66-71 / 2023
  • As cameras have become primary sensors for mobile robots, vision-based Simultaneous Localization and Mapping (SLAM) has achieved impressive results with the recent development of computer vision and deep learning. However, visual information has the disadvantage that much of it disappears in low-light environments. To overcome this problem, we propose an image enhancement method for performing visual SLAM in a low-light environment. Using deep generative adversarial models and a modified gamma correction, the quality of low-light images is improved. The images produced by the proposed method are less sharp than those of the existing method, but the dramatic reduction in computation allows it to run with ORB-SLAM in real time. Experiments on the public TUM and VIVID++ datasets demonstrate the validity of the proposed method.
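
The modified gamma correction step can be sketched as follows. The adaptive rule for choosing gamma from the image mean is an illustrative assumption (the abstract does not give the paper's exact formula), as are all constants:

```python
import numpy as np

def gamma_correct(image, gamma=0.5):
    """Brighten a low-light image with gamma correction.

    gamma < 1 lifts dark pixels toward mid-tones; gamma = 1 is identity.
    `image` is a uint8 array; a 256-entry lookup table makes the cost
    independent of image content.
    """
    table = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                     dtype=np.uint8)
    return table[image]

def adaptive_gamma(image, target_mean=110.0):
    """Pick gamma so the corrected mean brightness approaches target_mean.

    A stand-in for the paper's 'modified gamma correction':
    gamma = log(target) / log(mean) in normalized [0, 1] intensities.
    """
    mean = max(np.mean(image) / 255.0, 1e-6)
    gamma = np.log(target_mean / 255.0) / np.log(mean)
    return gamma_correct(image, gamma)
```

Because the correction is a per-pixel table lookup, it adds essentially no latency to a real-time front end, which is the property the abstract emphasizes.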

단일 영상과 거리센서를 이용한 SLAM시스템 구현 (Implementation of the SLAM System Using a Single Vision and Distance Sensors)

  • 유성구;정길도
    • 전자공학회논문지SC / Vol. 45, No. 6 / pp. 149-156 / 2008
  • SLAM (Simultaneous Localization and Mapping) is a key technology for the autonomous navigation of unmanned robots: it determines the robot's position from sensor data while building a geometric map. Existing approaches either use range sensors such as ultrasonic or laser sensors to find the robot's global position, or use stereo vision. A SLAM system built only from range sensors is computationally simple and inexpensive, but its accuracy suffers somewhat from sensor error and nonlinearity. A stereo vision system, in contrast, can measure 3D space accurately, but its heavy computation demands a high-end platform, and the stereo hardware is also expensive. In this paper, we therefore implement SLAM using a single camera image and PSD (position sensitive device) sensors. Omnidirectional PSD sensors detect obstacles at a given distance, and image processing on the front camera detects the size and features of the obstacles. From these data we construct a probabilistic SLAM and verify its performance through a real implementation.
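
As a rough illustration of how a PSD range reading and a single camera column can place an obstacle on the map, here is a minimal sketch; the inverse-voltage PSD model and all calibration constants (k, offset, fx, cx) are illustrative assumptions, not values from the paper:

```python
import math

def psd_distance(voltage, k=27.0, offset=0.1):
    """Convert a PSD range sensor reading to a distance.

    Infrared PSD modules output a voltage roughly inversely proportional
    to range; k and offset are illustrative calibration constants.
    """
    return k / max(voltage - offset, 1e-6)

def landmark_position(robot_pose, distance, pixel_x, fx=500.0, cx=320.0):
    """Place an obstacle on the map from one range + one image column.

    The PSD gives the range; the camera gives the bearing from the pixel
    column via a pinhole model (focal length fx, principal point cx).
    robot_pose = (x, y, heading in radians).
    """
    x, y, th = robot_pose
    bearing = math.atan2(pixel_x - cx, fx)   # angle off the optical axis
    return (x + distance * math.cos(th + bearing),
            y + distance * math.sin(th + bearing))
```

In a probabilistic SLAM these (x, y) estimates would enter as noisy measurements rather than being inserted into the map directly.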

어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피 (Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image)

  • 최윤원;최정원;임성규;이석규
    • 제어로봇시스템학회논문지 / Vol. 22, No. 3 / pp. 210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, detected using Lucas-Kanade optical flow in images from fish-eye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an arbitrary force field around the obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on robot modeling and a curved movement path. Recent research has improved such algorithms for actual robots, but comparatively little work has used omni-directional vision SLAM to acquire the surrounding information all at once. A robot running the proposed algorithm avoids obstacles along the avoidance path estimated from the map built by omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm by comparing the position estimated by the proposed algorithm with the real position recorded while avoiding obstacles.
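
A bare-bones, single-window Lucas-Kanade step of the kind the abstract relies on can be written directly in NumPy; this is a generic textbook sketch (the paper applies it to fisheye imagery, typically via a pyramidal implementation), not the authors' code:

```python
import numpy as np

def lucas_kanade_patch(prev, curr, y, x, win=2):
    """Estimate the optical flow (du, dv) of one patch, Lucas-Kanade style.

    Solves the least-squares system A f = -b built from the spatial
    gradients Ix, Iy and the temporal difference It inside a
    (2*win+1)^2 window around (y, x).
    """
    p = prev.astype(float)
    c = curr.astype(float)
    ys, xs = slice(y - win, y + win + 1), slice(x - win, x + win + 1)
    Iy, Ix = np.gradient(p)        # spatial gradients of the first frame
    It = c - p                     # temporal gradient
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = It[ys, xs].ravel()
    flow, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return flow                    # (du, dv): motion in x, then y
```

The estimate is only reliable where the window contains gradients in both directions (the aperture problem), which is why practical pipelines track corner-like features.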

어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM (3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner)

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지 / Vol. 21, No. 7 / pp. 634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not well suited to mobile robot applications because an RGB-D system needs multiple cameras to cover an omni-directional view, which increases its size and slows the depth computation. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculate fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires the surrounding view all at once. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
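
A minimal sketch of the fusion idea, under simplified assumptions: the 2D scan gives planar (x, y) obstacle points, and the elevation angle of the obstacle's top edge recovered from the fisheye outline (via an equidistant lens model, r_px = f·θ) gives the height by triangulation. The geometry and constants are illustrative, not the paper's exact formulation:

```python
import math

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert a 2D laser scan to planar (x, y) points in the sensor frame."""
    return [(r * math.cos(angle_min + i * angle_inc),
             r * math.sin(angle_min + i * angle_inc))
            for i, r in enumerate(ranges) if r > 0.0]

def fuse_height(point, top_angle, scanner_height=0.3):
    """Attach a height to a laser point using the obstacle's outline angle.

    top_angle is the elevation angle (radians) at which the obstacle's top
    edge appears in the omnidirectional image. The 3D fusion point is
    (x, y, z), with z recovered by simple triangulation from the laser
    range. scanner_height is an assumed mounting height.
    """
    x, y = point
    rng = math.hypot(x, y)
    z = scanner_height + rng * math.tan(top_angle)
    return (x, y, z)
```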

국소 집단 최적화 기법을 적용한 비정형 해저면 환경에서의 비주얼 SLAM (Visual SLAM using Local Bundle Optimization in Unstructured Seafloor Environment)

  • 홍성훈;김진환
    • 로봇학회논문지 / Vol. 9, No. 4 / pp. 197-205 / 2014
  • As computer vision algorithms continue to develop, visual information from vision sensors has been widely used for simultaneous localization and mapping based on relative motion between images, so-called visual SLAM. This research addresses a visual SLAM framework for online localization and mapping in an unstructured seabed environment, applicable to a low-cost unmanned underwater vehicle equipped with a single monocular camera as its main measurement sensor. Typically, an image motion model with a predefined dimensionality can be corrupted by errors due to violations of the model assumptions, which may degrade the visual SLAM estimation. To deal with an erroneous image motion model, this study employs a local bundle optimization (LBO) scheme when a closed loop is detected. Results comparing visual SLAM estimation with and without LBO are presented to validate the effectiveness of the proposed methodology.

가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발 (Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes)

  • 전영산;최종은;이정욱
    • 제어로봇시스템학회논문지 / Vol. 20, No. 11 / pp. 1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, since GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained with a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and a map of the environment is constructed by the proposed method. Finally, the reliability of the method is verified by comparing the estimated values with the actual values.
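
The Gaussian process map amounts to standard GP regression. Below is a minimal sketch with a squared-exponential kernel, where a scalar target stands in for the paper's color-distribution values and all hyperparameters (length scale, signal variance, noise) are assumed:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, sigma=1.0):
    """Squared-exponential covariance between point sets a (n,d) and b (m,d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(train_x, train_y, test_x, noise=1e-3):
    """GP regression: posterior mean and variance of the map at test_x.

    train_x would be estimated UAV positions and train_y the observed
    map values; these are the standard GP posterior equations.
    """
    K = rbf_kernel(train_x, train_x) + noise * np.eye(len(train_x))
    Ks = rbf_kernel(test_x, train_x)
    alpha = np.linalg.solve(K, train_y)
    mean = Ks @ alpha
    var = rbf_kernel(test_x, test_x).diagonal() - \
          (Ks * np.linalg.solve(K, Ks.T).T).sum(axis=1)
    return mean, var
```

The posterior variance is what makes a GP useful as a map: it is small near visited positions and reverts to the prior far from them, so it doubles as an uncertainty estimate.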

천장 조명의 위치와 방위 정보를 이용한 모노카메라와 오도메트리 정보 기반의 SLAM (Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps)

  • 황서연;송재복
    • 제어로봇시스템학회논문지 / Vol. 17, No. 2 / pp. 164-170 / 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method that uses both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamp features are usually spaced some distance apart in indoor environments, they can be robustly detected and used as reliable landmarks. We use both the position and orientation of a lamp feature to accurately estimate the robot pose; the orientation is obtained by calculating the principal axis of the pixel distribution of the lamp area. Both corner and lamp features are used as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
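
Computing a lamp's orientation as the principal axis of its pixel distribution is a small eigendecomposition; a sketch, assuming the lamp pixels have already been segmented:

```python
import numpy as np

def lamp_orientation(pixels):
    """Orientation of a lamp blob from the principal axis of its pixels.

    pixels: sequence of (row, col) coordinates belonging to the lamp
    region. The principal axis is the eigenvector of the 2x2 covariance
    matrix with the largest eigenvalue; the returned angle is measured
    in radians from the image x (column) axis, ambiguous modulo pi.
    """
    pts = np.asarray(pixels, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axis = eigvecs[:, -1]                    # principal axis (row, col)
    return np.arctan2(axis[0], axis[1])      # angle w.r.t. column axis
```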

가정환경을 위한 실용적인 SLAM 기법 개발 : 비전 센서와 초음파 센서의 통합 (A Practical Solution toward SLAM in Indoor Environments Based on Visual Objects and Robust Sonar Features)

  • 안성환;최진우;최민용;정완균
    • 로봇학회논문지 / Vol. 1, No. 1 / pp. 25-35 / 2006
  • Improving the practicality of SLAM requires various sensors to be fused effectively in order to cope with uncertainty from both the environment and the sensors. Combining sonar and vision sensors offers economical efficiency and complementary cooperation: it can remedy the false data association and divergence problems of sonar sensors, and overcome the low SLAM update rate caused by the computational burden, and the sensitivity to illumination changes, of vision sensors. In this paper, we propose a SLAM method that joins sonar sensors and a stereo camera. It consists of two schemes: extracting robust point and line features from sonar data, and recognizing planar visual objects using a multi-scale Harris corner detector and its SIFT descriptor against a pre-constructed object database. Fusing sonar features and visual objects through EKF-SLAM then provides correct data association via object recognition and high-frequency updates via the sonar features, increasing the robustness and accuracy of SLAM in indoor environments. The performance of the proposed algorithm was verified by experiments in a home-like environment.
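
The EKF fusion step can be illustrated with the standard range-bearing measurement update. This sketch reduces it to a localization-style correction against a landmark at a known map position; the paper's full filter also estimates landmark positions:

```python
import numpy as np

def ekf_landmark_update(mu, P, z, landmark_xy, R):
    """One EKF measurement update from a recognized landmark.

    State mu = [x, y, theta] (robot pose) with covariance P (3x3).
    A sonar feature or recognized visual object at known map position
    landmark_xy yields a range-bearing measurement z = [r, phi] with
    measurement noise covariance R (2x2).
    """
    x, y, th = mu
    dx, dy = landmark_xy[0] - x, landmark_xy[1] - y
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - th])   # predicted measurement
    H = np.array([[-dx / r, -dy / r, 0.0],           # measurement Jacobian
                  [dy / q, -dx / q, -1.0]])
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return mu + K @ innov, (np.eye(3) - K @ H) @ P
```

Object recognition supplies the data association (which landmark produced z), which is exactly the failure mode this update is sensitive to when sonar features are used alone.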


시차변화(Disparity Change)와 장면의 부분 분할을 이용한 SLAM 방법 (SLAM Method by Disparity Change and Partial Segmentation of Scene Structure)

  • 최재우;이철희;임창경;홍현기
    • 전자공학회논문지 / Vol. 52, No. 8 / pp. 132-139 / 2015
  • Visual SLAM (Simultaneous Localization And Mapping) using cameras is widely used to determine a robot's position. In general, visual SLAM estimates the camera motion over an image sequence from fixed, stationary feature points, so stable results are hard to obtain in scenes with many moving objects. This paper proposes a method to stabilize SLAM with a stereo camera in such situations. First, a depth image is extracted with the stereo camera and the optical flow is computed. The disparity change is then calculated from the optical flows of the left and right images, and ROIs (Regions Of Interest) for moving objects such as people are extracted from the depth image. In indoor scenes, static planes such as walls are often misclassified as moving regions. To resolve this, the depth image is projected onto the X-Z plane and a Hough transform is applied to determine the planes that make up the scene; scene elements such as walls are then excluded from the detected moving objects. We confirmed that the proposed method stabilizes the performance of SLAM, which requires static feature points.