• Title/Summary/Keyword: Visual simultaneous localization and mapping


Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor

  • 문종식;이병윤
    • 대한임베디드공학회논문지, Vol. 16, No. 3, pp. 107-111, 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor is limited by the high price of such a sensor. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we utilize a commercial visual-inertial odometry sensor to estimate the current position and attitude states. Based on these state estimates, the 2D LiDAR measurements describe the surrounding environment to create a point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, it was confirmed that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper.
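
The core step described in this abstract is to project each 2D LiDAR scan point into the world frame using the pose reported by the visual-inertial odometry sensor, and to accumulate the transformed points into the map. A minimal sketch of that projection, assuming the VIO pose is given as a rotation matrix and translation and the LiDAR-to-body extrinsic is known (all names are illustrative, not from the paper):

```python
import numpy as np

def scan_to_world(ranges, angles, R_wb, t_wb, T_bl=np.eye(4)):
    """Project a 2D LiDAR scan into the world frame.

    ranges, angles : 1D arrays describing the scan (meters, radians)
    R_wb, t_wb     : body pose from the VIO sensor (world <- body)
    T_bl           : LiDAR-to-body extrinsic calibration (4x4)
    """
    # Scan points in the LiDAR frame (z = 0 for a planar scanner)
    pts_l = np.stack([ranges * np.cos(angles),
                      ranges * np.sin(angles),
                      np.zeros_like(ranges),
                      np.ones_like(ranges)], axis=0)   # 4 x N homogeneous
    pts_b = T_bl @ pts_l                               # LiDAR -> body frame
    return (R_wb @ pts_b[:3]) + t_wb.reshape(3, 1)     # body -> world frame

# Accumulating these points over all scans yields the 3D point cloud map.
```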

An Evaluation System to Determine the Completeness of a Space Map Obtained by Visual SLAM

  • 김한솔;감제원;황성수
    • 한국멀티미디어학회논문지, Vol. 22, No. 4, pp. 417-423, 2019
  • This paper presents an evaluation system to determine the completeness of a space map obtained by a visual SLAM (Simultaneous Localization And Mapping) algorithm. The proposed system consists of three parts. First, the system detects the occurrence of loop closing to confirm that the user has acquired information from all directions. Thereafter, the acquired map is divided at regular intervals, and each area is checked for whether it contains enough map points to successfully estimate the user's position. Finally, to check the effectiveness of each map point, the system verifies whether the map points remain identifiable even at locations far from where they were acquired. Experimental results show that space maps whose completeness is confirmed by the proposed system have higher stability and accuracy in position estimation than maps that are not.
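
The second check described above, dividing the acquired map at regular intervals and testing whether each cell contains enough map points for reliable pose estimation, can be sketched as follows (the cell size, threshold, and function name are assumptions for illustration, not values from the paper):

```python
import numpy as np

def check_map_coverage(map_points, cell_size=1.0, min_points=50):
    """Divide the mapped area into regular cells and flag cells with
    too few map points to support reliable pose estimation.

    map_points : (N, 3) array of map point positions
    cell_size  : edge length of each grid cell (meters)
    min_points : threshold below which a cell is considered incomplete
    """
    xy = map_points[:, :2]
    cells = np.floor(xy / cell_size).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    incomplete = uniq[counts < min_points]
    return incomplete  # grid cells the user should revisit
```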

Image Enhancement for Visual SLAM in Low Illumination

  • 유동길;정지훈;전형준;한창완;박일우;오정현
    • 로봇학회논문지, Vol. 18, No. 1, pp. 66-71, 2023
  • As cameras have become primary sensors for mobile robots, vision-based Simultaneous Localization and Mapping (SLAM) has achieved impressive results with the recent development of computer vision and deep learning. However, visual information has the disadvantage that much of it disappears in a low-light environment. To overcome this problem, we propose an image enhancement method for performing visual SLAM in a low-light environment. Using a deep generative adversarial model and modified gamma correction, the quality of low-light images is improved. The proposed method produces less sharp images than the existing method, but it can be applied to ORB-SLAM in real time because it dramatically reduces the amount of computation. Experiments on the public TUM and VIVID++ datasets demonstrate the validity of the proposed method.
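
The abstract names modified gamma correction as one part of the enhancement pipeline but does not specify the modification, so the sketch below shows only plain gamma correction on an 8-bit image as a reference point (the gamma value and function name are assumptions):

```python
import numpy as np

def gamma_correct(image, gamma=0.5):
    """Brighten a low-light 8-bit image with standard gamma correction.

    Values of gamma < 1 lift dark pixels; the paper's 'modified' variant
    is not specified in the abstract, so this is the plain form.
    """
    normalized = image.astype(np.float32) / 255.0
    corrected = np.power(normalized, gamma)
    return (corrected * 255.0).astype(np.uint8)
```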

Performance Analysis of Optimization Method and Filtering Method for Feature-based Monocular Visual SLAM

  • 전진석;김효중;심덕선
    • 전기학회논문지, Vol. 68, No. 1, pp. 182-188, 2019
  • Autonomous mobile robots need SLAM (simultaneous localization and mapping) to estimate their location and simultaneously build a map of their surroundings. To achieve visual SLAM, an algorithm is needed that detects and extracts feature points from camera images and then estimates the camera pose and the 3D positions of the features. In this paper, we propose the MPROSAC algorithm, which combines MSAC and PROSAC, and compare the performance of an optimization method and a filtering method for feature-based monocular visual SLAM. Sparse Bundle Adjustment (SBA) is used as the optimization method and the extended Kalman filter as the filtering method.
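
The two ingredients behind the proposed MPROSAC, MSAC's cost for scoring hypotheses and PROSAC's quality-ordered sampling, can be sketched separately as below; how the paper combines them is not stated in the abstract, so this is only an illustration of the components (function names and the pool-growth rule are assumptions):

```python
import numpy as np

def msac_score(residuals, threshold):
    """MSAC cost: inlier residuals contribute their squared error,
    outliers contribute a constant penalty (threshold**2)."""
    sq = residuals ** 2
    return np.sum(np.minimum(sq, threshold ** 2))

def prosac_like_sampling(matches_sorted, n_sample, iteration, growth=20):
    """PROSAC-style sampling: draw minimal sets from a progressively
    growing pool of the best-ranked matches instead of all matches.

    matches_sorted : matches ordered by descending match quality
    """
    pool = min(len(matches_sorted), n_sample + iteration // growth)
    idx = np.random.choice(pool, n_sample, replace=False)
    return [matches_sorted[i] for i in idx]
```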

SLAM Method by Disparity Change and Partial Segmentation of Scene Structure

  • 최재우;이철희;임창경;홍현기
    • 전자공학회논문지, Vol. 52, No. 8, pp. 132-139, 2015
  • Visual SLAM (Simultaneous Localization And Mapping) using a camera is widely used to estimate the position of a robot. In general, visual SLAM estimates the camera motion over a continuous sequence from static feature points, so stable results are hard to obtain when many moving objects are present. This paper proposes a method to stabilize stereo-camera-based SLAM in such situations. First, a depth image is obtained from the stereo camera and the optical flow is computed. The disparity change is then calculated from the optical flow of the left and right images, and ROIs (Regions Of Interest) for moving objects, such as people, are extracted from the depth image. In indoor scenes, static planes such as walls are often misclassified as moving regions. To resolve this, the depth image is projected onto the X-Z plane and a Hough transform is applied to determine the planes that make up the scene; scene elements such as walls are then excluded from the previously detected moving-object regions. Experiments confirmed that the proposed method stabilizes the performance of SLAM, which requires static feature points.
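
The disparity change in the title follows from stereo geometry: a left-image pixel at (x, y) with disparity d matches the right-image pixel (x - d, y), so after motion the new disparity is d + u_l - u_r, where u_l and u_r are the horizontal optical-flow components of the left and right images. A minimal sketch of that computation (array names and the sign convention are assumptions):

```python
import numpy as np

def disparity_change(flow_left_u, flow_right_u, disparity):
    """Compute the per-pixel disparity change from left/right optical flow.

    flow_left_u, flow_right_u : horizontal flow components (H x W)
    disparity                 : current disparity map of the left image (H x W)
    """
    h, w = flow_left_u.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the right-image flow at the stereo-matched pixel (x - d, y)
    xr = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    u_r_matched = flow_right_u[ys, xr]
    # New disparity is d + u_l - u_r, so the change is u_l - u_r
    return flow_left_u - u_r_matched
```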

Mobile Robot Localization in Geometrically Similar Environment Combining Wi-Fi with Laser SLAM

  • Gengyu Ge;Junke Li;Zhong Qin
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 5, pp. 1339-1355, 2023
  • Localization is an active research topic in many areas, especially in the mobile robot field. Because the global positioning system (GPS) signal is weak indoors, alternative schemes include wireless signal transmitting and receiving solutions, laser rangefinders that build a map followed by a re-localization stage, visual positioning methods, and so on. Among wireless signal positioning techniques, Wi-Fi is the most common: Wi-Fi access points are installed in most indoor areas of human activity, and smart devices equipped with Wi-Fi modules can be seen everywhere. However, mobile robot localization using a Wi-Fi scheme usually lacks orientation information, and the distance error is large because of indoor signal interference. Another research direction, based mainly on laser sensors, is to actively sense the environment and achieve positioning. An occupancy grid map is built using the simultaneous localization and mapping (SLAM) method when the mobile robot enters the indoor environment for the first time; when the robot enters the environment again, it can localize itself against the known map. Nevertheless, this scheme works effectively only when the areas have salient geometric features. If the areas have similar scanning structures, such as a long corridor or similar rooms, the traditional methods always fail. To address the weaknesses of these two methods, this work proposes a coarse-to-fine paradigm and an improved localization algorithm that uses Wi-Fi to assist robot localization in a geometrically similar environment. Firstly, a grid map is built using laser SLAM. Secondly, a fingerprint database is built in the offline phase. Then, RSSI values are acquired in the localization stage to obtain a coarse position estimate. Finally, an improved particle filter method based on the Wi-Fi signal values is proposed to realize fine localization. Experimental results show that our approach is effective and robust for both global localization and the kidnapped-robot problem. The localization success rate reaches 97.33%, while the traditional method always fails.
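
The coarse stage described above, matching measured RSSI values against the offline fingerprint database, amounts to a nearest-neighbour search in signal space; the exact matching rule is not given in the abstract, so the k-nearest-neighbour averaging below is an assumption:

```python
import numpy as np

def coarse_wifi_localization(query_rssi, fingerprint_db, k=3):
    """Coarse localization by matching a query RSSI vector against a
    Wi-Fi fingerprint database (nearest neighbours in signal space).

    query_rssi     : dict {AP MAC address: RSSI in dBm}
    fingerprint_db : list of (position (x, y), {AP MAC: RSSI}) entries
    Returns the average position of the k closest fingerprints, which
    can then seed the particle filter for fine localization.
    """
    aps = sorted(query_rssi)
    q = np.array([query_rssi[a] for a in aps], dtype=float)

    def dist(entry_rssi):
        # Use a floor value (-100 dBm) for access points not seen in an entry
        v = np.array([entry_rssi.get(a, -100.0) for a in aps])
        return np.linalg.norm(q - v)

    ranked = sorted(fingerprint_db, key=lambda e: dist(e[1]))[:k]
    return np.mean([pos for pos, _ in ranked], axis=0)
```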

Visual-Attention Using Corner Feature Based SLAM in Indoor Environment

  • 신용민;이주호;서일홍;최병욱
    • 전자공학회논문지SC, Vol. 49, No. 4, pp. 90-101, 2012
  • Landmark selection is crucial for successful single-camera-based SLAM (Simultaneous Localization and Mapping). In particular, in an unknown environment there is no prior information about landmarks, so a technique for selecting landmarks automatically is required. In this paper, a visual attention system that models human visual attention is used to select landmarks automatically. In conventional visual attention systems, the edge is one of the important cues for attention. In a complex indoor environment, however, using edge responses is problematic: the normalization operation lowers the responses of edges in information-rich, complex regions and yields high values on featureless planes or at the boundaries between planes. In addition, because responses in four directions are used, the dimensionality of the feature increases and so does the amount of computation. To solve these problems, this paper proposes the use of corner features. By attending first to information-rich, complex regions, corner features also improve the accuracy of data association. Finally, experiments show that the visual attention system with corner features improves SLAM results over the conventional approach.
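
The abstract does not name a specific corner detector, so the sketch below uses the Harris corner response as one plausible way to build a corner-based saliency map for the attention system (the detector choice and parameters are assumptions):

```python
import cv2
import numpy as np

def corner_saliency(gray, block_size=2, ksize=3, k=0.04):
    """Use the Harris corner response as a saliency map so that attention
    is drawn to information-rich, corner-dense regions.

    gray : single-channel uint8 image
    Returns the corner response normalized to [0, 1].
    """
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    response = np.maximum(response, 0)          # keep corner-like responses only
    return response / (response.max() + 1e-12)
```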

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration

  • 이동화;김형진;명현
    • 제어로봇시스템학회논문지, Vol. 19, No. 5, pp. 457-461, 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through the 3D-RANSAC (RANdom SAmple Consensus) algorithm with 2D image features and depth data. To speed up feature extraction, parallel computation is performed with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud-based map.
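
The 3D-RANSAC visual odometry step, estimating the 6-DOF motion from matched 3D feature points of two RGB-D frames, can be sketched as a Kabsch (SVD) rigid-transform fit inside a RANSAC loop; the iteration count and inlier threshold below are illustrative assumptions:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst,
    both (N, 3) arrays of matched 3D points (Kabsch / SVD method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # fix an improper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def ransac_visual_odometry(src, dst, iters=200, thresh=0.05):
    """3-point RANSAC over 3D feature correspondences from two RGB-D frames."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = np.random.choice(len(src), 3, replace=False)
        R, t = estimate_rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best hypothesis
    return estimate_rigid_transform(src[best_inliers], dst[best_inliers])
```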

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps

  • 황서연;송재복
    • 제어로봇시스템학회논문지, Vol. 17, No. 2, pp. 164-170, 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method using both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamps are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. We use both the position and orientation of a lamp feature to accurately estimate the robot pose; the orientation is obtained by calculating the principal axis from the pixel distribution of the lamp area. Both corner and lamp features are used as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
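
The lamp orientation described above, the principal axis of the pixel distribution of the lamp area, is the dominant eigenvector of the covariance of the lamp pixels' coordinates. A minimal sketch (the mask representation and angle convention are assumptions):

```python
import numpy as np

def lamp_orientation(lamp_mask):
    """Estimate the orientation of a lamp region as the principal axis
    of its pixel distribution (largest eigenvector of the covariance).

    lamp_mask : boolean image, True on pixels belonging to the lamp
    Returns the orientation angle in radians, in [0, pi).
    """
    ys, xs = np.nonzero(lamp_mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]      # principal axis direction
    return np.arctan2(major[1], major[0]) % np.pi
```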

Condition-invariant Place Recognition Using Deep Convolutional Auto-encoder

  • 오정현;이범희
    • 로봇학회논문지, Vol. 14, No. 1, pp. 8-13, 2019
  • Visual place recognition is a widely researched area in robotics, as it is one of the fundamental requirements for autonomous navigation and simultaneous localization and mapping for mobile robots. However, place recognition in a changing environment is a challenging problem, since the same place looks different depending on the time of day, weather, and season. This paper presents a feature extraction method using a deep convolutional auto-encoder to recognize places under severe appearance changes. Given database and query image sequences from different environments, the convolutional auto-encoder is trained to predict the images of the desired environment. The training process is performed by minimizing the loss between the predicted image and the desired image. After training, the encoding part of the network transforms an input image into a low-dimensional latent representation, which can be used as a condition-invariant feature for recognizing places in a changing environment. Experiments were conducted to verify the effectiveness of the proposed method, and the results show that it outperforms existing methods.
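
Once the trained encoder produces a low-dimensional latent code per image, place recognition reduces to matching a query code against the database codes; the abstract does not state the matching metric, so the cosine-similarity search below is an assumption:

```python
import numpy as np

def match_place(query_latent, db_latents):
    """Match a query image's latent code against database latent codes
    by cosine similarity and return the index and score of the best match.

    query_latent : (D,) latent vector from the trained encoder
    db_latents   : (N, D) latent vectors of the database sequence
    """
    q = query_latent / np.linalg.norm(query_latent)
    db = db_latents / np.linalg.norm(db_latents, axis=1, keepdims=True)
    scores = db @ q
    return int(np.argmax(scores)), float(np.max(scores))
```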