• Title/Summary/Keyword: Navigation feature


Localization and Autonomous Navigation Using GPU-based SIFT and Virtual Force for Mobile Robots (GPU 기반 SIFT 방법과 가상의 힘을 이용한 이동 로봇의 위치 인식 및 자율 주행 제어)

  • Tak, Myung Hwan;Joo, Young Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.10
    • /
    • pp.1738-1745
    • /
    • 2016
  • In this paper, we present a localization and autonomous navigation method for mobile robots using a GPU (Graphics Processing Unit)-based SIFT (Scale-Invariant Feature Transform) algorithm and the virtual force method. First, we propose a localization method that recognizes landmarks with the GPU-based SIFT algorithm and updates the robot's position with an extended Kalman filter. We then apply the A* algorithm for path planning and the virtual force method for autonomous navigation of the mobile robot. Finally, we demonstrate the effectiveness and applicability of the proposed method through experiments on a mobile robot running OPRoS (Open Platform for Robotic Services).
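
The virtual force approach steers the robot by summing an attractive pull toward the goal and repulsive pushes away from nearby obstacles. Below is a minimal sketch of one such steering step, assuming point obstacles, a holonomic robot, and illustrative gains; it does not reproduce the paper's GPU-based SIFT, EKF, or A* components.

```python
import numpy as np

def virtual_force_step(robot_pos, goal_pos, obstacles,
                       k_att=1.0, k_rep=0.5, rep_radius=1.5):
    """Compute one steering vector from attractive and repulsive forces.

    robot_pos, goal_pos : (2,) positions in world coordinates (meters)
    obstacles           : (N, 2) array of obstacle positions
    Gains and the repulsion radius are illustrative, not from the paper.
    """
    robot_pos = np.asarray(robot_pos, dtype=float)
    goal_pos = np.asarray(goal_pos, dtype=float)

    # Attractive force pulls the robot straight toward the goal.
    f_att = k_att * (goal_pos - robot_pos)

    # Repulsive forces push the robot away from nearby obstacles only.
    f_rep = np.zeros(2)
    for obs in np.atleast_2d(obstacles):
        diff = robot_pos - obs
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < rep_radius:
            # Magnitude grows as the robot approaches the obstacle.
            f_rep += k_rep * (1.0 / dist - 1.0 / rep_radius) * diff / dist**3

    return f_att + f_rep


# Example: one step toward a goal at (5, 5) with an obstacle near the path.
force = virtual_force_step([0.0, 0.0], [5.0, 5.0], [[2.5, 2.4]])
print("steering vector:", force)
```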

A Study on Obstacle Detection for Mobile Robot Navigation (이동형 로보트 주행을 위한 장애물 검출에 관한 연구)

  • Yun, Ji-Ho;Woo, Dong-Min
    • Proceedings of the KIEE Conference
    • /
    • 1995.11a
    • /
    • pp.587-589
    • /
    • 1995
  • The safe navigation of a mobile robot requires recognition of the environment through vision processing. To be guided along the given path, the robot must acquire information about where the walls and corridor are located, and unexpected obstacles must be detected as rapidly as possible for safe obstacle avoidance. In this paper, we assume that the mobile robot navigates on a flat surface. Under this assumption, we simplify the correspondence problem by using the free navigation surface and matching features in that coordinate system. The vision processing system adopts edge line segments as its basic feature. The line segments extracted from both images are matched on the free navigation surface. According to the matching result, each line segment is labeled with attributes indicating obstacle or free surface, and the 3D shape of the obstacle is interpreted. The proposed vision processing method is verified through various simulations and experiments using real images.
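
The feature used here is the edge line segment. Below is a minimal sketch of that extraction step with OpenCV (Canny edges followed by a probabilistic Hough transform); the thresholds are illustrative, and the paper's matching of segments on the free navigation surface is not reproduced.

```python
import cv2
import numpy as np

def extract_edge_segments(gray_image,
                          canny_low=50, canny_high=150,
                          min_length=40, max_gap=5):
    """Extract edge line segments from a grayscale image.

    Returns an (N, 4) array of segments as (x1, y1, x2, y2).
    Thresholds are illustrative placeholders.
    """
    edges = cv2.Canny(gray_image, canny_low, canny_high)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=min_length,
                               maxLineGap=max_gap)
    if segments is None:
        return np.empty((0, 4), dtype=int)
    return segments.reshape(-1, 4)


# Example with a synthetic image containing one straight edge.
img = np.zeros((240, 320), dtype=np.uint8)
cv2.line(img, (40, 200), (280, 60), 255, 2)
print(extract_edge_segments(img))
```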


Single Antenna Based GPS Signal Reception Condition Classification Using Machine Learning Approaches

  • Sanghyun Kim;Seunghyeon Park;Jiwon Seo
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.12 no.2
    • /
    • pp.149-155
    • /
    • 2023
  • In urban areas, it can be difficult to utilize global navigation satellite systems (GNSS) due to signal reflections and blockages. It is therefore crucial to detect reflected or blocked signals, because they significantly degrade GNSS positioning accuracy. In a previous study, a classifier for global positioning system (GPS) signal reception conditions was developed using three features and the support vector machine (SVM) algorithm, but its classification performance was limited. In this study, we developed an improved machine-learning-based method of classifying GPS signal reception conditions by adding a feature to the existing three, and we applied various machine learning classification algorithms. When tested with datasets collected in environments different from the training environment, the classification accuracy improved by nine percentage points compared to the existing method, reaching up to 58%.
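
The classification step can be sketched with scikit-learn: standardized features fed to an SVM, which can then be swapped for other classifiers as the study explores. The feature matrix, labels, and class meanings below are synthetic placeholders for the paper's per-signal measurements and its added fourth feature.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-signal measurements; the real features and
# reception-condition labels come from the collected GPS datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))        # four features per signal (illustrative)
y = rng.integers(0, 3, size=600)     # e.g., 0=LOS, 1=NLOS, 2=blocked (illustrative)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# SVM with standardized features; other classifiers can replace SVC here.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```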

Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images (달 지형 영상에서 특징점 검출 및 정합 기법의 성능 비교 분석)

  • Hong, Sungchul;Shin, Hyu-Soung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.4
    • /
    • pp.437-444
    • /
    • 2020
  • A lunar rover's optical camera provides navigation and terrain information in an exploration zone. However, with its near absence of atmosphere, the Moon has homogeneous terrain covered in dark soil, and in this extreme environment the rover has limited data storage and low computational capability. For successful exploration, it is therefore necessary to examine feature detection and matching methods that are robust to lunar terrain and environmental characteristics. In this research, SIFT, SURF, BRISK, ORB, and AKAZE are comparatively analyzed using lunar terrain images from a lunar rover. Experimental results show that SIFT and AKAZE are the most robust to lunar terrain characteristics. AKAZE detects fewer feature points than SIFT, but detects and matches them with high precision at the lowest computational cost, making it adequate for fast and accurate navigation. Although SIFT has the highest computational cost, it stably detects and matches the largest number of feature points. Because the rover periodically sends terrain images to Earth, where large amounts of imagery can be processed, SIFT is suitable for global 3D terrain map construction. The study results are expected to provide a guideline for applying feature detection and matching methods in future lunar exploration rovers.
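
A comparison of this kind can be sketched with OpenCV: detect and describe keypoints with each method, match them with a ratio test, and record keypoint counts, match counts, and timing. The synthetic image pair and default detector parameters are illustrative, and SURF is omitted because it is not bundled in default OpenCV builds.

```python
import time

import cv2
import numpy as np

def match_stats(detector, norm, img1, img2, ratio=0.75):
    """Detect, describe, and ratio-test match features between two images.

    Returns (keypoints in img1, good matches, elapsed seconds).
    """
    t0 = time.perf_counter()
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    good = []
    if des1 is not None and des2 is not None and len(des1) >= 2 and len(des2) >= 2:
        matcher = cv2.BFMatcher(norm)
        for pair in matcher.knnMatch(des1, des2, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])          # Lowe's ratio test
    return len(kp1), len(good), time.perf_counter() - t0


# Blurred random texture and a shifted copy stand in for two overlapping
# grayscale lunar terrain images.
rng = np.random.default_rng(0)
img1 = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (5, 5), 0)
img2 = np.roll(img1, 15, axis=1)

# SIFT uses float descriptors (L2 norm); AKAZE/ORB/BRISK use binary (Hamming).
detectors = {
    "SIFT": (cv2.SIFT_create(), cv2.NORM_L2),
    "AKAZE": (cv2.AKAZE_create(), cv2.NORM_HAMMING),
    "ORB": (cv2.ORB_create(), cv2.NORM_HAMMING),
    "BRISK": (cv2.BRISK_create(), cv2.NORM_HAMMING),
}
for name, (det, norm) in detectors.items():
    print(name, match_stats(det, norm, img1, img2))
```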

A Terrain Analysis System for Global Path Planning of Unmanned Ground Vehicle (무인지상차량의 전역경로계획을 위한 지형정보 분석 시스템)

  • Park, Won-Ik;Lee, Ho-Joo;Kim, Do-Jong
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.16 no.5
    • /
    • pp.583-589
    • /
    • 2013
  • In this paper, we propose a system that efficiently provides support maps containing grid-based terrain analysis information. To do this, we use the FDB, a GIS database that contains features with attached attributes and is composed of a number of features and feature classes. To create the support maps, the feature classes associated with each support map must be identified and located in a grid map. The proposed system uses an ontology model to semantically classify feature classes and a quad-tree data structure to find them quickly in the grid map. Our system is therefore expected to be useful for the global path planning of UGVs. In this paper, we show its feasibility through an experimental implementation.
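
The quad-tree lets queries over large homogeneous regions of the grid be answered without visiting every cell. Below is a minimal region quad-tree sketch, assuming a square power-of-two grid of feature-class IDs; the ontology-based classification of FDB feature classes is not reproduced.

```python
import numpy as np

class QuadTree:
    """Region quad-tree over a grid of feature-class IDs (illustrative).

    Uniform regions collapse into a single leaf, so queries over large
    homogeneous areas are answered without visiting every cell.
    Assumes a square grid whose side is a power of two.
    """

    def __init__(self, grid, r0=0, c0=0, size=None):
        self.r0, self.c0 = r0, c0
        self.size = size if size is not None else grid.shape[0]
        block = grid[r0:r0 + self.size, c0:c0 + self.size]
        if self.size == 1 or np.all(block == block[0, 0]):
            self.value, self.children = int(block[0, 0]), None
        else:
            half = self.size // 2
            self.value = None
            self.children = [QuadTree(grid, r0 + dr, c0 + dc, half)
                             for dr in (0, half) for dc in (0, half)]

    def count(self, feature_class):
        """Number of grid cells holding `feature_class` under this node."""
        if self.children is None:
            return self.size * self.size if self.value == feature_class else 0
        return sum(child.count(feature_class) for child in self.children)


# Example: 8x8 grid, class 0 = traversable, class 2 = water (illustrative IDs).
grid = np.zeros((8, 8), dtype=int)
grid[4:, 4:] = 2
tree = QuadTree(grid)
print("water cells:", tree.count(2))   # -> 16
```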

Improvement of Visual Path Following through Velocity Variation (속도 가변을 통한 영상교시 기반 주행 알고리듬 성능 향상)

  • Choi, I-Sak;Ha, Jong-Eun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.4
    • /
    • pp.375-381
    • /
    • 2011
  • This paper deals with improving visual path following by varying the velocity according to the coordinates of feature points. Visual path following first teaches the driving path by selecting milestone images, and then follows the route by comparing each milestone image with the current image. We follow the visual path following algorithm of Chen and Birchfield [8], which uses fixed translational and rotational velocities. We propose an algorithm that varies the translational velocity according to the driving condition: the translational velocity is adjusted according to the variation of the feature point coordinates in the image. Experimental results covering diverse indoor cases show the feasibility of the proposed algorithm.
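
The idea can be sketched as a small rule that scales the forward velocity by the mean pixel displacement of matched feature points between the milestone and current images; the velocity limits and reference displacement below are illustrative values, not the paper's.

```python
import numpy as np

def adaptive_translational_velocity(milestone_pts, current_pts,
                                    v_max=0.5, v_min=0.1, d_ref=40.0):
    """Scale forward velocity by how far matched feature points have shifted.

    milestone_pts, current_pts : (N, 2) pixel coordinates of matched feature
    points in the milestone and current images.
    v_max, v_min (m/s) and the reference displacement d_ref (pixels) are
    illustrative placeholders.
    """
    disp = np.linalg.norm(np.asarray(current_pts, dtype=float)
                          - np.asarray(milestone_pts, dtype=float), axis=1)
    mean_disp = float(np.mean(disp))
    # Small displacement -> features nearly aligned -> drive faster;
    # large displacement -> near a turn or far off-path -> slow down.
    scale = np.clip(1.0 - mean_disp / d_ref, 0.0, 1.0)
    return v_min + (v_max - v_min) * scale


print(adaptive_translational_velocity([[100, 120], [200, 80]],
                                       [[104, 121], [207, 83]]))
```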

A Study on the Construction of Locomotion Map of Motorized Wheelchair using a Camera Calibration (카메라 교정에 의한 전동휠체어의 위치 주행지도 구성에 관한 연구)

  • Shin, D.S.;Moon, C.H.;Hong, S.H.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1996 no.11
    • /
    • pp.95-98
    • /
    • 1996
  • In this paper, a path construction method for the autonomous navigation of a motorized wheelchair inside a building is proposed, based on the analysis of corridor images from a vision system. We detect vertical lines using the camera distortion parameters measured by camera calibration of a corridor image, and then extract feature points on those lines. By analyzing the distances between the feature points and determining which points are valid features, we reconstruct the corridor image into the vehicle's path.
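
A sketch of the two image-processing steps, undistorting a corridor image with calibrated parameters and collecting the columns of near-vertical edge segments, is given below using OpenCV; the intrinsics, distortion coefficients, and thresholds are placeholders rather than calibrated values.

```python
import cv2
import numpy as np

def undistort_corridor(image, fx, fy, cx, cy, dist_coeffs):
    """Undistort a corridor image using (placeholder) calibration results."""
    K = np.array([[fx, 0, cx],
                  [0, fy, cy],
                  [0,  0,  1]], dtype=np.float64)
    return cv2.undistort(image, K, np.asarray(dist_coeffs, dtype=np.float64))


def vertical_line_columns(gray, angle_tol_deg=10):
    """Return x-positions of near-vertical edge segments (door/wall edges)."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                            minLineLength=30, maxLineGap=5)
    columns = []
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            angle = np.degrees(np.arctan2(abs(y2 - y1), abs(x2 - x1)))
            if angle > 90 - angle_tol_deg:      # close to vertical
                columns.append((x1 + x2) / 2.0)
    return sorted(columns)


# Example: placeholder intrinsics and a synthetic image with one vertical edge.
img = np.zeros((240, 320), dtype=np.uint8)
cv2.line(img, (160, 10), (160, 230), 255, 2)
undistorted = undistort_corridor(img, fx=300.0, fy=300.0, cx=160.0, cy=120.0,
                                 dist_coeffs=[-0.2, 0.05, 0.0, 0.0, 0.0])
print(vertical_line_columns(undistorted))
```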


Condition-invariant Place Recognition Using Deep Convolutional Auto-encoder (Deep Convolutional Auto-encoder를 이용한 환경 변화에 강인한 장소 인식)

  • Oh, Junghyun;Lee, Beomhee
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.8-13
    • /
    • 2019
  • Visual place recognition is a widely researched area in robotics, as it is one of the fundamental requirements for autonomous navigation and simultaneous localization and mapping for mobile robots. However, place recognition in a changing environment is a challenging problem, since the same place looks different depending on the time of day, weather, and season. This paper presents a feature extraction method using a deep convolutional auto-encoder to recognize places under severe appearance changes. Given database and query image sequences from different environments, the convolutional auto-encoder is trained to predict the images of the desired environment by minimizing the loss between the predicted and desired images. After training, the encoding part of the network transforms an input image into a low-dimensional latent representation, which can be used as a condition-invariant feature for recognizing places in changing environments. Experiments were conducted to demonstrate the effectiveness of the proposed method, and the results showed that it outperformed existing methods.
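
A minimal sketch of such an auto-encoder in PyTorch is shown below: the network is trained to predict the desired-condition image from the query image, and after training only the encoder is kept to produce the latent place descriptor. The layer sizes, input resolution, and random tensors are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    """Small convolutional auto-encoder (illustrative architecture)."""

    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: 64x64 grayscale image -> low-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder: latent code -> predicted image in the desired condition.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


# One training step: predict the desired-condition image from the query image,
# minimizing the reconstruction loss between prediction and target.
model = ConvAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
query = torch.rand(8, 1, 64, 64)    # stand-in for query-condition images
target = torch.rand(8, 1, 64, 64)   # corresponding desired-condition images

pred, _ = model(query)
loss = nn.functional.mse_loss(pred, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At query time, only the encoder is used to produce the place descriptor.
with torch.no_grad():
    descriptor = model.encoder(query)
print(descriptor.shape)    # torch.Size([8, 128])
```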