• Title/Summary/Keyword: 영상기반항법 (vision-based navigation)

Search Results: 55

Particle Filters using Gaussian Mixture Models for Vision-Based Navigation (영상 기반 항법을 위한 가우시안 혼합 모델 기반 파티클 필터)

  • Hong, Kyungwoo;Kim, Sungjoong;Bang, Hyochoong;Kim, Jin-Won;Seo, Ilwon;Pak, Chang-Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.47 no.4 / pp.274-282 / 2019
  • Vision-based navigation of unmanned aerial vehicles is a significant technology that can compensate for the vulnerability of the widely used GPS/INS integrated navigation system. However, existing image matching algorithms are not suitable for matching aerial images against a database. For this reason, this paper proposes particle filters using Gaussian mixture models to handle the matching between the aerial image and the database for vision-based navigation. The particle filters estimate the position of the aircraft by comparing the correspondences between the aerial image and the database under a Gaussian-mixture-model assumption. Finally, a Monte Carlo simulation is presented to demonstrate the performance of the proposed method.
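As a rough illustration of the Gaussian-mixture particle filter described in this abstract, the measurement update might weight each position particle by a Gaussian-mixture likelihood of its image-to-database matching residual and then resample. All names, the 1-D residual per particle, and the single-stage resampling are assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

def gmm_likelihood(residual, weights, means, sigmas):
    """Evaluate a 1-D Gaussian mixture density at `residual`.
    weights/means/sigmas are arrays describing the mixture components
    that model the aerial-image-to-database matching error."""
    comps = weights * np.exp(-0.5 * ((residual - means) / sigmas) ** 2) \
            / (sigmas * np.sqrt(2.0 * np.pi))
    return float(comps.sum())

def pf_update(particles, residuals, gmm, rng):
    """One measurement update: weight each particle (candidate aircraft
    position) by the GMM likelihood of its matching residual, normalize,
    and resample in proportion to the weights."""
    w = np.array([gmm_likelihood(r, *gmm) for r in residuals])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

Particles with small matching residuals dominate the resampling, concentrating the cloud around the most database-consistent position.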

Vision-based Navigation using Semantically Segmented Aerial Images (의미론적 분할된 항공 사진을 활용한 영상 기반 항법)

  • Hong, Kyungwoo;Kim, Sungjoong;Park, Junwoo;Bang, Hyochoong;Heo, Junhoe;Kim, Jin-Won;Pak, Chang-Ho;Seo, Songwon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.10 / pp.783-789 / 2020
  • This paper proposes a new method for vision-based navigation using semantically segmented aerial images. Vision-based navigation can compensate for the vulnerability of the GPS/INS integrated navigation system. However, due to the visual and temporal differences between the aerial image and the database image, existing image matching algorithms are difficult to apply to aerial navigation problems. For this reason, this paper proposes a matching method suited to flight, composed of navigational feature extraction through semantic segmentation followed by template matching. The proposed method shows excellent performance in both simulation and actual flight.
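The template-matching stage described above can be caricatured as a zero-mean normalized cross-correlation (NCC) search of a segmented template over a segmented map. The function name, the brute-force search, and the use of integer class labels are assumptions for this sketch; the paper's implementation may differ.

```python
import numpy as np

def ncc_match(label_map, template):
    """Exhaustive zero-mean NCC search of `template` over `label_map`;
    returns ((row, col), best_score) of the best alignment."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best_score, best_rc = -np.inf, (0, 0)
    H, W = label_map.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            patch = label_map[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            if denom == 0.0:            # constant patch: NCC undefined
                continue
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score
```

Matching on class labels rather than raw pixels is what makes the comparison robust to the visual and temporal differences the abstract mentions.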

Particle Filter Based Feature Points Tracking for Vision Based Navigation System (영상기반항법을 위한 파티클 필터 기반의 특징점 추적 필터 설계)

  • Won, Dae-Hee;Sung, Sang-Kyung;Lee, Young-Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.1 / pp.35-42 / 2012
  • In this study, a feature-point tracking algorithm using a particle filter is suggested for a vision-based navigation system. By applying a dynamic model of the feature point, tracking performance is improved in high-dynamic conditions where the conventional KLT (Kanade-Lucas-Tomasi) tracker cannot provide a solution. Furthermore, the particle filter is introduced to cope with the irregular characteristics of vision data. Post-processing of recorded vision data shows that the suggested algorithm tracks more robustly than KLT in high-dynamic conditions.
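The dynamic-model idea above can be sketched as a constant-velocity prediction for feature points: predicting where each point will appear lets the search window follow fast camera motion that a pure template tracker loses. The function names, the low-pass velocity blend, and `alpha` are illustrative assumptions, not the paper's filter.

```python
import numpy as np

def propagate_features(pts, vels, dt):
    """Constant-velocity dynamic model: predict each feature point's
    image location one frame ahead."""
    return pts + vels * dt

def update_velocity(pts_prev, pts_meas, vels, dt, alpha=0.7):
    """Blend the measured displacement with the previous velocity
    estimate (a simple low-pass) to smooth irregular vision data."""
    v_meas = (pts_meas - pts_prev) / dt
    return alpha * vels + (1.0 - alpha) * v_meas
```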

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin;Seo, Hoseong;Kim, Pyojin;Lee, Chung-Keun
    • Journal of Advanced Navigation Technology / v.19 no.2 / pp.133-139 / 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation technology are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides the mobile system to a desired pose; this velocity is calculated from the feature difference between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment while simultaneously determining the mobile system's location, which is essential for operating unmanned systems in unknown environments. Trends in visual navigation are identified by examining international research on visual navigation technology.
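The visual-servoing velocity computation mentioned above is classically written as v = -λ L⁺ (s - s*), where s and s* are the acquired and desired feature vectors and L is the interaction (image Jacobian) matrix. A minimal sketch, with the function name and the choice of pseudo-inverse assumed for illustration:

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Image-based visual servoing law: camera velocity that drives the
    image features s toward the desired features s_star, using the
    pseudo-inverse of the interaction matrix L and gain lam."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```

The feature error shrinks exponentially under this law when L is a good model of how features move with camera velocity.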

Relative Navigation for Autonomous Aerial Refueling Using Infra-red based Vision Systems (자동 공중급유를 위한 적외선 영상기반 상대 항법)

  • Yoon, Hyungchul;Yang, Youyoung;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.46 no.7 / pp.557-566 / 2018
  • In this paper, a vision-based relative navigation system is addressed for autonomous aerial refueling. In air-to-air refueling, it is assumed that the tanker carries the drogue and the receiver carries the probe. To obtain relative information from the drogue, a vision-based imaging technique using an infra-red camera is applied. In this process, the relative information is obtained by using Gaussian Least Squares Differential Correction (GLSDC) and Levenberg-Marquardt (LM), where the drogue geometric information calculated through image processing is used. These two approaches are analyzed through numerical simulations.
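A generic Levenberg-Marquardt loop for the kind of nonlinear least-squares pose problem mentioned above might look like the following sketch; `f`, `jac`, and the simple accept/reject damping schedule are illustrative, not the paper's GLSDC/LM implementation.

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, iters=50, mu=1e-2):
    """Minimal LM loop for min ||f(x)||^2, where f returns the residual
    vector (e.g. reprojection error of drogue markers) and jac its
    Jacobian with respect to the relative-pose parameters x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = f(x), jac(x)
        A = J.T @ J + mu * np.eye(len(x))      # damped normal equations
        dx = np.linalg.solve(A, -J.T @ r)
        if np.linalg.norm(f(x + dx)) < np.linalg.norm(r):
            x, mu = x + dx, mu * 0.5           # accept: trust the model more
        else:
            mu *= 2.0                          # reject: damp harder
    return x
```

Small `mu` behaves like Gauss-Newton; large `mu` behaves like a cautious gradient step, which is what makes LM robust far from the solution.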

Development of Real-Time Vision Aided Navigation Using EO/IR Image Information of Tactical Unmanned Aerial System in GPS Denied Environment (GPS 취약 환경에서 전술급 무인항공기의 주/야간 영상정보를 기반으로 한 실시간 비행체 위치 보정 시스템 개발)

  • Choi, SeungKie;Cho, ShinJe;Kang, SeungMo;Lee, KilTae;Lee, WonKeun;Jeong, GilSun
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.6 / pp.401-410 / 2020
  • In this study, a real-time tactical-UAS position compensation system based on image information is described; it was developed to compensate for the weakness of positional navigation during GPS signal interference and jamming/spoofing attacks. The tactical UAS (KUS-FT) is capable of automatic flight by switching from GPS/INS integrated navigation to DR/AHRS when the GPS signal is lost. In positional navigation, however, errors accumulate over time due to dead reckoning (DR) using airspeed and azimuth, which causes problems in UAS positioning and data-link antenna tracking. To minimize the accumulation of position error, we developed a system that, based on target data of a specific region acquired through the image sensor, calculates the position using the UAS attitude, the EO/IR (Electro-Optic/Infra-Red) azimuth and elevation, and numerical map data, and corrects the calculated position in real time. In addition, the function and performance of the image-information-based real-time UAS position compensation system have been verified by ground tests using a GPS simulator and by flight tests in DR mode.
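The geometry behind the position calculation above can be caricatured with a flat-earth sketch: if a target with known map coordinates is seen at a known azimuth and depression angle, the ground range follows from the height difference, and the UAS position can be backed out. The function name, the flat-earth model, and the zero-attitude simplification (gimbal angles already compensated for vehicle attitude) are all assumptions.

```python
import numpy as np

def uas_position_from_target(target_en, target_alt, uas_alt, az, el):
    """Back out the UAS horizontal position (east, north) from a
    geolocated map target seen at azimuth `az` (rad, from north) and
    depression angle `el` (rad, below the horizon). Flat-earth sketch."""
    h = uas_alt - target_alt        # height above the target point
    d = h / np.tan(el)              # ground range to the target
    east = target_en[0] - d * np.sin(az)
    north = target_en[1] - d * np.cos(az)
    return np.array([east, north])
```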

Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon;Lim, Jongwoo;Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.5 / pp.483-489 / 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) for large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input. It successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor dataset without using GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
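The binary descriptors mentioned above are compared by Hamming distance, which is what makes them cheap enough for real-time matching without a GPU. A brute-force sketch (the function name and threshold are assumptions; real systems use approximate search rather than this O(NM) loop):

```python
import numpy as np

def hamming_match(desc_query, desc_map, max_dist=40):
    """Brute-force match binary descriptors (uint8 rows) by Hamming
    distance; returns (query_idx, map_idx) pairs under max_dist."""
    # popcount lookup table for all byte values
    pop = np.array([bin(v).count("1") for v in range(256)], dtype=np.uint8)
    matches = []
    for i, q in enumerate(desc_query):
        d = pop[np.bitwise_xor(desc_map, q)].sum(axis=1)
        j = int(d.argmin())
        if d[j] <= max_dist:
            matches.append((i, j))
    return matches
```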

Experimental result of Real-time Sonar-based SLAM for underwater robot (소나 기반 수중 로봇의 실시간 위치 추정 및 지도 작성에 대한 실험적 검증)

  • Lee, Yeongjun;Choi, Jinwoo;Ko, Nak Yong;Kim, Taejin;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.3 / pp.108-118 / 2017
  • This paper presents experimental results of real-time sonar-based SLAM (simultaneous localization and mapping) using probability-based landmark recognition. The sonar-based SLAM is used for navigation of an underwater robot. Inertial sensor data from an IMU (Inertial Measurement Unit) and a DVL (Doppler Velocity Log) and external information from sonar image processing are fused by an Extended Kalman Filter (EKF) to obtain the navigation information. The vehicle location is estimated from the inertial sensor data and corrected by sonar data, which provides the relative position between the vehicle and a landmark on the bottom of the basin. To verify the proposed method, experiments were performed in a basin environment using the underwater robot yShark.
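The EKF fusion described above reduces, for a 2-D position state, to a predict step driven by DVL velocity and a correction from a sonar-derived relative position to a known landmark. The measurement model z = landmark - x (hence H = -I) and all names are illustrative assumptions, not the paper's full state design.

```python
import numpy as np

def ekf_predict(x, P, v, dt, Q):
    """Propagate the 2-D position with DVL velocity v over dt and
    inflate the covariance by the process noise Q."""
    return x + v * dt, P + Q

def ekf_update_landmark(x, P, z, landmark, R):
    """Correct with a sonar measurement of the landmark's position
    relative to the vehicle: z = landmark - x + noise, so H = -I."""
    H = -np.eye(2)
    y = z - (landmark - x)              # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Dead-reckoning drift grows through the predict step; each landmark sighting pulls the estimate (and its covariance) back toward the truth.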

Development of a Test Environment for Performance Evaluation of the Vision-aided Navigation System for VTOL UAVs (수직 이착륙 무인 항공기용 영상보정항법 시스템 성능평가를 위한 검증환경 개발)

  • Sebeen Park;Hyuncheol Shin;Chul Joo Chung
    • Journal of Advanced Navigation Technology / v.27 no.6 / pp.788-797 / 2023
  • In this paper, we introduce a test environment for a vision-aided navigation system for vertical take-off and landing (VTOL) unmanned aerial systems, where vision-aided navigation serves as an alternative when the global positioning system (GPS) is unavailable. It is efficient to use a virtual environment to test and evaluate a vision-aided navigation system under development, but no suitable equipment has yet been developed in Korea. The proposed test environment therefore evaluates the performance of the navigation system by generating input signals that model and simulate the system's operating environment and by monitoring its output signals. This paper comprehensively describes the research procedure, from the derivation of the requirements specification, through hardware/software design according to those requirements, to production of the test environment. The test environment was used to evaluate the vision-aided navigation algorithm we are developing and to conduct simulation-based pre-flight tests.

Vision-based Obstacle State Estimation and Collision Prediction using LSM and CPA for UAV Autonomous Landing (무인항공기의 자동 착륙을 위한 LSM 및 CPA를 활용한 영상 기반 장애물 상태 추정 및 충돌 예측)

  • Seongbong Lee;Cheonman Park;Hyeji Kim;Dongjin Lee
    • Journal of Advanced Navigation Technology / v.25 no.6 / pp.485-492 / 2021
  • Vision-based autonomous precision landing of UAVs requires precise position estimation and landing guidance. For a safe landing, the system must also judge the safety of the landing point with respect to ground obstacles and guide the landing only when safety is ensured. In this paper, we propose vision-based navigation and algorithms for determining the safety of the landing point for autonomous precision landing. For vision-based navigation, a CNN is used to detect the landing pad, and the detection information is used to derive an integrated navigation solution. In addition, a Kalman filter is designed and applied to improve position estimation performance. To determine the safety of the landing point, we perform obstacle detection and position estimation in the same manner and estimate the obstacle's velocity using the least squares method (LSM). Whether a collision with the obstacle will occur is determined from the CPA (closest point of approach) calculated using the obstacle's estimated state. Finally, we perform flight tests to verify the proposed algorithm.
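The LSM velocity estimate and the CPA test described above can be sketched directly: fit a line to timestamped obstacle positions, then find the time and distance of closest approach from the relative state. The function names and the clamping of the CPA time to the future are assumptions for this sketch.

```python
import numpy as np

def lsm_velocity(ts, positions):
    """Least-squares (LSM) fit of obstacle velocity from timestamped
    position estimates: the per-axis slope of position vs. time."""
    A = np.vstack([ts, np.ones_like(ts)]).T   # [t, 1] design matrix
    coef, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coef[0]                            # slope row = velocity

def closest_point_of_approach(p_rel, v_rel):
    """Time and distance at the closest point of approach for relative
    position p_rel and relative velocity v_rel (obstacle minus vehicle);
    t is clamped to >= 0 so past approaches are ignored."""
    vv = float(v_rel @ v_rel)
    t = 0.0 if vv == 0.0 else max(0.0, -float(p_rel @ v_rel) / vv)
    d = float(np.linalg.norm(p_rel + v_rel * t))
    return t, d
```

A collision check then reduces to comparing the CPA distance against a safety radius within a look-ahead time.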