• Title/Summary/Keyword: Vision Based Navigation

Design for Back-up of Ship's Navigation System using UAV in Radio Frequency Interference Environment (전파간섭환경에서 UAV를 활용한 선박의 백업항법시스템 설계)

  • Park, Sul Gee; Son, Pyo-Woong
    • Journal of Advanced Navigation Technology / v.23 no.4 / pp.289-295 / 2019
  • According to IALA (International Association of Lighthouse Authorities) recommendations, a maritime back-up navigation system for port approach requires a horizontal accuracy of 10 meters. eLoran, the leading back-up candidate able to satisfy this accuracy requirement, nevertheless performs poorly in some signal environments; in particular, noise caused by multipath and by electronic devices around the eLoran antenna degrades navigation performance. In this paper, a ship-based back-up navigation system using a UAV is designed to satisfy the horizontal accuracy requirement under radio frequency interference. To improve the eLoran signal environment, the UAV is equipped with a camera, an IMU sensor, and an eLoran antenna and receiver. The proposed system receives the eLoran signal through the UAV-based receiver and controls the UAV's position and attitude within the area around a landmark. Ship-based positioning that combines the eLoran signal with the vision and attitude information received from the UAV satisfies resilient and robust navigation requirements.
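
The core of the design is the hand-off from the UAV to the ship: the UAV supplies an eLoran fix for its own position, and the ship's position follows from the vision- and IMU-derived relative geometry. A minimal 2-D sketch of that hand-off is given below; the function name, the local ENU frame, and the simple planar rotation model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ship_position_from_uav(uav_elo_fix, rel_vec_uav_frame, uav_yaw_rad):
    """Translate a UAV-based eLoran fix into a ship position estimate.

    uav_elo_fix       : (2,) UAV position from the eLoran receiver [m, local ENU]
    rel_vec_uav_frame : (2,) vision-measured vector from UAV to ship, UAV body frame [m]
    uav_yaw_rad       : UAV yaw from its IMU, used to rotate into the local frame
    """
    c, s = np.cos(uav_yaw_rad), np.sin(uav_yaw_rad)
    R = np.array([[c, -s], [s, c]])   # 2-D body -> local ENU rotation
    return uav_elo_fix + R @ rel_vec_uav_frame

# Example: UAV fix at (100, 50) m, ship seen 30 m ahead of the UAV's nose
print(ship_position_from_uav(np.array([100.0, 50.0]),
                             np.array([30.0, 0.0]),
                             np.deg2rad(45.0)))
```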

Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro; Choi, Donggyu; Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.47-49 / 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area was the application of Deep Learning (DL) to computer vision, with outstanding everyday applications such as face recognition, object detection, and more. However, robotics in general still depends on human input in certain areas such as localization and navigation. In this paper, we propose a case study of robot navigation based on deep reinforcement learning technology. We examine the benefits of switching from traditional ROS-based navigation algorithms to machine learning approaches and methods. We describe the state of the art by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and Deep Reinforcement Learning (DRL) before focusing on visual navigation based on DRL. The case study is a prelude to real-life deployment in which a mobile navigational agent learns to navigate unknown areas.
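
As a concrete illustration of the reinforcement learning core that DRL builds on, the toy sketch below runs tabular Q-learning on a small grid world; DRL replaces the Q-table with a deep network fed by raw visual input. The grid, rewards, and hyperparameters are illustrative assumptions, not from the paper.

```python
import numpy as np

# Toy grid world: the agent learns to reach a goal cell.
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nr = min(max(r + dr, 0), SIZE - 1)
    nc = min(max(c + dc, 0), SIZE - 1)
    reward = 1.0 if (nr, nc) == GOAL else -0.01   # small per-step penalty
    return (nr, nc), reward, (nr, nc) == GOAL

rng = np.random.default_rng(0)
for _ in range(2000):                             # training episodes
    s, done = (0, 0), False
    while not done:
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s][a])
        s = s2
print("greedy action from start:", int(np.argmax(Q[(0, 0)])))
```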

Hybrid Learning for Vision-and-Language Navigation Agents (시각-언어 이동 에이전트를 위한 복합 학습)

  • Oh, Suntaek; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.9 / pp.281-290 / 2020
  • The Vision-and-Language Navigation (VLN) task is a complex intelligence problem that requires both visual and language comprehension skills. In this paper, we propose a new learning model for vision-and-language navigation agents. The model adopts hybrid learning, combining imitation learning based on demonstration data with reinforcement learning based on action rewards. It can therefore mitigate both the weakness of imitation learning, which can be biased toward the demonstration data, and the relatively low data efficiency of reinforcement learning. In addition, the proposed model uses a novel path-based reward function designed to overcome the shortcomings of existing goal-based reward functions. We demonstrate the high performance of the proposed model through various experiments using the Matterport3D simulation environment and the R2R benchmark dataset.
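
The two ingredients described in the abstract, a hybrid objective and a path-based reward, can be sketched compactly. Below, the reward is taken to be the decrease in distance to the nearest point of the reference path, and the hybrid objective is a weighted sum of the imitation and reinforcement losses; both the mixing weight and the nearest-point reward shape are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def path_based_reward(agent_pos, ref_path, prev_dist):
    """Reward the *decrease* in distance to the expert reference path,
    rather than only the distance to the final goal (goal-based reward)."""
    dists = np.linalg.norm(ref_path - agent_pos, axis=1)
    d = float(dists.min())          # distance to the nearest path point
    return prev_dist - d, d         # positive if we moved toward the path

def hybrid_loss(il_loss, rl_loss, lam=0.5):
    """Hybrid objective: weighted sum of imitation and reinforcement terms.
    lam is an assumed mixing weight, not a value from the paper."""
    return lam * il_loss + (1.0 - lam) * rl_loss

ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])   # toy reference path
r, d = path_based_reward(np.array([1.0, 0.5]), ref, prev_dist=1.0)
print(r, d)                         # 0.5, 0.5
print(hybrid_loss(0.7, 0.3))        # 0.5
```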

Road Recognition based Extended Kalman Filter with Multi-Camera and LRF (다중카메라와 레이저스캐너를 이용한 확장칼만필터 기반의 노면인식방법)

  • Byun, Jae-Min; Cho, Yong-Suk; Kim, Sung-Hoon
    • The Journal of Korea Robotics Society / v.6 no.2 / pp.182-188 / 2011
  • This paper describes a method of road tracking that uses vision and laser sensors to extract road boundaries (lane and curb) for the navigation of an intelligent transport robot in structured road environments. Road boundary information plays a major role in developing such an intelligent robot. For global navigation, we use a global positioning system by means of a global planner; local navigation is accomplished by recognizing the lane and curb that bound the road and estimating their location relative to the current robot pose with an EKF (Extended Kalman Filter) algorithm, assuming prior information about the road is available. The complete system has been tested on electric vehicles equipped with cameras, laser scanners, and GPS. Experimental results are presented to demonstrate the effectiveness of our combined laser and vision approach for detecting the road curb and lane boundaries.
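
A minimal filter sketch conveys the estimation step: track the robot's lateral offset and heading error relative to the road boundary, predicting with a constant-velocity road model and correcting with offset measurements from the lane/curb detectors. With this linear model the filter reduces to a standard Kalman filter; the state choice and noise values are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

dt, v = 0.1, 1.0                       # time step [s], forward speed [m/s]
x = np.zeros(2)                        # state: [lateral offset, heading error]
P = np.eye(2)
F = np.array([[1.0, v * dt],           # y_k+1 = y_k + v*dt*psi_k (small angle)
              [0.0, 1.0]])
Q = np.diag([1e-3, 1e-4])              # process noise
H = np.array([[1.0, 0.0]])             # lane/curb detectors observe the offset
R = np.array([[0.05]])                 # measurement noise

def kf_step(x, P, z):
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                      # innovation from boundary detection
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.30, 0.28, 0.25]:           # measured lateral offsets per frame [m]
    x, P = kf_step(x, P, np.array([z]))
print("filtered offset / heading error:", x)
```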

Design and Implementation of Unmanned Surface Vehicle JEROS for Jellyfish Removal (해파리 퇴치용 자율 수상 로봇의 설계 및 구현)

  • Kim, Donghoon; Shin, Jae-Uk; Kim, Hyongjin; Kim, Hanguen; Lee, Donghwa; Lee, Seung-Mok; Myung, Hyun
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.51-57 / 2013
  • Recently, jellyfish populations have grown rapidly because of global warming, the increase of marine structures, pollution, and other factors. The increased jellyfish population threatens the marine ecosystem and causes severe damage to fisheries, seaside power plants, and beach industries. To overcome this problem, researchers have developed manual jellyfish dissecting devices and pump systems for jellyfish removal. However, these systems require too many human operators and their cost-benefit ratio is poor. Thus, this paper presents the design, implementation, and experimental validation of an autonomous jellyfish removal robot system named JEROS. JEROS consists of an unmanned surface vehicle (USV), a device for jellyfish removal, an electrical control system, an autonomous navigation system, and a vision-based jellyfish detection system. The USV was designed as a twin-hull ship, and the jellyfish removal device consists of a net for gathering jellyfish and a blade-equipped propeller for dissecting them. The autonomous navigation system starts by generating an efficient path for jellyfish removal when the location of jellyfish is received from a remote server or recognized by the vision system. The location of JEROS is estimated by an IMU (Inertial Measurement Unit) and GPS, and jellyfish are eliminated while the path is tracked. The performance of vision-based jellyfish recognition, navigation, and jellyfish removal was demonstrated through field tests in the Masan and Jindong harbors on the southern coast of Korea.
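
The navigation loop reduces to tracking the generated removal path waypoint by waypoint from the IMU/GPS pose estimate. A toy proportional heading controller in that spirit is sketched below; the gain, reach radius, and function names are illustrative assumptions, not the JEROS control law.

```python
import math

def heading_command(pose, waypoint, k_p=1.0, reach_radius=2.0):
    """Proportional heading controller toward the next waypoint.

    pose     : (x, y, yaw) from the GPS/IMU estimate [m, m, rad]
    waypoint : (x, y) target on the removal path [m]
    Returns (yaw_rate_command, waypoint_reached).
    """
    x, y, yaw = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    desired = math.atan2(dy, dx)
    # wrap the heading error into [-pi, pi] before applying the gain
    err = math.atan2(math.sin(desired - yaw), math.cos(desired - yaw))
    return k_p * err, math.hypot(dx, dy) < reach_radius

cmd, reached = heading_command((0.0, 0.0, 0.0), (10.0, 10.0))
print(cmd, reached)   # ~0.785 rad/s toward the waypoint, not yet reached
```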

Multi-Range Approach of Stereo Vision for Mobile Robot Navigation in Uncertain Environments

  • Park, Kwang-Ho; Kim, Hyung-O; Baek, Moon-Yeol; Kee, Chang-Doo
    • Journal of Mechanical Science and Technology / v.17 no.10 / pp.1411-1422 / 2003
  • The detection of free space between obstacles in a scene is a prerequisite for the navigation of a mobile robot. For stereo vision-based navigation in particular, the correspondence problem between the two images is well known to be of crucial importance. This paper describes a multi-range approach to area-based stereo matching for grid mapping and visual navigation in uncertain environments. Camera calibration parameters are optimized by an evolutionary algorithm for successful stereo matching. To obtain reliable disparity information from both images, the stereo images are decomposed into three pairs of images at different resolutions based on the measured disparities. The advantage of the multi-range approach is that more reliable disparity is obtained in each defined range, because disparities from the high-resolution images are used for distant objects while disparities from the low-resolution images are used for close objects. The reliable disparity map is assembled through post-processing that rejects incorrect disparity information from each individual disparity map. The real distances from the disparity image are converted into an occupancy grid representation for the mobile robot. We have investigated the feasibility of the multi-range approach for obstacle detection and visual mapping through various experiments.
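
The range-dependent choice of resolution follows directly from the stereo depth relation Z = f·B/d: distant objects produce small disparities that need fine (high-resolution) steps, while close objects produce large disparities that remain matchable at low resolution. The sketch below illustrates this selection; the focal length, baseline, and range cut-offs are illustrative assumptions.

```python
import numpy as np

f_px, baseline_m = 700.0, 0.12           # focal length [px], stereo baseline [m]

def depth_from_disparity(d_px):
    return f_px * baseline_m / d_px      # Z = f*B/d, valid for d_px > 0

def pick_resolution(depth_m):
    """Far objects -> high-resolution pair (fine disparity steps);
    near objects -> low-resolution pair (large disparities stay matchable)."""
    if depth_m > 10.0:
        return "high"
    if depth_m > 3.0:
        return "medium"
    return "low"

for d in [4.0, 12.0, 40.0]:              # example disparities [px]
    z = depth_from_disparity(d)
    print(f"disparity {d:5.1f} px -> depth {z:5.2f} m -> {pick_resolution(z)} res pair")
```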

Vehicular Cooperative Navigation Based on H-SPAWN Using GNSS, Vision, and Radar Sensors (GNSS, 비전 및 레이더를 이용한 H-SPAWN 알고리즘 기반 자동차 협력 항법시스템)

  • Ko, Hyunwoo; Kong, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.11 / pp.2252-2260 / 2015
  • In this paper, we propose a vehicular cooperative navigation system using the GNSS, vision, and radar sensors that are frequently found in mass-produced cars. The proposed system is a variant of the Hybrid Sum-Product Algorithm over Wireless Networks (H-SPAWN) in which vision and radar sensors are used instead of radio ranging (i.e., UWB). The performance is compared and analyzed with respect to the sensors; in particular, the position estimation error decreased by about fifty percent when using radar compared to vision and radio ranging. In conclusion, the proposed system built on these widely available sensors can improve position accuracy over the conventional cooperative navigation system (i.e., H-SPAWN) and decrease implementation costs.
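
H-SPAWN itself is a belief-propagation algorithm; the sketch below shows only the flavor of one cooperative update, in which a vehicle nudges its GNSS fix so that its distance to a neighbor's broadcast position agrees with the radar-measured range. The single-neighbor setup and the update weight are illustrative assumptions, not the H-SPAWN message schedule.

```python
import numpy as np

def cooperative_update(own_pos, neighbor_pos, radar_range, weight=0.5):
    """Nudge the own GNSS fix along the line to the neighbor so the
    estimated inter-vehicle distance moves toward the radar range."""
    v = own_pos - neighbor_pos
    dist = np.linalg.norm(v)
    if dist < 1e-6:
        return own_pos
    residual = radar_range - dist      # >0: estimate is closer than the radar says
    return own_pos + weight * residual * (v / dist)

own = np.array([10.0, 2.0])            # noisy GNSS fix [m]
nbr = np.array([0.0, 0.0])             # neighbor's broadcast position [m]
print(cooperative_update(own, nbr, radar_range=11.0))
```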

Estimation of Angular Acceleration By a Monocular Vision Sensor

  • Lim, Joonhoo; Kim, Hee Sung; Lee, Je Young; Choi, Kwang Ho; Kang, Sung Jin; Chun, Sebum; Lee, Hyung Keun
    • Journal of Positioning, Navigation, and Timing / v.3 no.1 / pp.1-10 / 2014
  • Recently, monitoring of two-body ground vehicles carrying extremely hazardous materials has been considered one of the most important national issues, as accidents involving them incur large costs in terms of the national economy and social welfare. To monitor and counteract accidents promptly, an efficient methodology is required. For accident monitoring, GPS can be utilized in most cases. However, it is widely known that GPS cannot provide sufficient continuity in urban canyons and tunnels. To complement this weakness of GPS, this paper proposes an accident monitoring method based on a monocular vision sensor. The proposed method estimates angular acceleration from a sequence of image frames captured by the monocular vision sensor. The possibility of using angular acceleration to determine the occurrence of accidents such as jackknifing and rollover is investigated. The feasibility of the proposed method is evaluated in an experiment based on actual measurements.
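
The estimation chain is short: extract a vehicle heading from each image frame, then differentiate twice. A finite-difference sketch is shown below; the heading values, frame rate, and alert threshold are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def angular_acceleration(headings_rad, dt):
    """Heading sequence -> angular rate -> angular acceleration
    by successive finite differences."""
    omega = np.diff(headings_rad) / dt          # angular rate [rad/s]
    alpha = np.diff(omega) / dt                 # angular acceleration [rad/s^2]
    return omega, alpha

dt = 1.0 / 30.0                                 # 30 fps camera
headings = np.deg2rad([0.0, 0.5, 1.5, 3.5])     # headings extracted per frame
omega, alpha = angular_acceleration(headings, dt)
print("alpha [rad/s^2]:", alpha)
if np.any(np.abs(alpha) > 5.0):                 # assumed alert threshold
    print("possible jackknifing/rollover event")
```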

Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects

  • Jin, Taeseok
    • Journal of Information and Communication Convergence Engineering / v.11 no.1 / pp.24-29 / 2013
  • In this paper, we propose a fuzzy inference model for the navigation of a mobile robot that intelligently searches for a goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands from an ultrasonic sensor. Instead of the "physical sensor fusion" method, which generates the robot's trajectory directly from the environment model and sensory data, a "command fusion" method is used to govern the robot's motion. The navigation strategy combines fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environment, a command fusion technique is introduced in which the sensory data from the ultrasonic sensors and a vision sensor are fused into the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoiding, and decision-making processes that arise from the navigation interaction.
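
Command fusion can be sketched as two behaviors each proposing a steering command, blended by a fuzzy "obstacle is near" membership so that avoidance dominates close to obstacles and goal approach dominates otherwise. The membership shape and gains below are illustrative assumptions, not the paper's rule base.

```python
import numpy as np

def fuse_commands(goal_bearing, obstacle_bearing, obstacle_dist):
    """Blend goal-approach and obstacle-avoidance steering commands."""
    steer_goal = goal_bearing                        # steer toward the goal
    steer_avoid = -np.sign(obstacle_bearing) * 0.8   # steer away from obstacle
    # "obstacle is near" membership: 1 when touching, 0 beyond 2 m
    mu_near = float(np.clip(1.0 - obstacle_dist / 2.0, 0.0, 1.0))
    # weighted (defuzzified) blend of the two behavior commands
    return (1.0 - mu_near) * steer_goal + mu_near * steer_avoid

# Goal 30 deg left, obstacle 10 deg left and 0.5 m away -> avoidance dominates
print(fuse_commands(np.deg2rad(30), np.deg2rad(10), 0.5))
```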