Title/Summary/Keyword: Vision Navigation System


Design and Implementation of Unmanned Surface Vehicle JEROS for Jellyfish Removal (해파리 퇴치용 자율 수상 로봇의 설계 및 구현)

  • Kim, Donghoon;Shin, Jae-Uk;Kim, Hyongjin;Kim, Hanguen;Lee, Donghwa;Lee, Seung-Mok;Myung, Hyun
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.51-57 / 2013
  • Recently, the number of jellyfish has grown rapidly because of global warming, the increase in marine structures, and pollution. The growing jellyfish population threatens the marine ecosystem and causes serious damage to fisheries, seaside power plants, and beach tourism. To address this problem, researchers have developed manual jellyfish-dissecting devices and pump systems for jellyfish removal; however, these systems require many human operators, and their cost-benefit ratio is poor. This paper therefore presents the design, implementation, and field experiments of JEROS, an autonomous jellyfish-removal robot system. JEROS consists of an unmanned surface vehicle (USV), a device for jellyfish removal, an electrical control system, an autonomous navigation system, and a vision-based jellyfish detection system. The USV was designed as a twin-hull ship, and the removal device consists of a net for gathering jellyfish and a blade-equipped propeller for dissecting them. The autonomous navigation system generates an efficient path for jellyfish removal once the jellyfish location is received from a remote server or recognized by the vision system; the location of JEROS is estimated by an IMU (Inertial Measurement Unit) and GPS, and jellyfish are eliminated while the robot tracks the path. The performance of vision-based jellyfish recognition, navigation, and jellyfish removal was demonstrated through field tests in Masan and Jindong harbors on the southern coast of Korea.
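
To make the path-generation step concrete, here is a minimal Python sketch of a lawnmower (boustrophedon) coverage sweep followed by a bearing computation toward the next waypoint. The abstract only says the planner produces "an efficient path", so the lawnmower pattern and the names `lawnmower_path` and `heading_to` are illustrative assumptions, not the JEROS implementation.

```python
import math

def lawnmower_path(x_min, x_max, y_min, y_max, spacing):
    """Generate a boustrophedon (lawnmower) coverage path over a
    rectangle, a common choice for sweeping a patch of sea surface.
    Returns a list of (x, y) waypoints."""
    waypoints, y, left_to_right = [], y_min, True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

def heading_to(current, target):
    """Bearing (rad) from the current GPS/IMU position estimate to the
    next waypoint, to be fed to the USV's heading controller."""
    return math.atan2(target[1] - current[1], target[0] - current[0])

# Example: sweep a 40 m x 20 m patch around a reported jellyfish swarm.
path = lawnmower_path(0.0, 40.0, 0.0, 20.0, spacing=5.0)
print(len(path), "waypoints; first heading:",
      round(heading_to((0.0, -5.0), path[0]), 2), "rad")
```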

Particle Filters using Gaussian Mixture Models for Vision-Based Navigation (영상 기반 항법을 위한 가우시안 혼합 모델 기반 파티클 필터)

  • Hong, Kyungwoo;Kim, Sungjoong;Bang, Hyochoong;Kim, Jin-Won;Seo, Ilwon;Pak, Chang-Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.47 no.4 / pp.274-282 / 2019
  • Vision-based navigation of unmanned aerial vehicles is an important technology for mitigating the vulnerability of the widely used GPS/INS integrated navigation system. However, existing image-matching algorithms are not well suited to matching aerial images against a database. For this reason, this paper proposes a particle filter using Gaussian mixture models to handle the matching between aerial images and the database for vision-based navigation. The particle filter estimates the position of the aircraft by comparing correspondences between the aerial image and the database under a Gaussian-mixture assumption. Finally, a Monte Carlo simulation demonstrates the performance of the proposed method.
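
The following is a minimal sketch of the core idea as stated in the abstract: a particle filter whose measurement likelihood is a Gaussian mixture over candidate image-to-database match locations. The mixture weights, noise levels, and resampling scheme here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_likelihood(pos, means, weights, sigma):
    """Likelihood of a particle position under a Gaussian mixture whose
    components are candidate image-to-database match locations."""
    d2 = np.sum((means - pos) ** 2, axis=1)
    return np.sum(weights * np.exp(-0.5 * d2 / sigma ** 2))

def pf_step(particles, velocity, means, weights, sigma_meas, sigma_proc):
    # Propagate particles with a (noisy) constant-velocity motion model.
    particles = particles + velocity + rng.normal(0, sigma_proc, particles.shape)
    # Weight each particle by the GMM measurement likelihood, then resample.
    w = np.array([gmm_likelihood(p, means, weights, sigma_meas) for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Example: three candidate matches, one dominant (near the true position).
matches = np.array([[100.0, 200.0], [130.0, 190.0], [90.0, 230.0]])
weights = np.array([0.6, 0.25, 0.15])
particles = rng.normal([95.0, 195.0], 10.0, size=(500, 2))
particles = pf_step(particles, np.array([2.0, 3.0]), matches, weights, 5.0, 1.0)
print("position estimate:", particles.mean(axis=0))
```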

Road Recognition based Extended Kalman Filter with Multi-Camera and LRF (다중카메라와 레이저스캐너를 이용한 확장칼만필터 기반의 노면인식방법)

  • Byun, Jae-Min;Cho, Yong-Suk;Kim, Sung-Hoon
    • The Journal of Korea Robotics Society / v.6 no.2 / pp.182-188 / 2011
  • This paper describes a method of road tracking for the navigation of an intelligent transport robot in structured road environments, using a camera and a laser scanner to extract the road boundary (lane and curb). Road-boundary information plays a major role in developing such robots. Global navigation relies on GPS positioning through a global planner, while local navigation recognizes the road lane and curb and estimates their location relative to the robot with an EKF (Extended Kalman Filter) algorithm, assuming prior information about the road. The complete system was tested on electric vehicles equipped with cameras, laser scanners, and GPS. Experimental results demonstrate the effectiveness of the combined laser and vision approach for detecting the curb and the lane boundary.
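
As a concrete illustration of the EKF fusion step, the sketch below fuses a camera lane-offset measurement with a laser-scanner curb measurement into a small state vector. The two-state model and the noise values are assumptions for illustration; the paper's actual state and measurement models are not given in the abstract.

```python
import numpy as np

# Assumed state: [lateral offset to lane boundary (m), heading error (rad)].
x = np.array([0.50, 0.05])
P = np.eye(2) * 0.2

def kf_update(x, P, z, H, R):
    """Measurement update of the (extended) Kalman filter; with linear
    offset measurements the Jacobian H is constant."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Both sensors are reduced to a lateral-offset measurement: the camera
# from the detected lane marking, the LRF from the detected curb.
H = np.array([[1.0, 0.0],
              [1.0, 0.0]])
R = np.diag([0.05, 0.02])            # LRF assumed the more precise sensor
z = np.array([0.42, 0.40])
x, P = kf_update(x, P, z, H, R)
print("fused lateral offset: %.3f m" % x[0])
```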

3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor (광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템)

  • Joe, Young Jin;Oh, Hyun Min;Kim, Min Young
    • Journal of Institute of Control, Robotics and Systems / v.22 no.8 / pp.579-584 / 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system called an optical tracker is generally used. However, an optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. To compensate for this drawback, this paper attaches an internal vision sensor to the surgical instrument. By monitoring a target marker pattern attached to the patient with this vision sensor, the instrument can be tracked even when the optical tracker's line of sight is occluded. A series of basic experiments, followed by an integration experiment, verifies the system's effectiveness. The results show a rotational error bounded by a maximum of $1.32^{\circ}$ with a mean of $0.35^{\circ}$, and a translational error bounded by a maximum of 1.72 mm with a mean of 0.58 mm. These results confirm that the proposed tool-tracking method using an internal vision sensor effectively overcomes the occlusion problem of the optical tracker.
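
The occlusion fallback amounts to a chain of rigid transforms: the tool pose in the tracker frame can be recovered from the previously observed patient-marker pose, the marker pose seen by the internal camera, and a fixed camera-to-tool calibration. A minimal sketch under an assumed frame convention ($T_{AB}$ maps points from frame $B$ into frame $A$), with translation-only example values:

```python
import numpy as np

def inv_T(T):
    """Invert a 4x4 rigid-body transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def tool_in_tracker(T_tracker_marker, T_cam_marker, T_tool_cam):
    """Tool pose in the tracker frame while the optical line of sight is
    blocked: tracker->marker (last optical observation), marker->camera
    (internal vision sensor), camera->tool (fixed calibration)."""
    return T_tracker_marker @ inv_T(T_cam_marker) @ inv_T(T_tool_cam)

# Translation-only example values (all rotations identity).
T_tracker_marker = np.eye(4); T_tracker_marker[:3, 3] = [0.0, 0.0, 1.0]
T_cam_marker = np.eye(4);     T_cam_marker[:3, 3] = [0.0, 0.0, 0.3]
T_tool_cam = np.eye(4);       T_tool_cam[:3, 3] = [0.0, 0.05, 0.0]
print(tool_in_tracker(T_tracker_marker, T_cam_marker, T_tool_cam)[:3, 3])
```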

A Study on the Construction of an Omnidirectional Vision System for the Mobile Robot's Autonomous Navigation (이동로봇의 자율주행을 위한 전방향 비젼 시스템의 구현에 관한 연구)

  • Ko, Min-Su;Han, Young-Hwan;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Proceedings of the IEEK Conference / 2001.06e / pp.17-20 / 2001
  • This study concerns the autonomous navigation of a mobile robot using an omnidirectional vision system as its sensor, which makes it possible to capture, in real time, the movement of objects and walls approaching the robot from any direction while shortening the processing time. The field of view is extended with a reflective (mirror) system so that the robot observes all directions over $2\pi$ around itself; through simple image processing, an unwarping transform, and continuous monitoring of the angle and distance to surrounding obstacles, the robot recognizes the three-dimensional world. The study consists of three parts: Part 1 covers the design of the omnidirectional vision system, Part 2 the image processing, and Part 3 an evaluation of the implemented system through comparative experiments and three-dimensional measurements.
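
A minimal sketch of the unwarping idea: in a catadioptric image, direction around the robot maps to angle around the image centre, so each column of the panoramic strip is sampled along one radial line. The resolution and radius limits below are illustrative assumptions.

```python
import math

def pixel_to_bearing(u, v, cx, cy):
    """Azimuth of an obstacle seen in a catadioptric (mirror) image:
    angle around the image centre maps directly to direction around
    the robot; radius maps (monotonically) to distance."""
    return math.atan2(v - cy, u - cx)

def unwarp_table(img_size, r_min, r_max, width, height):
    """Lookup table mapping each (col, row) of a width x height
    panoramic strip to a source pixel in the annular mirror image."""
    cx = cy = img_size / 2.0
    table = []
    for col in range(width):                    # one column per azimuth
        theta = 2.0 * math.pi * col / width
        for row in range(height):               # sample along the radius
            r = r_min + (r_max - r_min) * row / (height - 1)
            table.append((col, row,
                          int(cx + r * math.cos(theta)),
                          int(cy + r * math.sin(theta))))
    return table

lut = unwarp_table(480, 60, 200, 360, 64)
print("first mapping (col, row, src_x, src_y):", lut[0])
```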


A Hybrid Positioning System for Indoor Navigation on Mobile Phones using Panoramic Images

  • Nguyen, Van Vinh;Lee, Jong-Weon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.3 / pp.835-854 / 2012
  • In this paper, we propose a novel positioning system for indoor navigation that helps a user navigate easily to desired destinations in an unfamiliar indoor environment using a mobile phone. The system requires only the phone's basic built-in sensors, such as a camera and a compass. It tracks the user's position and orientation with a vision-based approach that utilizes $360^{\circ}$ panoramic images captured in the environment. To improve the robustness of the vision-based method, we exploit the digital compass that is widely installed on modern mobile phones. This hybrid solution outperforms existing mobile-phone positioning methods, reducing the position-estimation error to around 0.7 meters. In addition, to let the proposed system run independently on the phone without additional hardware or external infrastructure, we employ a modified version of a fast and robust feature-matching scheme using Histogrammed Intensity Patches. Experiments show that the proposed positioning system achieves good performance while running on a mobile phone, with a response time of around 1 second.
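
One way to read the hybrid idea: the compass gates the vision matches, rejecting panoramas that look similar but were captured facing the wrong way. A minimal sketch under that reading; the `heading`/`score` fields and the 25-degree gate are illustrative assumptions, and the HIP feature matching itself is not reproduced here.

```python
import math

def angle_diff(a, b):
    """Smallest signed difference between two headings (rad)."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def best_match(candidates, compass_heading, max_dev=math.radians(25)):
    """Pick the panorama match whose stored capture heading is most
    consistent with the phone's compass, discarding visually similar
    but wrongly oriented candidates."""
    ok = [c for c in candidates
          if abs(angle_diff(c["heading"], compass_heading)) < max_dev]
    return max(ok, key=lambda c: c["score"]) if ok else None

candidates = [
    {"pos": (3.0, 7.5), "heading": math.radians(90), "score": 0.81},
    {"pos": (12.0, 2.0), "heading": math.radians(270), "score": 0.84},
]
# The second candidate scores higher visually but fails the compass gate.
print(best_match(candidates, math.radians(95)))
```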

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Proceedings of the KSRS Conference / 2007.10a / pp.508-511 / 2007
  • Car navigation is one of the most important applications in telematics. A recent trend in car navigation systems is the use of real video captured by a camera mounted on the vehicle, because video can bridge the semantic gap between the map and the real world. In this paper, we suggest a visual car navigation system that visually represents navigation information and route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the visual car navigation system are graphical turn guidance and lane-change guidance. We propose a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented-reality display functions. The core of the system is the visual navigation controller, which coordinates the other modules and dynamically determines how navigation information is visually represented according to a determination rule based on the current location and driving circumstances. We briefly describe the implementation of the system.
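
The visual navigation controller is described as a rule-based selector of representation methods. Below is a minimal sketch of such a determination rule, with hypothetical mode names and distance thresholds; the paper's actual rules are not given in the abstract.

```python
def guidance_mode(dist_to_turn_m, lane, target_lane):
    """Rule-based choice of what to overlay on the live video, in the
    spirit of the paper's visual navigation controller."""
    if lane != target_lane and dist_to_turn_m < 300:
        return "lane_change_arrow"   # prompt the lane change first
    if dist_to_turn_m < 100:
        return "turn_arrow_3d"       # AR arrow registered on the road
    if dist_to_turn_m < 500:
        return "turn_icon_2d"        # schematic icon, no registration yet
    return "route_line"              # plain route overlay while cruising

for d in (800, 400, 250, 60):
    print(d, "m ->", guidance_mode(d, lane=2, target_lane=1))
```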


Observability Analysis of a Vision-INS Integrated Navigation System Using Landmark (비전센서와 INS 기반의 항법 시스템 구현 시 랜드마크 사용에 따른 가관측성 분석)

  • Won, Dae-Hee;Chun, Se-Bum;Sung, Sang-Kyung;Cho, Jin-Soo;Lee, Young-Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.38 no.3 / pp.236-242 / 2010
  • A GNSS/INS integrated system cannot provide navigation solutions when no satellites are available. To overcome this problem, a vision sensor is integrated into the system. Since a vision-aided integrated system generally uses only feature points to compute navigation solutions, it suffers from an observability problem. In this case, additional landmarks, i.e., points whose positions are known a priori, can improve observability. In this paper, observability is evaluated using the TOM/SOM (Total/Stripped Observability Matrix) and its eigenvalues. The feature-point-only case always exhibits observability problems, whereas the landmark case becomes fully observable after the $2^{nd}$ measurement update. Consequently, landmarks ensure full observability, and the system performance can be improved.
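
The flavor of this analysis can be reproduced numerically: stack $H\Phi^{k}$ rows over successive updates and check the rank. In the toy model below, a feature point is caricatured as a relative-only measurement and a landmark as an absolute position fix; with the landmark, the system reaches full rank after the 2nd update, mirroring the abstract's conclusion. The four-state model is an illustrative assumption, far simpler than a real INS error model.

```python
import numpy as np

def observability_matrix(F, H_list):
    """Stacked observability matrix [H1; H2 F; H3 F^2; ...] over
    successive measurement updates."""
    rows, Phi = [], np.eye(F.shape[0])
    for H in H_list:
        rows.append(np.atleast_2d(H) @ Phi)
        Phi = F @ Phi
    return np.vstack(rows)

# Toy 4-state model: [x, y, vx, vy] with constant-velocity dynamics.
dt = 0.1
F = np.eye(4); F[0, 2] = F[1, 3] = dt
H_feature = np.array([[1.0, -1.0, 0.0, 0.0]])  # relative-only (caricature)
H_landmark = np.eye(2, 4)                      # absolute x, y fix

O_f = observability_matrix(F, [H_feature] * 4)
O_l = observability_matrix(F, [H_landmark] * 2)
print("feature-only rank:", np.linalg.matrix_rank(O_f), "of 4")   # stays rank-deficient
print("landmark rank after 2 updates:", np.linalg.matrix_rank(O_l), "of 4")  # full rank
```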

Design for Back-up of Ship's Navigation System using UAV in Radio Frequency Interference Environment (전파간섭환경에서 UAV를 활용한 선박의 백업항법시스템 설계)

  • Park, Sul Gee;Son, Pyo-Woong
    • Journal of Advanced Navigation Technology / v.23 no.4 / pp.289-295 / 2019
  • A maritime back-up navigation system for port approach requires a horizontal accuracy of 10 meters according to IALA (International Association of Lighthouse Authorities) recommendations. eLoran, the leading back-up candidate able to satisfy this accuracy requirement, nevertheless suffers degraded performance depending on the signal environment; in particular, noise caused by multipath and by electronic devices around the eLoran antenna affects navigation performance. In this paper, a ship-based back-up navigation system using a UAV is designed to satisfy the horizontal accuracy requirement under radio-frequency interference. To improve the eLoran signal environment, the UAV is equipped with a camera, an IMU sensor, and an eLoran antenna and receiver. The proposed system receives the eLoran signal through the UAV-based receiver and controls the UAV's position and attitude within an area around a landmark. Ship-based positioning using the eLoran signal together with the vision and attitude information received from the UAV satisfies the resilient and robust navigation requirements.
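
The core geometric idea, as described, is transferring a clean eLoran fix obtained aloft down to the ship. A minimal sketch, assuming the camera gives the ship's offset in the UAV body frame and the IMU gives yaw; the paper's actual estimator is not specified in the abstract.

```python
import numpy as np

def ship_position(uav_eloran_pos, offset_uav_frame, uav_yaw):
    """Translate the position fixed at the UAV (clean eLoran reception
    aloft) down to the ship, using the vision-measured ship-to-UAV
    offset and the UAV's IMU yaw to rotate it into the local frame."""
    c, s = np.cos(uav_yaw), np.sin(uav_yaw)
    R = np.array([[c, -s], [s, c]])          # body -> local rotation
    return uav_eloran_pos - R @ offset_uav_frame

pos = ship_position(np.array([1000.0, 2000.0]),  # UAV fix (m, local frame)
                    np.array([5.0, -3.0]),        # ship offset seen by camera
                    np.radians(30))               # UAV yaw from IMU
print("ship position estimate:", pos)
```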

Autonomous-Flight Drone Algorithm Using Computer Vision and GPS (컴퓨터 비전과 GPS를 이용한 드론 자율 비행 알고리즘)

  • Kim, Junghwan;Kim, Shik
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.3 / pp.193-200 / 2016
  • This paper introduces an autonomous navigation flight algorithm for low- to mid-priced drones using computer vision and GPS. Existing drone operation mainly relies on either inputting the flight path into the drone's software before the flight or following signals transmitted from a controller. This paper instead introduces an algorithm that allows the autonomous navigation system to locate a specific place, a specific shape, or a specific space within an area the user wishes to search. Technology originally developed for the defense industry is implemented on lower-cost hobby drones without changing their hardware, and the proposed algorithm is used to maximize performance. When the user supplies an image of the place to be found, the camera mounted on the drone processes the incoming video and searches for the corresponding area of interest. With this algorithm, the autonomous navigation flight system of low- to mid-priced drones is expected to be applicable to a variety of industries.
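
The "search for a user-supplied image" step can be sketched with plain normalized cross-correlation template matching (here via OpenCV, an assumption; the paper does not name its matcher). The threshold and the synthetic test frame are illustrative.

```python
import cv2
import numpy as np

def find_target(frame_gray, template_gray, threshold=0.8):
    """Normalized cross-correlation template matching: returns the pixel
    location of the user-supplied reference image in the camera frame,
    or None if the best match is too weak."""
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    return max_loc if max_val >= threshold else None

# Synthetic test: embed the template inside a larger "aerial" frame.
frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
template = frame[200:260, 300:380].copy()
print("target found at pixel:", find_target(frame, template))  # (300, 200)
```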