• Title/Summary/Keyword: Vision Navigation System


VFH-based Navigation using Monocular Vision (단일 카메라를 이용한 VFH기반의 실시간 주행 기술 개발)

  • Park, Se-Hyun;Hwang, Ji-Hye;Ju, Jin-Sun;Ko, Eun-Jeong;Ryu, Juang-Tak;Kim, Eun-Yi
    • Journal of Korea Society of Industrial Information Systems / v.16 no.2 / pp.65-72 / 2011
  • In this paper, a real-time monocular-vision-based navigation system is developed for disabled people, in which online background learning and a vector field histogram (VFH) are used to identify obstacles and recognize avoidable paths. The proposed system works in three steps: obstacle classification, occupancy grid map generation, and VFH-based path recommendation. First, obstacles are discriminated in the images by subtraction against a background model that is learned in real time. Based on the classification results, a 32×24 occupancy map is then produced, each cell of which represents its risk on 10 gray levels. Finally, a polar histogram is drawn from the occupancy map, and the sectors corresponding to the valley are chosen as safe paths. To assess its effectiveness, the proposed system was tested with a variety of obstacles indoors and outdoors, where it showed an accuracy of 88%. Moreover, it showed superior performance compared with sensor-based navigation systems, which supports its feasibility for assistive devices for disabled people.
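The occupancy-map-to-polar-histogram step described above can be sketched in a few lines (a minimal numpy sketch; the grid size, sector count, and distance weighting here are illustrative assumptions, not the parameters from the paper):

```python
import numpy as np

def polar_histogram(grid, n_sectors=36):
    """VFH-style polar obstacle histogram from an occupancy grid.

    grid holds per-cell risk values (0 = free); the robot is assumed
    to sit at the bottom-centre of the grid, facing up.
    """
    h, w = grid.shape
    robot_r, robot_c = h - 1, (w - 1) / 2.0
    hist = np.zeros(n_sectors)
    for r in range(h):
        for c in range(w):
            if grid[r, c] == 0:
                continue
            dy, dx = robot_r - r, c - robot_c        # cell relative to robot
            angle = np.arctan2(dy, dx)               # 0..pi for cells ahead
            sector = int(angle / np.pi * n_sectors) % n_sectors
            hist[sector] += grid[r, c] / (1.0 + np.hypot(dy, dx))
    return hist

def safe_sectors(hist, threshold):
    """Sectors below the threshold form the 'valley' of candidate paths."""
    return np.where(hist < threshold)[0]
```

A wide run of consecutive safe sectors would then be recommended as the travel direction.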

Representing Navigation Information on Real-time Video in Visual Car Navigation System

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Korean Journal of Remote Sensing / v.23 no.5 / pp.365-373 / 2007
  • Car navigation is a key application of geographic information systems and telematics. A recent trend in car navigation systems is to use real video captured by a camera mounted on the vehicle, because video has more power to represent the real world than a conventional map. In this paper, we suggest a visual car navigation system that visually represents route guidance. It can improve drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid directly on the video. The system integrates real-time data acquisition, conventional route finding and guidance, computer vision, and augmented-reality display. We also designed a visual navigation controller, which controls the other modules and dynamically determines how navigation information is visually represented according to the current location and driving circumstances. We briefly show an implementation of the system.

Forest Fire Detection System using Drone Streaming Images (드론 스트리밍 영상 이미지 분석을 통한 실시간 산불 탐지 시스템)

  • Yoosin Kim
    • Journal of Advanced Navigation Technology / v.27 no.5 / pp.685-689 / 2023
  • The system proposed in this study aims to detect forest fires in real-time stream data received from a drone camera. The number of wildfires has been increasing recently, and large-scale wildfires are becoming more frequent. To prevent forest-fire damage, many experiments using drone cameras and vision analysis are being actively conducted; however, there are many challenges, such as network speed, pre-processing, and model performance, in detecting forest fires from the real-time streaming data of a flying drone. This study therefore applied image-processing steps to select five good frames for vision analysis from the whole stream, and then developed an object detection model based on YOLO_v2. As a result, the classification performance on forest-fire images reached up to 93% accuracy, and a field test for model verification detected forest fires with about 70% accuracy.
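The frame-selection step can be sketched as a simple sharpness filter (hypothetical: the paper does not state its criterion for a "good" frame, so a Laplacian-variance score is assumed here):

```python
import numpy as np

def sharpness(frame):
    """Variance of a 4-neighbour Laplacian response; higher = sharper."""
    lap = (-4.0 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def pick_frames(frames, n=5):
    """Keep the indices of the n sharpest frames from a streamed buffer."""
    ranked = sorted(range(len(frames)), key=lambda i: sharpness(frames[i]),
                    reverse=True)
    return sorted(ranked[:n])
```

Filtering out blurred or transmission-corrupted frames before inference keeps the detector's input quality stable despite network jitter.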

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.101-108 / 2019
  • Thanks to recent improvements in computer processing speed and image-processing technology, research on combining information from cameras with existing GNSS (Global Navigation Satellite System) and dead reckoning is being actively carried out. In this study, a vision-based positioning assistance algorithm was developed to estimate the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision positioning algorithm was developed by combining the vision-based algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyse the effect of velocity precision. The analysis confirmed that position accuracy improves by about 4% when vision information is added, compared with the existing GNSS/on-board positioning algorithm.
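The stereo distance estimate rests on the standard triangulation relation Z = f·B/d (focal length f in pixels, baseline B in metres, disparity d in pixels); a minimal sketch, with made-up numbers in the example:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Distance to a matched point from rectified stereo: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. an 800 px focal length, 0.4 m baseline and 64 px disparity
# place the object 5.0 m ahead
```

Differencing such distances over time gives the velocity used to correct the navigation solution.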

An Optimal Position and Orientation of Stereo Camera (스테레오 카메라의 최적 위치 및 방향)

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Jung, Sung-Hun
    • Journal of Advanced Navigation Technology / v.17 no.3 / pp.354-360 / 2013
  • A stereo vision analysis was performed for the motion and depth control of unmanned vehicles. In stereo vision, depth information in three-dimensional coordinates can be obtained by triangulation after matching points between the stereo images. However, there are always triangulation errors, for several reasons. Such errors can be alleviated by careful arrangement of the camera position and orientation. In this paper, an approach to determining the optimal position and orientation of the cameras for unmanned vehicles is presented.

Development of Vision-based Lateral Control System for an Autonomous Navigation Vehicle (자율주행차량을 위한 비젼 기반의 횡방향 제어 시스템 개발)

  • Rho Kwanghyun;Steux Bruno
    • Transactions of the Korean Society of Automotive Engineers / v.13 no.4 / pp.19-25 / 2005
  • This paper presents a lateral control system for an autonomous navigation vehicle that was developed and tested by the Robotics Centre of the Ecole des Mines de Paris in France. A robust lane-detection algorithm was developed for detecting different types of lane markers in images taken by a CCD camera mounted on the vehicle. RTMaps, a software framework for developing vision and data-fusion applications, especially in cars, was used to implement the lane detection and lateral control. The lateral control was tested on urban roads in Paris, and the demonstration was shown to the public during the IEEE Intelligent Vehicle Symposium 2002. Over 100 people experienced the automatic lateral control. The demo vehicle could run stably at a speed of 130 km/h on straight roads and 50 km/h on high-curvature roads.

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.41-43 / 2021
  • The Robot Operating System (ROS) has been a prominent and successful framework in the robotics industry and academia. However, the framework has long been focused on, and limited to, robot navigation and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. For this particular case, the system uses OpenCV libraries to manipulate the data from the camera and provide face-detection capabilities to the robot while it navigates an indoor environment. The whole system was implemented and tested on the latest Turtlebot3 and Raspberry Pi4 hardware.


Vision-based Navigation using Semantically Segmented Aerial Images (의미론적 분할된 항공 사진을 활용한 영상 기반 항법)

  • Hong, Kyungwoo;Kim, Sungjoong;Park, Junwoo;Bang, Hyochoong;Heo, Junhoe;Kim, Jin-Won;Pak, Chang-Ho;Seo, Songwon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.10 / pp.783-789 / 2020
  • This paper proposes a new method for vision-based navigation using semantically segmented aerial images. Vision-based navigation can compensate for the vulnerability of a GPS/INS integrated navigation system. However, because of the visual and temporal differences between the aerial image and the database image, existing image-matching algorithms are difficult to apply to aerial navigation problems. For this reason, this paper proposes a matching method suitable for flight, composed of navigational feature extraction through semantic segmentation followed by template matching. The proposed method shows excellent performance in both simulation and actual flight.
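The "segmentation followed by template matching" pipeline can be illustrated with a toy matcher on label maps (an exhaustive search minimising label disagreement; the paper's actual matching score and search strategy are not specified here):

```python
import numpy as np

def match_template(seg_map, template):
    """Slide a label template over a segmented map and return the
    top-left offset with the fewest disagreeing labels."""
    H, W = seg_map.shape
    h, w = template.shape
    best_pos, best = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            mism = np.count_nonzero(seg_map[r:r + h, c:c + w] != template)
            if best is None or mism < best:
                best_pos, best = (r, c), mism
    return best_pos, best
```

Matching on class labels rather than raw pixels is what makes the search robust to the visual and temporal differences between the aerial image and the database image.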

Position estimation and navigation control of mobile robot using mono vision (단일 카메라를 이용한 이동 로봇의 위치 추정과 주행 제어)

  • Lee, Ki-Chul;Lee, Sung-Ryul;Park, Min-Yong;Kim, Hyun-Tai;Kho, Jae-Won
    • Journal of Institute of Control, Robotics and Systems / v.5 no.5 / pp.529-539 / 1999
  • This paper suggests a new image-analysis method and an indoor navigation control algorithm for mobile robots using a mono vision system. To reduce the positional uncertainty that accumulates as the robot travels around the workspace, we propose a new visual landmark-recognition algorithm with a 2-D graph world model that describes the workspace as only a rough plane figure. The suggested algorithm was implemented on our mobile robot and tested in a real corridor using an extended Kalman filter. Its validity and performance were verified by showing that, over the resulting navigation trajectory, the trajectory deviation error stayed under 0.075 m and the position estimation error under 0.05 m.
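The landmark correction inside the extended Kalman filter reduces, per axis, to a standard measurement update; a scalar sketch (the variances in the comments are illustrative, and the paper's full EKF also tracks heading and cross-covariances):

```python
def kalman_update(x, P, z, R):
    """Fuse a landmark-derived position fix z (variance R) into the
    dead-reckoned estimate x (variance P) along one axis."""
    K = P / (P + R)          # Kalman gain: how much to trust the fix
    x_new = x + K * (z - x)  # corrected position
    P_new = (1.0 - K) * P    # reduced uncertainty after the update
    return x_new, P_new
```

Each recognized landmark pulls the estimate toward the fix and shrinks P, which is exactly how the growing odometry uncertainty is kept bounded.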


Towing Tank Test assuming the Collision between Ice-going Ship and Ice Floe and Measurement of Ice Floe's Motion using Machine Vision Inspection (내빙선과 유빙의 충돌을 가정한 예인수조실험 및 머신비전검사를 이용한 유빙의 운동 계측)

  • Kim, Hyo-Il;Jun, Seung-Hwan
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2015.10a / pp.33-34 / 2015
  • The traffic and cargo volume passing through the Arctic route (NSR) have gradually increased. Ship-ice collision is one of the biggest factors threatening the safe navigation of ice-going ships, and many researchers are trying to reveal the ship-ice collision mechanism. In this study, tests in which a model ship is made to collide with disk-shaped synthetic ice are carried out in a towing tank. The ice floe's motion (velocity and trajectory) is then measured by machine vision inspection.
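Measuring the floe's velocity from machine vision can be sketched as centroid tracking between detection masks (the pixel scale and frame interval are hypothetical inputs here, not the towing-tank setup):

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary floe-detection mask."""
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

def floe_velocity(mask_t0, mask_t1, dt, m_per_px):
    """Mean floe velocity (m/s) between two frames dt seconds apart."""
    disp_m = (centroid(mask_t1) - centroid(mask_t0)) * m_per_px
    return disp_m / dt
```

Chaining the centroids over every frame yields the floe's trajectory as well as its velocity.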
