• Title/Summary/Keyword: vision based navigation

Forest Fire Detection System using Drone Streaming Images (드론 스트리밍 영상 이미지 분석을 통한 실시간 산불 탐지 시스템)

  • Yoosin Kim
    • Journal of Advanced Navigation Technology, v.27 no.5, pp.685-689, 2023
  • The proposed system aims to detect forest fires in real time from stream data received from a drone-mounted camera. The number of wildfires has been increasing in recent years, and large-scale wildfires are becoming more frequent. To prevent forest fire damage, many experiments using drone cameras and vision analysis have been conducted; however, there are many challenges, such as network speed, pre-processing, and model performance, in detecting forest fires from the real-time streaming data of a flying drone. This study therefore applied image processing to capture five good image frames from the whole stream for vision analysis, and then developed an object detection model based on YOLO_v2. As a result, the classification model reached up to 93% accuracy on forest fire images, and the field test for model verification detected forest fires with about 70% accuracy.
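
As a rough illustration of the pipeline this abstract describes (select a few usable frames from the stream, then run a YOLO-family detector on them), consider the Python sketch below. It is not the authors' code: the sharpness criterion (Laplacian variance), the window size, and the `model` callable are assumptions standing in for details the abstract does not give.

```python
import cv2

def sharpest_frames(stream_url, window=30, n_frames=5):
    """Pick the sharpest frame from each window of a video stream.

    Sharpness is scored by the variance of the Laplacian, a common
    blur measure; the paper does not specify its selection criterion.
    """
    cap = cv2.VideoCapture(stream_url)
    selected, buffer = [], []
    while len(selected) < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        buffer.append((score, frame))
        if len(buffer) == window:          # keep the best frame per window
            buffer.sort(key=lambda t: t[0], reverse=True)
            selected.append(buffer[0][1])
            buffer.clear()
    cap.release()
    return selected

def detect_fire(frame, model):
    """Placeholder: run any YOLO-family detector on one frame."""
    return model(frame)                    # e.g. boxes plus class scores
```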

Path Planning Algorithm for UGVs Based on the Edge Detecting and Limit-cycle Navigation Method (Limit-cycle 항법과 모서리 검출을 기반으로 하는 UGV를 위한 계획 경로 알고리즘)

  • Lim, Yun-Won;Jeong, Jin-Su;An, Jin-Ung;Kim, Dong-Han
    • Journal of Institute of Control, Robotics and Systems, v.17 no.5, pp.471-478, 2011
  • The UGV (Unmanned Ground Vehicle) is not only widely used in various practical applications but is also currently being researched in many disciplines. In particular, obstacle avoidance is considered one of the most important technologies in the navigation of an unmanned vehicle. In this paper, we introduce a simple path planning algorithm for reaching a destination while avoiding a polygonal static obstacle. A path planned near the obstacle is much shorter than a path planned far from it, provided that both paths guarantee that the robot will not collide with the obstacle. To generate a path near the obstacle, we developed an algorithm that combines an edge detection method and a limit-cycle navigation method. The edge detection method, based on the Hough transform and IR sensors, finds an obstacle's edge, and the limit-cycle navigation method generates a path smooth enough to reach the detected edge. We also propose a novel algorithm that resolves local minima using a virtual wall in the local vision. Finally, we verify the performance of the proposed algorithm through simulations and experiments.
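
The limit-cycle navigation method mentioned here has a standard textbook form: a planar vector field whose trajectories converge onto a circle around the obstacle, giving a smooth approach path. A minimal sketch follows; the gains, radius, and step size are illustrative, not the paper's values.

```python
import numpy as np

def limit_cycle_step(p, center, r, cw=True, gain=1.0, dt=0.01):
    """One Euler step of the standard limit-cycle vector field.

    Trajectories converge onto a circle of radius r around `center`;
    `cw` selects the circling direction.
    """
    x, y = p - center
    s = r**2 - x**2 - y**2               # >0 inside, <0 outside the circle
    w = 1.0 if cw else -1.0
    v = np.array([w * y + gain * x * s,
                  -w * x + gain * y * s])
    return p + dt * v / (np.linalg.norm(v) + 1e-9)   # unit-speed step

# Trajectory spiraling smoothly onto a radius-0.5 circle around an edge point
p = np.array([2.0, 1.5])
path = [p]
for _ in range(2000):
    p = limit_cycle_step(p, center=np.array([0.0, 0.0]), r=0.5)
    path.append(p)
```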

3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor (광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템)

  • Joe, Young Jin;Oh, Hyun Min;Kim, Min Young
    • Journal of Institute of Control, Robotics and Systems, v.22 no.8, pp.579-584, 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system called an optical tracker is generally used. However, the optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. To compensate for this, an internal vision sensor is attached to the surgical instrument in this paper. By monitoring the target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the optical tracker's line of sight is occluded. To verify the system's effectiveness, a series of basic experiments was carried out, followed by an integration experiment. The experimental results show that the rotational error is bounded by a maximum of $1.32^{\circ}$ with a mean of $0.35^{\circ}$, and the translational error by a maximum of 1.72 mm with a mean of 0.58 mm. These results confirm that the proposed tool tracking method using an internal vision sensor is useful and effective in overcoming the occlusion problem of the optical tracker.
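
The reported errors are presumably the geodesic angle between the estimated and reference rotations and the Euclidean distance between positions; that exact procedure is an assumption on our part, but it is the standard choice for pose-tracking evaluation.

```python
import numpy as np

def rotation_error_deg(R_est, R_ref):
    """Geodesic angle between two 3x3 rotation matrices, in degrees."""
    cos_theta = (np.trace(R_ref.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def translation_error_mm(t_est, t_ref):
    """Euclidean distance between two position estimates (mm)."""
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_ref)))
```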

Three Dimensional Obstacle Detection for Indoor Navigation (실내 주행을 위한 3차원 장애물 검출)

  • Ko, Bok-Kyong;Woo, Dong-Min
    • Proceedings of the KIEE Conference, 1996.07b, pp.1251-1253, 1996
  • A vision processing system for mobile robots requires real-time processing and reliability for safe navigation. However, general stereo vision systems are not appropriate because of the correspondence problem of correlating points between two images. To determine the obstacle area, we use correspondences of line segments between two perspective images sequentially acquired by the camera. To simplify the correspondence, the matching of line segments is performed in the navigation space, based on the assumptions that the mobile robot navigates on a flat surface and that its motion between two frames is approximately known.
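
A hypothetical sketch of the matching step under the stated assumptions (flat floor, approximately known inter-frame motion): predict where each ground-plane segment should reappear after the robot's motion, then match it to the nearest observed segment. The coordinate conventions, the greedy strategy, and the tolerance are all illustrative, not taken from the paper.

```python
import numpy as np

def predict_segment(seg, dx, dy, dtheta):
    """Predict a ground-plane segment's pose after known robot motion.

    seg: (2, 2) array of endpoint coordinates in the robot frame.
    Static world points move by the inverse of the robot's motion
    (dx, dy, dtheta) when expressed in the new robot frame.
    """
    c, s = np.cos(-dtheta), np.sin(-dtheta)
    R = np.array([[c, -s], [s, c]])
    return (np.asarray(seg) - np.array([dx, dy])) @ R.T

def match_segments(segs1, segs2, dx, dy, dtheta, tol=0.1):
    """Greedy nearest-endpoint matching of predicted vs observed segments."""
    pairs = []
    for i, s1 in enumerate(segs1):
        pred = predict_segment(s1, dx, dy, dtheta)
        dists = [np.linalg.norm(pred - np.asarray(s2)) for s2 in segs2]
        j = int(np.argmin(dists))
        if dists[j] < tol:
            pairs.append((i, j))
    return pairs
```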

Development of Vision-based Lateral Control System for an Autonomous Navigation Vehicle (자율주행차량을 위한 비젼 기반의 횡방향 제어 시스템 개발)

  • Rho Kwanghyun;Steux Bruno
    • Transactions of the Korean Society of Automotive Engineers, v.13 no.4, pp.19-25, 2005
  • This paper presents a lateral control system for an autonomous navigation vehicle that was developed and tested by the Robotics Centre of Ecole des Mines de Paris in France. A robust lane detection algorithm was developed for detecting different types of lane markers in images taken by a CCD camera mounted on the vehicle. RTMaps, a software framework for developing vision and data fusion applications, especially in cars, was used for implementing the lane detection and lateral control. The lateral control was tested on urban roads in Paris, and a demonstration was shown to the public during the IEEE Intelligent Vehicle Symposium 2002. Over 100 people experienced the automatic lateral control. The demo vehicle ran stably at a speed of 130 km/h on a straight road and 50 km/h on a high-curvature road.
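
The abstract does not detail the lane detection algorithm; the generic Canny-plus-Hough sketch below, with proportional steering from the lane-center offset, conveys the idea only. The thresholds, region-of-interest mask, and gain are illustrative and not the paper's.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Canny edges + probabilistic Hough: a classic lane-marker pipeline."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    edges[: h // 2, :] = 0                     # keep the road region only
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines[:, 0, :]   # rows of (x1, y1, x2, y2)

def steering_command(lines, width, gain=0.005):
    """Proportional steering from the lateral offset of the lane center."""
    if len(lines) == 0:
        return 0.0
    xs = [(x1 + x2) / 2.0 for x1, _, x2, _ in lines]
    offset = np.mean(xs) - width / 2.0         # pixels right of image center
    return -gain * offset                      # steer back toward the center
```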

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok;Lee, Min-Jung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems, v.13 no.5, pp.434-443, 2007
  • The exploration of unknown environments is an important task for the new generation of mobile service robots, which are navigated by a number of methods using sensing systems such as sonar or vision. To fully utilize the strengths of both sonar and visual sensing, this paper presents a technique for localizing a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The mobile robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders as structural features. For ultrasonic sensors, these features carry range information in the form of a circular arc, generally called an RCD (Region of Constant Depth). Localization is the continual estimation of position, deduced from the robot's a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition procedure and a physically based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with results from sets of experiments using a mobile robot.
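
The abstract gives only the structure of the estimator, so the sketch below shows the generic EKF predict/update shape for a planar pose with a range measurement to a known feature (e.g., an RCD center). The motion model, measurement model, and noise terms are placeholders, not the paper's.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Motion update for x = (px, py, heading) with odometry u = (d, dtheta)."""
    d, dth = u
    th = x[2]
    x_pred = x + np.array([d * np.cos(th), d * np.sin(th), dth])
    F = np.array([[1, 0, -d * np.sin(th)],
                  [0, 1,  d * np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_range(x, P, z, landmark, R):
    """Correct the pose with one sonar range to a known feature position."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r = np.hypot(dx, dy)                       # predicted range
    H = np.array([[-dx / r, -dy / r, 0.0]])    # measurement Jacobian
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + (K @ np.array([z - r])).ravel()
    return x_new, (np.eye(3) - K @ H) @ P
```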

A Study of Line Recognition and Driving Direction Control On Vision based AGV (Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구)

  • Kim, Young-Suk;Kim, Tae-Wan;Lee, Chang-Goo
    • Proceedings of the KIEE Conference, 2002.07d, pp.2341-2343, 2002
  • This paper describes vision-based line recognition and driving direction control for an AGV (autonomous guided vehicle). A black stripe attached to the corridor floor is used as the navigation guide. A binary image of the guide stripe captured by a CCD camera is used, and a variable thresholding algorithm is applied to detect the guideline quickly and accurately. This low-cost line-tracking system runs efficiently using PC-based real-time vision processing. Steering control is handled by a controller driven by the guideline angle error. The method is tested on a typical AGV with a single camera in a laboratory environment.
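
The "variable thresholding algorithm" is not specified in the abstract; the sketch below uses OpenCV's adaptive mean threshold as a stand-in, fits a line to the stripe pixels, and steers proportionally on the angle error. The block size, pixel-count check, and gain are illustrative assumptions.

```python
import cv2
import numpy as np

def guide_line_angle(frame, block=31, c=10):
    """Segment the dark guide stripe with a local (variable) threshold,
    then fit a line and return its angle error from straight ahead."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, block, c)
    ys, xs = np.nonzero(mask)
    if len(xs) < 50:
        return None                          # stripe not visible
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return np.degrees(np.arctan2(vx, vy))    # 0 deg = stripe runs straight ahead

def steering(angle_err_deg, kp=0.02):
    """Proportional steering command from the guide-line angle error."""
    return 0.0 if angle_err_deg is None else -kp * angle_err_deg
```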

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin;Seo, Hoseong;Kim, Pyojin;Lee, Chung-Keun
    • Journal of Advanced Navigation Technology, v.19 no.2, pp.133-139, 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides the velocity input that guides a mobile system to a desired pose; this velocity is calculated from the feature difference between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment and determine the mobile system's location simultaneously, which is essential for operating unmanned systems in unknown environments. The trend of visual navigation is surveyed by examining foreign research cases related to visual navigation technology.
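
The velocity-from-feature-difference rule described for visual servoing is the classical image-based visual servoing (IBVS) law, $v = -\lambda L^{+}(s - s^{*})$. A minimal sketch for point features follows, assuming known (or estimated) feature depths Z and gain $\lambda$.

```python
import numpy as np

def interaction_matrix(points, Z):
    """Stack the classic 2x6 image-Jacobian rows for point features
    given normalized image coordinates (x, y) at estimated depth Z."""
    rows = []
    for (x, y), z in zip(points, Z):
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / z, y / z, 1 + y**2, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving the
    feature error s - s* to zero: v = -lam * pinv(L) @ (s - s*)."""
    L = interaction_matrix(s, Z)
    err = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -lam * np.linalg.pinv(L) @ err
```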

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Proceedings of the KSRS Conference, 2007.10a, pp.508-511, 2007
  • The car navigation system is one of the most important applications in telematics. A recent trend in car navigation is the use of real video captured by a camera mounted on the vehicle, because video can overcome the semantic gap between the map and the real world. In this paper, we suggest a visual car navigation system that visually represents navigation information for route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the visual car navigation system are graphical turn guidance and lane change guidance. We suggest a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented reality display functions. The core of the system is the visual navigation controller, which controls the other modules and dynamically determines the visual representation of navigation information according to a determination rule based on the current location and driving circumstances. We briefly show the implementation of the system.
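
The abstract names a determination rule but gives no fields or thresholds; the sketch below is a purely hypothetical illustration of what such a rule could look like, with invented state fields and distances.

```python
# Hypothetical sketch only: the paper describes rule-based selection of the
# overlay type from location and driving circumstances, but all field names
# and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class DrivingState:
    dist_to_turn_m: float      # along-route distance to the next maneuver
    lane_change_needed: bool   # route requires leaving the current lane
    speed_kmh: float

def select_overlay(state: DrivingState) -> str:
    """Choose which AR overlay to render on the live video."""
    if state.lane_change_needed and state.dist_to_turn_m < 500:
        return "lane_change_arrow"     # guide into the correct lane early
    if state.dist_to_turn_m < 150:
        return "turn_arrow_3d"         # graphical turn guidance at the junction
    return "route_ribbon"              # default path overlay
```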

  • PDF

Fuzzy Neural Network Based Sensor Fusion and It's Application to Mobile Robot in Intelligent Robotic Space

  • Jin, Tae-Seok;Lee, Min-Jung;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems, v.6 no.4, pp.293-298, 2006
  • In this paper, a sensor-fusion-based navigation method for the autonomous control of a miniature human interaction robot is presented. The method blends the optimality of a Fuzzy Neural Network (FNN) control algorithm with the knowledge representation and learning capabilities of the networked Intelligent Robotic Space (IRS). States of the robot and the IR space, for example the distance between the mobile robot and obstacles and the velocity of the mobile robot, are used as inputs to the fuzzy logic controller. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a sensor fusion technique is introduced in which the data of ultrasonic sensors and a vision sensor are fused in the identification process. Preliminary experiments and results demonstrate the merit of the introduced navigation control algorithm.
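
The combination of goal-approach and obstacle-avoidance rules can be illustrated with a two-rule fuzzy blend; the memberships, breakpoints, and gains below are invented for illustration, and the paper's FNN tuning is omitted.

```python
import numpy as np

def ramp_down(x, lo, hi):
    """1 below lo, 0 above hi, linear in between (a shoulder membership)."""
    return float(np.clip((hi - x) / (hi - lo), 0.0, 1.0))

def fuzzy_steer(obstacle_dist, goal_bearing):
    """Blend two rule sets with weighted-average defuzzification:

      IF obstacle NEAR THEN turn away from the goal side
      IF obstacle FAR  THEN turn toward the goal
    """
    near = ramp_down(obstacle_dist, 0.5, 2.0)   # fully "near" within 0.5 m
    far = 1.0 - near
    avoid = -0.8 * np.sign(goal_bearing)        # rad/s, steer away
    approach = 0.5 * goal_bearing               # rad/s, steer toward the goal
    return (near * avoid + far * approach) / (near + far + 1e-9)
```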