• Title/Summary/Keyword: Vision navigation

RESEARCH ON AUTONOMOUS LAND VEHICLE FOR AGRICULTURE

  • Matsuo, Yosuke; Yukumoto, Isamu
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1993.10a / pp.810-819 / 1993
  • An autonomous land vehicle for agriculture (ALVA-II) was developed. A prototype vehicle was built by modifying a commercial tractor. A navigation sensor system with a geo-magnetic sensor performed the autonomous operations of ALVA-II, such as rotary tilling with headland turnings. A navigation sensor system with a machine vision system was also investigated to control ALVA-II so that it follows a work boundary.
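
The abstract does not specify ALVA-II's control law, so the following is only a minimal sketch of how a geo-magnetic (compass) heading can drive proportional heading-hold steering; the gain, steering limit, and toy yaw model are illustrative assumptions:

```python
def heading_error(target_deg, current_deg):
    """Smallest signed angle (deg) from the current heading to the target."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

def steer_command(target_deg, compass_deg, k_p=0.8, max_steer=30.0):
    """Proportional steering command, saturated at the steering limit."""
    err = heading_error(target_deg, compass_deg)
    return max(-max_steer, min(max_steer, k_p * err))

# Toy usage: settle from a 350 deg heading onto a 45 deg target row.
heading = 350.0
for _ in range(30):
    heading = (heading + 0.2 * steer_command(45.0, heading)) % 360.0
print(round(heading, 1))  # converges toward 45.0
```

A headland turn can then be sequenced as a timed fixed-steer arc followed by heading-hold on the reversed row direction.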

Particle Filter Based Feature Points Tracking for Vision Based Navigation System

  • Won, Dae-Hee; Sung, Sang-Kyung; Lee, Young-Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.1 / pp.35-42 / 2012
  • In this study, a feature-point tracking algorithm based on a particle filter is proposed for a vision-based navigation system. By applying a dynamic model of the feature point, tracking performance is improved under highly dynamic conditions where the conventional KLT (Kanade-Lucas-Tomasi) tracker fails to find a solution. Furthermore, the particle filter copes with the irregular characteristics of vision data. Post-processing of recorded vision data shows that the proposed algorithm tracks more robustly than KLT under highly dynamic conditions.
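
The paper's filter design details are not reproduced here; below is a minimal sketch of a particle filter for one feature point, assuming a constant-velocity dynamic model and a Gaussian pixel-measurement likelihood (the state layout, noise levels, and resampling rule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, dt=1.0,
                         q_pos=1.0, q_vel=0.5, r_px=2.0):
    """One predict/update/resample cycle for a single feature point.

    particles: (N, 4) array of [x, y, vx, vy] hypotheses (pixels, px/frame).
    z: measured pixel position (2,) of the feature point.
    """
    n = len(particles)
    # Predict with the constant-velocity model plus process noise.
    particles[:, :2] += particles[:, 2:] * dt + rng.normal(0, q_pos, (n, 2))
    particles[:, 2:] += rng.normal(0, q_vel, (n, 2))
    # Re-weight by a Gaussian likelihood of the pixel measurement.
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / r_px**2)
    weights /= weights.sum() + 1e-300          # guard against total collapse
    # Systematic resampling when the effective sample size drops.
    if 1.0 / np.sum(weights**2) < n / 2:
        u = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = np.average(particles[:, :2], axis=0, weights=weights)
    return particles, weights, estimate

# Toy usage: 500 particles seeded near the first detection.
particles = np.tile([100.0, 100.0, 2.0, 0.0], (500, 1)) + rng.normal(0, 1, (500, 4))
weights = np.full(500, 1 / 500)
particles, weights, est = particle_filter_step(particles, weights,
                                               np.array([103.0, 100.5]))
```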

Integrated System for Autonomous Proximity Operations and Docking

  • Lee, Dae-Ro; Pernicka, Henry
    • International Journal of Aeronautical and Space Sciences / v.12 no.1 / pp.43-56 / 2011
  • An integrated guidance, navigation and control (GNC) system for autonomous proximity operations and the docking of two spacecraft was developed. The position maneuvers were determined through the integration of the state-dependent Riccati equation, formulated from nonlinear relative motion dynamics, and relative navigation using rendezvous laser vision (lidar) and a vision sensor system. In the vision sensor system, a switch between sensors was made along the approach phase in order to provide continuously effective navigation. As an extension of the rendezvous laser vision system, an automated terminal guidance scheme based on the Clohessy-Wiltshire state transition matrix was used to formulate a "V-bar hopping approach" reference trajectory. A proximity operations strategy was then adapted from the approach strategy used with the Automated Transfer Vehicle. The attitude maneuvers, determined from a linear quadratic Gaussian-type controller with quaternion-based attitude estimation using star trackers or a vision sensor system, provided precise attitude control and robustness under uncertainties in the moments of inertia and external disturbances. These functions were integrated into an autonomous GNC system that can perform proximity operations and meet all conditions for successful docking. A six-degree-of-freedom simulation was used to demonstrate the effectiveness of the integrated system.
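
The abstract names the Clohessy-Wiltshire (CW) state transition matrix as the basis of the terminal guidance. Below is a sketch of the standard in-plane CW matrix and the two-impulse targeting step from which a "V-bar hop" can be built; the paper's actual guidance logic is not reproduced, and the frame convention and toy numbers are assumptions:

```python
import numpy as np

def cw_stm(n, t):
    """Clohessy-Wiltshire in-plane state transition matrix.

    State is [x, y, xdot, ydot]: x radial, y along-track (V-bar);
    n is the target's mean motion (rad/s), t the transfer time (s).
    """
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3 * c,        0.0,  s / n,            2 * (1 - c) / n],
        [6 * (s - n * t),  1.0, -2 * (1 - c) / n, (4 * s - 3 * n * t) / n],
        [3 * n * s,        0.0,  c,                2 * s],
        [-6 * n * (1 - c), 0.0, -2 * s,            4 * c - 3],
    ])

def vbar_hop_v0(r0, r1, n, t):
    """Initial velocity that carries r0 to r1 in time t (two-impulse hop).

    The first impulse is this velocity minus the current velocity; the
    second impulse nulls the arrival velocity at r1.
    """
    phi = cw_stm(n, t)
    return np.linalg.solve(phi[:2, 2:], r1 - phi[:2, :2] @ r0)

# Toy usage: hop 100 m closer along V-bar over half an orbit in LEO.
n = 0.00113  # rad/s, roughly a 92-minute orbit
v0 = vbar_hop_v0(np.array([0.0, -200.0]), np.array([0.0, -100.0]),
                 n, 0.5 * 2 * np.pi / n)
```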

Real-time Humanoid Robot Trajectory Estimation and Navigation with Stereo Vision

  • Park, Ji-Hwan; Jo, Sung-Ho
    • Journal of KIISE: Software and Applications / v.37 no.8 / pp.641-646 / 2010
  • This paper presents algorithms for real-time navigation of a humanoid robot using stereo vision and no other sensors. With these algorithms, a robot can recognize its 3D environment by retrieving SIFT features from images, estimate its position through a Kalman filter, and plan a path to its destination that avoids obstacles. Our approach focuses on estimating the robot's central walking-path trajectory rather than its actual walking motion, using an approximate model. This strategy makes it possible to apply mobile-robot localization approaches to humanoid localization. Simple collision-free path planning and motion control enable autonomous robot navigation. Experimental results demonstrate the feasibility of our approach.
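
The abstract mentions Kalman-filter position estimation over an approximate walking-path model; here is a minimal sketch with a constant-velocity planar state and vision-derived position fixes (all matrices and noise levels are assumptions, not the paper's tuning):

```python
import numpy as np

def kf_step(x, P, z, dt=0.1, q=0.05, r=0.04):
    """One Kalman predict/update for state [px, py, vx, vy].

    z: (2,) position fix, e.g. triangulated from matched SIFT landmarks.
    """
    F = np.eye(4); F[0, 2] = F[1, 3] = dt   # constant-velocity model
    H = np.eye(2, 4)                        # we observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the vision fix.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```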

A Vision Based Guideline Interpretation Technique for AGV Navigation

  • Byun, Sungmin; Kim, Minhwan
    • Journal of Korea Multimedia Society / v.15 no.11 / pp.1319-1329 / 2012
  • AGVs are increasingly utilized, and magnetically guided AGVs are the most widely used because of their low cost and high speed. However, this type of AGV requires a high infrastructure-building cost and offers poor flexibility when the navigation path layout changes, so it is hard to apply to small-quantity batch production or to cooperative production systems with many AGVs. In this paper, we propose a vision-based guideline interpretation technique that uses cheap, easily installed and changed color tapes (or paint) as a guideline, making a vision-based AGV readily applicable to such production systems. For easy setup and modification of the AGV navigation path, we suggest an automatic method for interpreting complex guideline layouts that include multiple branches and joins of branches. We also suggest a trace-direction decision method for stable AGV navigation. Real-time navigation tests with an industrial AGV equipped with the suggested technique confirmed that it is practically and stably applicable in real industrial fields.
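
As an illustration of the cheap color-tape idea, a sketch that thresholds a camera frame for tape-colored pixels and returns a lateral steering offset; it assumes OpenCV, a yellow tape, and hand-picked HSV bounds, and the paper's branch/join interpretation is considerably more involved than this:

```python
import cv2
import numpy as np

def guideline_offset(bgr_frame, hsv_lo=(20, 80, 80), hsv_hi=(35, 255, 255)):
    """Return the guideline's lateral offset (px) from the image center.

    Thresholds assume a yellow tape; tune for the tape/paint actually used.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Look only at the bottom band of the image, nearest the vehicle.
    band = mask[int(mask.shape[0] * 0.8):, :]
    m = cv2.moments(band, binaryImage=True)
    if m["m00"] == 0:
        return None                      # guideline lost
    cx = m["m10"] / m["m00"]             # centroid column of the tape pixels
    return cx - band.shape[1] / 2.0      # positive: tape is to the right
```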

Vision-based Navigation using Semantically Segmented Aerial Images

  • Hong, Kyungwoo; Kim, Sungjoong; Park, Junwoo; Bang, Hyochoong; Heo, Junhoe; Kim, Jin-Won; Pak, Chang-Ho; Seo, Songwon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.10 / pp.783-789 / 2020
  • This paper proposes a new method for vision-based navigation using semantically segmented aerial images. Vision-based navigation can compensate for the vulnerability of a GPS/INS integrated navigation system. However, due to visual and temporal differences between the aerial image and the database image, existing image-matching algorithms are difficult to apply to aerial navigation problems. For this reason, this paper proposes a matching method suited to flight, composed of navigational feature extraction through semantic segmentation followed by template matching. The proposed method shows excellent performance in both simulation and actual flight.
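
A sketch of the matching stage only, under the assumption that both the onboard image and the geo-referenced database image have already been reduced to per-pixel class labels and share scale and orientation (alignment the full system must provide):

```python
import cv2
import numpy as np

def locate_in_map(seg_query, seg_map):
    """Find the query segmentation's best position in the database map.

    seg_query/seg_map: uint8 class-label images (e.g. road=1, building=2).
    Matching raw labels is the simplest choice; per-class one-hot channels
    would avoid treating label indices as magnitudes.
    """
    res = cv2.matchTemplate(seg_map.astype(np.float32),
                            seg_query.astype(np.float32),
                            cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    h, w = seg_query.shape
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    return center, score   # estimated position in map pixels, match quality
```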

A Study on Obstacle Detection for Mobile Robot Navigation

  • Yun, Ji-Ho; Woo, Dong-Min
    • Proceedings of the KIEE Conference / 1995.11a / pp.587-589 / 1995
  • Safe navigation of a mobile robot requires recognition of the environment through vision processing. To be guided along a given path, the robot must know where walls and corridors are located, and unexpected obstacles should be detected as rapidly as possible for safe avoidance. In this paper, we assume that the mobile robot navigates on a flat surface. This assumption simplifies the correspondence problem: features are projected onto the free navigation surface and matched in that coordinate system. The vision processing system adopts edge line segments as features. Line segments extracted from both images are matched on the free navigation surface; according to the matching result, each line segment is labeled as belonging to an obstacle or to the free surface, and the 3D shape of the obstacle is interpreted. The proposed vision processing method is verified through various simulations and experiments using real images.
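
The flat-surface assumption can be made concrete as follows: points that truly lie on the ground project to the same ground coordinates from both cameras, while points with height above the ground do not. Below is a sketch of that consistency test on matched points; the paper matches edge line segments rather than points, the homographies would come from calibration, and the tolerance is an assumption:

```python
import numpy as np

def to_ground(H_img2ground, pts_px):
    """Project pixel points to ground-plane coordinates via a homography."""
    pts = np.hstack([pts_px, np.ones((len(pts_px), 1))])
    g = (H_img2ground @ pts.T).T
    return g[:, :2] / g[:, 2:3]

def label_points(H_left, H_right, pts_left, pts_right, tol=0.05):
    """Flat-world test: matched points whose ground projections agree lie
    on the free surface; large disagreement flags an obstacle point."""
    d = np.linalg.norm(to_ground(H_left, pts_left) -
                       to_ground(H_right, pts_right), axis=1)
    return np.where(d < tol, "free", "obstacle")
```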

Scene Recognition based Autonomous Robot Navigation robust to Dynamic Environments

  • Kim, Jung-Ho; Kweon, In-So
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.245-254 / 2008
  • Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, most of these methods focus on finding the database image that corresponds to a query image, so if the environment changes, for example when objects move, the robot is unlikely to find consistent corresponding points in any database image. To solve this, we propose a novel navigation strategy that combines fast motion estimation with a practical scene recognition scheme that prepares for the kidnapping problem, defined as re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. The algorithm uses camera-based motion estimation to plan the robot's next movement and an efficient outlier rejection algorithm for scene recognition. Experimental results demonstrate the capability of vision-based autonomous navigation in dynamic environments.
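
The abstract's "efficient outlier rejection" is not detailed here; a common geometric-consistency sketch is to run RANSAC over the candidate correspondences and accept the scene only if enough inliers survive (OpenCV's homography estimator; the threshold and accept/reject use are assumptions):

```python
import cv2
import numpy as np

def scene_match_inliers(pts_query, pts_db, thresh_px=3.0):
    """RANSAC outlier rejection for candidate scene correspondences.

    pts_query/pts_db: (N, 2) float32 arrays of matched keypoint positions.
    Returns the inlier count; a low count rejects the database scene,
    which helps when moving objects contaminate the matches.
    """
    if len(pts_query) < 4:
        return 0
    H, inlier_mask = cv2.findHomography(pts_query, pts_db,
                                        cv2.RANSAC, thresh_px)
    return 0 if inlier_mask is None else int(inlier_mask.sum())
```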

Vehicular Cooperative Navigation Based on H-SPAWN Using GNSS, Vision, and Radar Sensors

  • Ko, Hyunwoo; Kong, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.11 / pp.2252-2260 / 2015
  • In this paper, we propose a vehicular cooperative navigation system using the GNSS, vision, and radar sensors that are commonly found in mass-produced cars. The proposed system is a variant of the Hybrid Sum-Product Algorithm over Wireless Networks (H-SPAWN) in which vision and radar sensors replace radio ranging (i.e., UWB). Performance is compared and analyzed with respect to the sensors; in particular, the position estimation error decreased by about fifty percent when using radar rather than vision or radio ranging. In conclusion, the proposed system with these popular sensors improves position accuracy over the conventional cooperative navigation system (H-SPAWN) while decreasing implementation costs.
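
H-SPAWN itself is a message-passing estimator and is not reproduced here; as a heavily simplified sketch of the underlying idea, an EKF-style update of one vehicle's position with a radar range to a cooperating neighbor (the 2D state, noise value, and assumption that the neighbor broadcasts its own position are all illustrative):

```python
import numpy as np

def range_update(x, P, neighbor_pos, z_range, sigma_r=0.5):
    """EKF update of own 2D position with a radar range to a neighbor.

    x: (2,) own position estimate, P: (2, 2) covariance,
    neighbor_pos: neighbor's broadcast position, z_range: measured range (m).
    """
    d = x - neighbor_pos
    pred = max(np.linalg.norm(d), 1e-6)   # predicted range, guarded
    H = (d / pred).reshape(1, 2)          # Jacobian of range w.r.t. position
    S = H @ P @ H.T + sigma_r**2
    K = P @ H.T / S
    x = x + (K * (z_range - pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```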

VFH+ based Obstacle Avoidance using Monocular Vision of Unmanned Surface Vehicle

  • Kim, Taejin; Choi, Jinwoo; Lee, Yeongjun; Choi, Hyun-Taek
    • Journal of Ocean Engineering and Technology / v.30 no.5 / pp.426-430 / 2016
  • Recently, many unmanned surface vehicles (USVs) have been developed and researched for fields such as the military, the environment, and robotics. To perform purpose-specific tasks, common autonomous navigation technologies are needed, and obstacle avoidance is essential for safe autonomous navigation. This paper describes a vector field histogram+ (VFH+) based obstacle avoidance method that uses the monocular vision of a USV. After building a polar histogram with VFH+, an open sector of the histogram is selected as the moving direction. Instead of distance sensor data, monocular vision data are used to build the polar histogram encoding obstacle information; since the method targets USVs, any object on the water is treated as an obstacle. Simulation results with sea images verified that the moving direction changes according to the positions of objects.
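
A simplified sketch of the VFH+-style selection described, assuming obstacle bearings have already been extracted from the monocular image and using a binary blocked/open histogram; real VFH+ also weights sectors by obstacle distance and applies hysteresis:

```python
import numpy as np

def pick_heading(obstacle_bearings_deg, goal_deg, n_sectors=72,
                 inflation_deg=10.0):
    """VFH+-style heading selection from monocular obstacle bearings.

    Bearings in degrees, 0 = straight ahead, positive to starboard.
    Each detected floating object blocks sectors within inflation_deg.
    """
    centers = np.arange(n_sectors) * 360.0 / n_sectors   # sector centers
    blocked = np.zeros(n_sectors, dtype=bool)
    for b in obstacle_bearings_deg:
        diff = (centers - b + 180.0) % 360.0 - 180.0     # wrapped offset
        blocked |= np.abs(diff) < inflation_deg
    if blocked.all():
        return None                                      # no open sector
    open_centers = centers[~blocked]
    # Choose the open sector closest to the goal direction.
    diff = (open_centers - goal_deg + 180.0) % 360.0 - 180.0
    return float(open_centers[np.argmin(np.abs(diff))])

# Toy usage: objects sighted at 5 and -20 deg, goal straight ahead.
print(pick_heading([5.0, -20.0], 0.0))  # steers to a nearby open sector
```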