• Title/Abstract/Keyword: Vision-based Guidance

35 search results (processing time: 0.025 s)

Light Source Target Detection Algorithm for Vision-based UAV Recovery

  • Won, Dae-Yeon;Tahk, Min-Jea;Roh, Eun-Jung;Shin, Sung-Sik
    • International Journal of Aeronautical and Space Sciences / Vol. 9, No. 2 / pp.114-120 / 2008
  • In the vision-based recovery phase, terminal guidance of a blended-wing UAV requires highly accurate visual information. This paper presents a light-source target design and detection algorithm for vision-based UAV recovery. We propose a recovery target design with red and green LEDs; this frame provides the relative position between the target and the UAV. The target detection algorithm comprises HSV-based segmentation, morphology, and blob processing. These techniques are employed to give efficient detection results in both day and night net-recovery operations. The performance of the proposed target design and detection algorithm is evaluated through ground-based experiments.
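The detection chain described above (HSV-based segmentation followed by blob processing) can be sketched in plain Python. The hue/saturation/value thresholds and the tiny synthetic frame below are illustrative assumptions, not values from the paper:

```python
import colorsys

def is_red_led(rgb, h_tol=0.05, s_min=0.5, v_min=0.5):
    """HSV-based segmentation: keep bright, saturated pixels near red hue.
    Thresholds are illustrative; red hue wraps around 0/1 in HSV."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    near_red = h <= h_tol or h >= 1.0 - h_tol
    return near_red and s >= s_min and v >= v_min

def blob_centroid(mask):
    """Blob processing (single-blob case): centroid of all foreground pixels,
    usable as the target's position in the image."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n) if n else None

# Tiny synthetic frame: a red LED column at x=1, dark background elsewhere.
frame = [
    [(10, 10, 10), (255, 30, 30), (10, 10, 10)],
    [(10, 10, 10), (250, 20, 25), (10, 10, 10)],
    [(10, 10, 10), (10, 10, 10), (10, 10, 10)],
]
mask = [[is_red_led(px) for px in row] for row in frame]
centroid = blob_centroid(mask)  # sub-pixel centroid of the detected blob
```

In a real pipeline, the morphology step (opening/closing) would clean the mask between segmentation and blob extraction; it is omitted here for brevity.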

탑-뷰 변환과 빔-레이 모델을 이용한 영상기반 보행 안내 시스템 (Vision-based Walking Guidance System Using Top-view Transform and Beam-ray Model)

  • 림청;한영준;한헌수
    • 한국컴퓨터정보학회논문지 (Journal of the Korea Society of Computer and Information) / Vol. 16, No. 12 / pp.93-102 / 2011
  • This paper proposes a walking guidance system for the visually impaired that uses a single camera in outdoor environments. Unlike existing walking assistance systems based on stereo vision, the proposed system aims to obtain only the essential information using a single camera fixed at the user's waist. The system first generates a top-view image and detects local corner extreme points within it. Obstacles are then detected by analyzing a radial histogram at each detected extreme point. User motion is estimated using optical flow within the region close to the user. Based on the information extracted from the images, a voice-message generation module delivers walking directions to the visually impaired user through synthesized speech. Experiments on various test videos show that the proposed walking guidance system can provide useful guidance on ordinary sidewalks.
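The two core geometric steps above, the top-view transform and radial (beam-ray) obstacle probing, can be sketched in plain Python. The homography values, grid, and beam parameters are illustrative assumptions, not the paper's calibration:

```python
import math

def apply_homography(H, x, y):
    """Top-view transform: map an image pixel (x, y) through a 3x3 homography H
    (H would come from camera calibration in practice)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def beam_free_distance(occupied, origin, angle, max_r=10.0, step=1.0):
    """Beam-ray model: march along one beam from the origin and return the
    free distance until an occupied (obstacle) cell is hit."""
    ox, oy = origin
    r = 0.0
    while r < max_r:
        cell = (round(ox + r * math.cos(angle)), round(oy - r * math.sin(angle)))
        if cell in occupied:
            return r
        r += step
    return max_r
```

Sweeping `angle` over 0..180 degrees yields the radial histogram the paper analyzes at each candidate extreme point.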

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • 대한원격탐사학회 (Korean Society of Remote Sensing), Proceedings of ISRS 2007 / pp.508-511 / 2007
  • The car navigation system is one of the most important applications in telematics. The newest trend in car navigation is to use real video captured by a camera mounted on the vehicle, because video can overcome the semantic gap between a map and the real world. In this paper, we suggest a visual car navigation system that visually represents navigation information for route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the visual car navigation system are graphical turn guidance and lane change guidance. We suggest a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented-reality display functions. The core part of the system is the visual navigation controller, which controls the other modules and dynamically determines the visual representation method of the navigation information according to a determination rule based on the current location and driving circumstances. We briefly show the implementation of the system.
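The determination rule of the visual navigation controller could take a form like the following hypothetical sketch; the thresholds, inputs, and overlay labels are invented for illustration and are not from the paper:

```python
def choose_guidance_overlay(dist_to_turn_m, lane_change_needed, lane_visible):
    """Hypothetical determination rule: pick which overlay to render on the
    live video based on current location and driving circumstances."""
    if lane_change_needed and lane_visible and dist_to_turn_m > 150:
        return "lane-change arrow"      # guide the lane change early
    if dist_to_turn_m <= 150:
        return "graphical turn arrow"   # turn guidance near the junction
    return "route line"                 # default: just show the route
```

The point of such a rule is that the overlay type is re-evaluated every frame as location and circumstances change, rather than being fixed per route segment.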


곡선모델 차선검출 기반의 GPS 횡방향 오차보정 성능향상 기법 (Curve-Modeled Lane Detection based GPS Lateral Error Correction Enhancement)

  • 이병현;임성혁;허문범;지규인
    • 제어로봇시스템학회논문지 (Journal of Institute of Control, Robotics and Systems) / Vol. 21, No. 2 / pp.81-86 / 2015
  • GPS position errors are corrected for the guidance of autonomous vehicles. From vision, we can obtain the lateral distance from the center of the lane and the angle difference between the detected left and right lines. Using a controller that drives these two measurements to zero, a lane-following system can be implemented easily. The problem, however, is that where there is no lane marking, such as at a crossroad, the guidance system of the autonomous vehicle does not work. In addition, lane detection has problems in curved areas: the lateral distance measurement contains an error caused by a model mismatch. For these reasons, we propose a GPS error correction filter based on curve-modeled lane detection and evaluate its performance by applying it to an autonomous vehicle at the test site.

무인 항공기의 목표물 추적을 위한 영상 기반 목표물 위치 추정 (Vision Based Estimation of 3-D Position of Target for Target Following Guidance/Control of UAV)

  • 김종훈;이대우;조겸래;조선영;김정호;한동인
    • 제어로봇시스템학회논문지 (Journal of Institute of Control, Robotics and Systems) / Vol. 14, No. 12 / pp.1205-1211 / 2008
  • This paper describes methods to estimate the 3-D position of a target, with respect to a reference frame, from monocular images taken by an unmanned aerial vehicle (UAV). The 3-D position of the target is used as information for surveillance, recognition, and attack. In this paper, the 3-D position of the target is estimated in order to build a guidance and control law that can follow a user-designated target. To solve for the target's 3-D position, its position must first be measured in the image; a Kalman filter is used to track and output the position of the target in the image. The target's 3-D position can then be estimated using the image-tracking result together with information about the UAV and camera. Two algorithms are used for this estimation: one is an arithmetic derivation of the dynamics between the UAV, camera, and target; the other is based on LPV (Linear Parameter-Varying) methods. Both methods are run in simulation and compared in this paper.
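The geometric core of estimating a target's 3-D position from UAV and camera information can be sketched with a simplified flat-ground pinhole model; this is an assumed geometry for illustration, not the paper's arithmetic or LPV derivation:

```python
import math

def target_ground_position(uav_pos, depression_rad, azimuth_rad):
    """Intersect the camera line of sight with flat ground (z = 0).
    uav_pos is (x, y, altitude); the angles combine the gimbal attitude and the
    pixel offset of the tracked target. Simplified illustrative geometry."""
    x, y, z = uav_pos
    ground_range = z / math.tan(depression_rad)  # horizontal distance to target
    return (x + ground_range * math.cos(azimuth_rad),
            y + ground_range * math.sin(azimuth_rad))
```

With the target's image position supplied each frame by the Kalman-filter tracker, repeating this intersection yields the target trajectory the guidance law follows.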

영상 내 사람의 검출을 위한 에지 기반 방법 (Edge-based Method for Human Detection in an Image)

  • 도용태;반종희
    • 센서학회지 (Journal of Sensor Science and Technology) / Vol. 25, No. 4 / pp.285-290 / 2016
  • Human sensing is an important but challenging technology. Compared with other methods for sensing humans, a vision sensor has many advantages, and there has been active research on automatic human detection in camera images. The combination of Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) is currently one of the most successful approaches in vision-based human detection. However, extracting HOG features from an image is computationally intensive, and it is thus hard to employ the HOG method in real-time processing applications. This paper describes an efficient solution to this speed problem of the HOG method. Our method obtains the edge information of an image and finds candidate regions where humans are very likely to exist, based on the distribution pattern of the detected edge points. The HOG features are then extracted only from the candidate image regions. Since the complex HOG processing is applied adaptively under the guidance of the simpler edge-detection step, human detection can be performed quickly. Experimental results show that the proposed method is effective on various images.
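The candidate-region idea above can be sketched as a sliding window over a binary edge map that keeps only windows with enough edge points; the window size and threshold are illustrative assumptions, not the paper's distribution-pattern criterion:

```python
def edge_candidate_windows(edges, win=3, min_edges=4):
    """Return top-left corners (x, y) of windows whose edge-point count is high
    enough to plausibly contain a person; the expensive HOG+SVM stage would
    then run only on these candidate regions."""
    h, w = len(edges), len(edges[0])
    out = []
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            n = sum(edges[y + dy][x + dx]
                    for dy in range(win) for dx in range(win))
            if n >= min_edges:
                out.append((x, y))
    return out
```

Counting edge points per window is far cheaper than computing HOG features, which is the source of the speed-up: most of the image is rejected before HOG ever runs.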

Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 1 / pp.51-60 / 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) used in the monitoring of structures and the maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is navigation from an initial position to a final position, in order to define a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that allows effective tracking and depth estimation. The depth represents the desired distance separating the camera from the target.
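To give a flavor of adaptive depth estimation, here is a deliberately simplified update that blends the current estimate toward the depth implied by the target's apparent size under a pinhole model (d = f·S/s). This is an illustrative stand-in, not the paper's backstepping-derived adaptation law:

```python
def update_depth_estimate(d_hat, apparent_size_px, focal_px, target_size_m,
                          gain=0.3):
    """One step of a toy adaptive depth estimator: move d_hat a fraction of the
    way toward the pinhole-implied depth. All parameters are assumed values."""
    d_meas = focal_px * target_size_m / apparent_size_px
    return d_hat + gain * (d_meas - d_hat)
```

Iterating this update converges geometrically to the measured depth; the paper's contribution is obtaining a comparable convergence guarantee inside the closed-loop tracking controller rather than as a separate filter.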

스테레오 비전센서를 이용한 선행차량 감지 시스템의 개발 (Development of a Vision Sensor-based Vehicle Detection System)

  • 황준연;홍대건;허건수
    • 한국자동차공학회논문집 (Transactions of the Korean Society of Automotive Engineers) / Vol. 16, No. 6 / pp.134-140 / 2008
  • Preceding-vehicle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision. Vision-based preceding-vehicle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, their feasibility in passenger cars requires accurate and robust sensing performance. In this paper, a preceding-vehicle detection system is developed using stereo vision sensors. The system utilizes feature matching, the epipolar constraint, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the preceding vehicles, including the leading vehicle, from which the position parameters of the preceding and leading vehicles can be obtained. The proposed detection system is implemented on a passenger car, and its performance is verified experimentally.
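For rectified stereo, the epipolar constraint mentioned above reduces to "a valid match lies on (nearly) the same image row", and a matched pair's disparity gives range via Z = f·B/d. A minimal sketch, with the matching criterion and camera numbers as assumptions:

```python
def epipolar_match(left_pt, right_pts, row_tol=1):
    """Rectified-stereo epipolar constraint: keep right-image candidates on
    (nearly) the same row with positive disparity, then take the candidate with
    the smallest disparity (farthest plausible match). Criterion is illustrative."""
    lx, ly = left_pt
    cands = [p for p in right_pts if abs(p[1] - ly) <= row_tol and p[0] < lx]
    return min(cands, key=lambda p: lx - p[0], default=None)

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo range from a matched pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```

Feature aggregation would then group nearby consistent matches into one vehicle hypothesis before the tracking stage takes over.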

무인항공기의 자동 착륙을 위한 LSM 및 CPA를 활용한 영상 기반 장애물 상태 추정 및 충돌 예측 (Vision-based Obstacle State Estimation and Collision Prediction using LSM and CPA for UAV Autonomous Landing)

  • 이성봉;박천만;김혜지;이동진
    • 한국항행학회논문지 (Journal of Advanced Navigation Technology) / Vol. 25, No. 6 / pp.485-492 / 2021
  • Vision-based automatic precision landing of a UAV requires precise position estimation of the landing site and landing-guidance technology. For a safe landing, the system must also judge the safety of the landing site with respect to ground obstacles and guide the landing only when safety is assured. This paper proposes vision-based navigation, together with an algorithm for judging the safety of the landing site, for automatic precision landing. For vision-based navigation, a CNN technique is used to detect the landing pad, and the detection information is used to derive an integrated navigation solution. A Kalman filter is also designed and applied to improve the position estimation performance. To judge the safety of the landing site, obstacle detection and position estimation are performed in the same manner, and the obstacle's velocity is estimated using the LSM (least squares method). Collision with the obstacle is then predicted based on the CPA (closest point of approach) computed from the estimated obstacle state. Finally, the proposed algorithm is verified through flight experiments.
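The two estimation steps above, an LSM velocity fit and a CPA-based collision check, can be sketched directly; the constant-velocity assumption and the sample data are illustrative:

```python
import math

def lsm_velocity(times, positions):
    """LSM velocity estimate: least-squares slope of position vs. time
    along one axis, assuming constant obstacle velocity."""
    n = len(times)
    tm, pm = sum(times) / n, sum(positions) / n
    num = sum((t - tm) * (p - pm) for t, p in zip(times, positions))
    den = sum((t - tm) ** 2 for t in times)
    return num / den

def cpa(rel_pos, rel_vel):
    """Closest point of approach under constant relative velocity:
    t* = max(0, -(p.v)/(v.v)); returns (t*, miss distance at t*).
    A small miss distance at a future t* indicates a predicted collision."""
    vv = sum(v * v for v in rel_vel)
    if vv == 0.0:
        return 0.0, math.hypot(*rel_pos)
    t_star = max(0.0, -sum(p * v for p, v in zip(rel_pos, rel_vel)) / vv)
    closest = [p + v * t_star for p, v in zip(rel_pos, rel_vel)]
    return t_star, math.hypot(*closest)
```

Running `lsm_velocity` per axis on the vision-estimated obstacle track yields the velocity vector that `cpa` consumes; a collision is declared when the miss distance falls below a safety radius.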

3-D position estimation for eye-in-hand robot vision

  • Jang, Won;Kim, Kyung-Jin;Chung, Myung-Jin;Bien, Zeungnam
    • 제어로봇시스템학회 학술대회논문집 (Conference Proceedings) / 1988 Korea Automatic Control Conference (International Session); KEPCO Training Center, Seoul; 21-22 Oct. 1988 / pp.832-836 / 1988
  • "Motion Stereo" is quite useful for visual guidance of the robot, but most range finding algorithms of motion stereo have suffered from poor accuracy due to the quantization noise and measurement error. In this paper, 3-D position estimation and refinement scheme is proposed, and its performance is discussed. The main concept of the approach is to consider the entire frame sequence at the same time rather than to consider the sequence as a pair of images. The experiments using real images have been performed under following conditions : hand-held camera, static object. The result demonstrate that the proposed nonlinear least-square estimation scheme provides reliable and fairly accurate 3-D position information for vision-based position control of robot. of robot.
