• Title/Summary/Keyword: Active Vision

Development of a Lane Departure Avoidance System using Vision Sensor and Active Steering Control (비전 센서 및 능동 조향 제어를 이용한 차선 이탈 방지 시스템 개발)

  • 허건수;박범찬;홍대건
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.11 no.6
    • /
    • pp.222-228
    • /
    • 2003
  • Lane departure avoidance is one of the key technologies for future active-safety passenger cars. The lane departure avoidance system is composed of two subsystems: a lane-sensing algorithm and an active-steering controller. In this paper, the road image is obtained by a vision sensor, and the lane parameters are estimated using image processing and a Kalman filter. The active-steering controller is designed to prevent lane departure and can be realized with a steer-by-wire actuator. The lane-sensing algorithm and active-steering controller are implemented in a steering HILS (Hardware-In-the-Loop Simulation) environment, and their performance is evaluated with a human driver in the loop.
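
The estimation step this abstract describes — lane parameters tracked with a Kalman filter over per-frame vision measurements — can be sketched roughly as follows. This is an illustrative toy, not the paper's model: the two-state lane model (lateral offset and its rate), sampling period, and noise levels are all assumed values.

```python
import numpy as np

dt = 0.1                                # assumed vision frame period [s]
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-rate state transition
H = np.array([[1.0, 0.0]])              # vision measures lateral offset only
Q = np.eye(2) * 1e-3                    # assumed process noise
R = np.array([[0.05]])                  # assumed measurement noise

def kalman_step(x, P, z):
    # predict lane state forward one frame
    x = F @ x
    P = F @ P @ F.T + Q
    # correct with the lane-offset measurement z from the image
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x_est, P = np.zeros((2, 1)), np.eye(2)
true_offset = 0.5                       # simulated constant lateral offset [m]
for _ in range(100):
    z = np.array([[true_offset + rng.normal(0.0, 0.05)]])
    x_est, P = kalman_step(x_est, P, z)
```

After a hundred noisy frames the filtered offset settles near the true value, which is the property a lane-keeping controller relies on.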

An active stereo camera modeling (동적 스테레오 카메라 모델링)

  • Do, Kyoung-Mihn;Lee, Kwae-Hi
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.3 no.3
    • /
    • pp.297-304
    • /
    • 1997
  • In stereo vision, camera modeling is very important because the accuracy of the computed three-dimensional locations depends considerably on it. In existing stereo camera models, the two image planes lie in a common plane or share an optical axis; such models cannot be used in an active vision system, where the two stereo images must be obtained simultaneously. In this paper, we propose four stereo camera models for an active stereo vision system in which the focal lengths of the two cameras differ and each camera can rotate independently. A single closed-form solution is obtained for all models. The influence of the stereo camera model on the field of view, occlusion, and the search area used for matching is shown, and errors due to inaccurate focal length are analyzed with simulation results. By applying the proposed models to an active stereo vision system, such as one on a mobile robot, the three-dimensional locations of objects are expected to be determined in real time.
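
The geometry described here — two cameras with different focal lengths, each verged independently — can be illustrated with a small ray-intersection triangulation. This is a generic sketch under assumed values (baseline, vergence angles, focal lengths), not one of the paper's four closed-form models:

```python
import numpy as np

def rot_y(theta):
    # rotation about the vertical axis (camera vergence)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def triangulate(x_l, x_r, f_l, f_r, th_l, th_r, baseline):
    # intersect the two viewing rays (midpoint of the common perpendicular)
    c_l = np.array([-baseline / 2.0, 0.0, 0.0])
    c_r = np.array([baseline / 2.0, 0.0, 0.0])
    d_l = rot_y(th_l) @ np.array([x_l, 0.0, f_l])   # left ray direction
    d_r = rot_y(th_r) @ np.array([x_r, 0.0, f_r])   # right ray direction
    A = np.stack([d_l, -d_r], axis=1)               # solve for ray parameters
    t, s = np.linalg.lstsq(A, c_r - c_l, rcond=None)[0]
    return 0.5 * ((c_l + t * d_l) + (c_r + s * d_r))

# assumed setup: unequal focal lengths, cameras verged inward by 5 degrees
f_l, f_r = 1.0, 1.2
th_l, th_r = np.deg2rad(5.0), np.deg2rad(-5.0)
baseline = 0.2
target = np.array([0.3, 0.0, 2.0])

def project(p, f, th, c):
    q = rot_y(th).T @ (p - c)   # point in the verged camera's frame
    return f * q[0] / q[2]      # 1-D image coordinate

x_l = project(target, f_l, th_l, np.array([-baseline / 2, 0.0, 0.0]))
x_r = project(target, f_r, th_r, np.array([baseline / 2, 0.0, 0.0]))
recovered = triangulate(x_l, x_r, f_l, f_r, th_l, th_r, baseline)
```

Projecting a known point into each rotated camera and triangulating it back recovers the original 3-D location, which is what a model of this kind must guarantee.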

Implementation of a Stereo Vision Using Saliency Map Method

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Lee, Min-Ho
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.36 no.5
    • /
    • pp.674-682
    • /
    • 2012
  • A new intelligent stereo vision sensor system was studied for the motion and depth control of unmanned vehicles. A new bottom-up saliency-map model for a human-like active stereo vision system, based on the biological visual process, was developed to select a target object. If the left and right cameras successfully find the same target object, the implemented active vision system focuses both cameras on the landmark and can detect its depth and direction. Using this information, the unmanned vehicle can approach the target autonomously. A number of tests of the proposed bottom-up saliency map were performed, and their results are presented.
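
A minimal sketch of the bottom-up saliency idea the abstract relies on — a conspicuous region stands out from its surround — is a single center-surround contrast channel. The paper's biologically inspired model is richer (multiple feature channels and scales); the box-filter surround and image below are assumptions for illustration only:

```python
import numpy as np

def box_blur(img, k):
    # k x k box filter (k odd) via 2-D cumulative sums, edge-padded
    pad = k // 2
    p = np.pad(img, pad, mode="edge").astype(float)
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def saliency_map(img, surround=9):
    # center-surround contrast: |fine - blurred|, normalized to [0, 1]
    sal = np.abs(img - box_blur(img, surround))
    span = sal.max() - sal.min()
    return (sal - sal.min()) / span if span > 0 else sal

img = np.zeros((50, 50))
img[20:23, 30:33] = 1.0   # small bright patch = conspicuous target
sal = saliency_map(img)
target_rc = np.unravel_index(np.argmax(sal), sal.shape)
```

The saliency maximum lands on the bright patch, which is the location both cameras would be steered toward.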

Study on the Target Tracking of a Mobile Robot Using Active Stereo-Vision System (능동 스테레오 비젼을 시스템을 이용한 자율이동로봇의 목표물 추적에 관한 연구)

  • 이희명;이수희;이병룡;양순용;안경관
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.915-919
    • /
    • 2003
  • This paper presents a fuzzy-motion-control based tracking algorithm for mobile robots that uses geometric information derived from the active stereo-vision system mounted on the robot. The active stereo-vision system consists of two color cameras that rotate in two angular dimensions. With the stereo-vision system, the center position and depth of the target object can be calculated. The proposed fuzzy motion controller computes the tracking velocity and angular position of the mobile robot so that it keeps following the object at a constant distance and orientation.
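
One channel of such a fuzzy controller — distance error in, forward speed out — can be sketched with triangular memberships and singleton outputs. The membership shapes, universe of discourse, and speed values are assumptions, not the paper's rule base:

```python
def tri(x, a, b, c):
    # triangular membership function with feet a, c and peak b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(dist_err):
    # map distance error (actual - desired, in m) to a forward speed [m/s]
    e = max(-1.5, min(1.5, dist_err))    # clamp to the universe of discourse
    mu_close = tri(e, -2.0, -1.0, 0.0)   # too close  -> back up
    mu_ok = tri(e, -1.0, 0.0, 1.0)       # at range   -> hold
    mu_far = tri(e, 0.0, 1.0, 2.0)       # too far    -> speed up
    # weighted-average (singleton) defuzzification
    num = -0.5 * mu_close + 0.0 * mu_ok + 0.5 * mu_far
    den = mu_close + mu_ok + mu_far
    return num / den if den > 0 else 0.0
```

The output varies smoothly and monotonically with the error, which is what lets the robot hold a constant following distance without bang-bang behavior. A second, analogous channel would map the target's bearing to an angular command.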

A binocular robot vision system with quadrangle recognition

  • Yabuta, Yoshito;Mizumoto, Hiroshi;Arii, Shiro
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.80-83
    • /
    • 2005
  • A binocular robot vision system with an autonomously moving active viewpoint is proposed. Using this active viewpoint, the system establishes correspondences between the images of feature points on the right and left retinas and calculates the spatial coordinates of the feature points. The system detects straight lines in an image using the Hough transform, searches for a region surrounded by four straight lines, and recognizes that region as a quadrangle. It then establishes a correspondence between the quadrangles in the right and left images and, from this correspondence, calculates the spatial coordinates of the object. An experiment demonstrates the line detection using the Hough transform, the recognition of the object's surface, and the calculation of the object's spatial coordinates.
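
The line-detection step can be illustrated with a small standalone Hough transform: each edge pixel votes for every (rho, theta) line passing through it, and peaks in the accumulator are the detected lines. The image size and quantization below are assumptions for illustration:

```python
import numpy as np

def hough_lines(edges, n_theta=180, top_k=1):
    # vote edge pixels into a (rho, theta) accumulator; a detected line
    # satisfies x*cos(theta) + y*sin(theta) = rho
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    order = np.argsort(acc.ravel())[::-1][:top_k]
    return [(int(i // n_theta) - diag, float(thetas[i % n_theta]))
            for i in order]

edges = np.zeros((40, 40), dtype=bool)
edges[10, :] = True          # synthetic horizontal edge at y = 10
rho, theta = hough_lines(edges)[0]
```

The strongest peak recovers the horizontal line (theta = pi/2, rho = 10). Finding a quadrangle then amounts to taking the top four peaks and intersecting consecutive lines to get corner candidates.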

Real-time Omni-directional Distance Measurement with Active Panoramic Vision

  • Yi, Soo-Yeong;Choi, Byoung-Wook;Ahuja, Narendra
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.2
    • /
    • pp.184-191
    • /
    • 2007
  • Autonomous navigation of a mobile robot requires a ranging system that measures the distance to environmental objects; wider and faster distance measurement gives the robot more freedom in trajectory planning and control. The active omni-directional ranging system proposed in this paper obtains distances in all 360° directions in real time by combining an omni-directional mirror with structured light. Distance computation, a sensitivity analysis, and experiments on omni-directional ranging are presented to verify the effectiveness of the proposed system.
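
The ranging principle — a structured-light stripe observed through an omni-directional mirror — reduces, in an idealized sketch, to triangulating the depression angle of the stripe against the known camera/laser geometry, once per image bearing. Everything below (the constant depression angle, the heights, the bearing sampling) is an assumed toy configuration, not the paper's calibrated mirror model:

```python
import numpy as np

def stripe_distance(phi, cam_height, laser_height):
    # horizontal distance to the laser stripe seen at depression angle phi,
    # from similar triangles between the camera and the laser plane
    return (cam_height - laser_height) / np.tan(phi)

# one range sample per degree of panoramic bearing (the "all directions
# in one shot" property of the omni-directional image)
bearings = np.deg2rad(np.arange(360))
phi = np.full(360, np.pi / 4)            # assumed constant depression angle
ranges = stripe_distance(phi, cam_height=1.0, laser_height=0.5)

# local Cartesian distance map, as used for navigation
points = np.stack([ranges * np.cos(bearings),
                   ranges * np.sin(bearings)], axis=1)
```

A single image thus yields a full 360° local distance map, which is the computational-efficiency advantage the abstract emphasizes.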

Localization of Mobile Robot Using Active Omni-directional Ranging System (능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정)

  • Ryu, Ji-Hyung;Kim, Jin-Won;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.5
    • /
    • pp.483-488
    • /
    • 2008
  • An active omni-directional ranging system that combines an omni-directional camera with structured light has many advantages over conventional ranging systems: robustness against external illumination noise, thanks to the laser structured light, and computational efficiency, since a single image captures 360° of environment information. The omni-directional range data represent a local distance map at a given position in the workspace. In this paper, we propose an algorithm that matches the local distance map against a given global map database, thereby localizing a mobile robot in the global workspace. Since the global map database generally consists of line segments representing the edges of environmental objects, the matching algorithm is based on the relative position and orientation of line segments in the local and global maps. The effectiveness of the proposed omni-directional ranging system and the matching algorithm is verified through experiments.
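
The segment-matching idea can be sketched as hypothesize-and-verify: each pairing of a local segment with a global segment implies a candidate robot pose (rotation from the angle difference, translation from the midpoints), and the pose that best maps all local segments onto the global map wins. The segments and pose below are synthetic assumptions, not the paper's map database:

```python
import numpy as np

def seg_angle(seg):
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1)

def rot(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

def pose_error(R, t, local_segs, global_segs):
    # sum, over local segments, of the distance to the closest global
    # segment after mapping the local endpoints into the world frame
    err = 0.0
    for ls in local_segs:
        mapped = ls @ R.T + t
        err += min(np.linalg.norm(mapped - np.asarray(gs))
                   for gs in global_segs)
    return err

def match_pose(local_segs, global_segs):
    # hypothesize a pose from every (local, global) pairing; keep the one
    # whose mapping best explains all local segments
    best = (np.inf, 0.0, np.zeros(2))
    for ls in local_segs:
        for gs in global_segs:
            th = seg_angle(gs) - seg_angle(ls)
            R = rot(th)
            t = np.mean(gs, axis=0) - R @ np.mean(ls, axis=0)
            err = pose_error(R, t, local_segs, global_segs)
            if err < best[0]:
                best = (err, th, t)
    return best[1], best[2]

# synthetic check: two wall segments and a known robot pose
global_segs = [np.array([[0.0, 0.0], [4.0, 0.0]]),
               np.array([[0.0, 0.0], [0.0, 2.0]])]
th_true, t_true = 0.3, np.array([1.0, 2.0])
R_true = rot(th_true)
local_segs = [(gs - t_true) @ R_true for gs in global_segs]  # robot's view
th_hat, t_hat = match_pose(local_segs, global_segs)
```

With an exact local map the true pose is recovered; with real range data the same scoring simply tolerates noise in the winning hypothesis.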

MPC-based Active Steering Control using Multi-rate Kalman Filter for Autonomous Vehicle Systems with Vision (비젼 기반 자율주행을 위한 다중비율 예측기 설계와 모델예측 기반 능동조향 제어)

  • Kim, Bo-Ah;Lee, Young-Ok;Lee, Seung-Hi;Chung, Chung-Choo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.61 no.5
    • /
    • pp.735-743
    • /
    • 2012
  • In this paper, we present model predictive control (MPC) applied to a lane keeping system (LKS) based on a vision module. Owing to the slow sampling rate of the vision system, a conventional LKS using single-rate control may produce an uncomfortably high steering-control rate at high vehicle speed. By applying MPC with a multi-rate Kalman filter to active steering control, the proposed system prevents undesirable saturated steering commands. The effectiveness of the MPC is validated by simulations of an LKS equipped with a slowly sampled camera module on a curved lane with a minimum radius of 250 m at a vehicle speed of 30 m/s.
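
The multi-rate aspect — a fast control loop fed by a slow camera — can be sketched with a Kalman filter that predicts at every control tick but corrects only on ticks where a vision frame arrives. The rates, model, and noiseless measurements below are assumptions for illustration, not the paper's vehicle model:

```python
import numpy as np

dt = 0.01                                # assumed 100 Hz control tick
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity lateral model
H = np.array([[1.0, 0.0]])               # camera measures position only
Q = np.eye(2) * 1e-6
R = np.array([[1e-4]])

x, P = np.zeros((2, 1)), np.eye(2)
for k in range(200):
    # fast-rate prediction: runs every control tick
    x = F @ x
    P = F @ P @ F.T + Q
    if k % 10 == 0:
        # slow-rate correction: a camera frame arrives every 10th tick;
        # simulated target moves laterally at 1 m/s (noiseless here)
        z = np.array([[(k + 1) * dt * 1.0]])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

vel_est = float(x[1, 0])
```

Between frames the predictor supplies smooth inter-sample estimates at the control rate, which is what lets the MPC avoid the abrupt, saturating commands of a single-rate design.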

Development of Non-Contacting Automatic Inspection Technology of Precise Parts (정밀부품의 비접촉 자동검사기술 개발)

  • Lee, Woo-Sung;Han, Sung-Hyun
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.6
    • /
    • pp.110-116
    • /
    • 2007
  • This paper presents a technique for real-time recognition of the shapes and model numbers of parts based on an active vision approach. The main focus is to apply 3D object recognition to the non-contacting inspection of the shape and external condition of precision parts using pattern recognition. In the field of computer vision there have been many object recognition approaches, most of which recognize an object from a single given input image (passive vision). It is, however, hard to distinguish model objects that look similar to each other from such an image. Recently, active vision has come to be seen as a promising way to realize a robust object recognition system. The performance is illustrated by experiments on several parts and models.

Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control (능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마)

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.41 no.4
    • /
    • pp.39-43
    • /
    • 2004
  • A typical theme of active vision systems is gaze fixing of a camera, that is, steering the camera's orientation so that a given point on the object always stays at the center of the image. This requires combining a function that analyzes the image data with a function that controls the camera's orientation. This paper presents a gaze-fixing algorithm in which image analysis and orientation control are designed within a single framework. To ease implementation and target real-time applications, the algorithm is designed as a simple closed form that uses no information about camera calibration or structure estimation.
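
A calibration-free closed form of the kind the abstract describes can be illustrated, in its simplest guise, as proportional pan/tilt rates driven directly by the pixel error, no focal length required. The gain, focal length, and image-motion model below are assumptions, not the paper's control law:

```python
def gaze_rates(u, v, cx, cy, k=0.002):
    # pixel error -> pan/tilt rate commands [rad/s]; gain k is assumed,
    # and no camera calibration is used
    return -k * (u - cx), -k * (v - cy)

# closed-loop check with a small-angle image-motion model: rotating the
# camera by d_theta shifts the tracked point by roughly f_px * d_theta
f_px = 500.0        # true focal length in pixels -- unknown to the controller
cx = cy = 0.0
e_u, dt = 100.0, 0.02
for _ in range(300):
    pan_rate, _ = gaze_rates(e_u, 0.0, cx, cy)
    e_u += f_px * pan_rate * dt
```

The pixel error decays geometrically toward zero regardless of the (unknown) focal length, only the convergence speed depends on it; this robustness to calibration is precisely what makes such closed forms attractive for real-time gaze fixing.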