• Title/Summary/Keyword: Vision-based Control

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.68-73 / 2005
  • Various techniques have been proposed for detecting and tracking targets in real-world computer vision systems such as visual surveillance systems and intelligent transport systems (ITSs). In particular, a distributed vision system is needed to apply these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can segment a target exactly using color and motion information and can track multiple targets visually in real time. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent), and we solve the identity-matching problem that arises at handover between vision agents with a protocol-based approach. For this purpose we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and requires no calibration between them, which improves the speed, scalability, and modularity of the system. We applied the ICN protocol to the ubiquitous vision system we constructed, and several experiments show that the system gives reliable results and that the ICN protocol operates successfully.
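The handover step lends itself to a short sketch. The toy code below (a hypothetical message flow and a histogram-intersection appearance score; the published ICN message set is not reproduced here) shows the contract-net pattern the abstract describes: a manager agent announces a departing target, neighbors bid with an appearance-match score, and the best bidder inherits the identity with no inter-camera calibration.

```python
# Toy contract-net-style handover between vision agents. Message flow,
# track fields, and the similarity score are illustrative assumptions.

def similarity(h1, h2):
    """Histogram intersection as a stand-in appearance match score."""
    return sum(min(a, b) for a, b in zip(h1, h2))

class VisionAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.unidentified = []   # tracks seen locally but not yet identified

    def bid(self, color_hist):
        # Contractor role: score the announced target against local tracks.
        if not self.unidentified:
            return 0.0
        return max(similarity(color_hist, t["hist"]) for t in self.unidentified)

    def award(self, target_id, color_hist):
        # Winning bidder: the best-matching local track inherits the identity.
        if not self.unidentified:
            return
        best = max(self.unidentified, key=lambda t: similarity(color_hist, t["hist"]))
        best["id"] = target_id

def announce_handover(track, neighbors):
    """Manager role: a target is leaving this agent's view; neighbors bid
    and the best match takes over -- no calibration between cameras needed."""
    bids = [(agent, agent.bid(track["hist"])) for agent in neighbors]
    winner, _score = max(bids, key=lambda b: b[1])
    winner.award(track["id"], track["hist"])

a, b = VisionAgent(1), VisionAgent(2)
b.unidentified.append({"id": None, "hist": [0.2, 0.5, 0.3]})
announce_handover({"id": 7, "hist": [0.25, 0.45, 0.3]}, [a, b])
```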

Improvement Effect of Functional Myopia by Using a Vision Training Device (OTUS) (Vision Training Device(OTUS)적용에 따른 기능성 근시의 개선 효과)

  • Park, Sung-Yong;Yoon, Yeong-Dae;Kim, Deok-Hun;Lee, Dong-Hee
    • Journal of the Korea Convergence Society / v.11 no.2 / pp.147-154 / 2020
  • This study describes the development of an ICT-based wearable device for vision recovery that improves functional myopia through accommodation training. The Vision Training Device (OTUS) is a head-mounted wearable device that naturally stimulates contraction and relaxation of the ciliary muscles of the eye. Users can conduct customized vision training based on personal vision information stored on the device. In the experiment, functional myopia was induced in two groups of 16 subjects each (a control group and an accommodation-training group), and the improvement of symptoms through accommodation training was compared and analyzed. The accommodation-training group improved by an average of 0.44D±0.35 (p<0.05) compared with the control group. This study demonstrates the effectiveness of the vision training device (OTUS) against functional myopia, but further clinical trials are judged necessary to prove the possibility of long-term control of functional myopia.

Supervisory Control for Multi-Processor-Based Automatic Assembly System (다중프로세서 방식의 자동조립시스템을 위한 관리제어)

  • Zeungnam Bien
    • The Transactions of the Korean Institute of Electrical Engineers / v.39 no.8 / pp.888-897 / 1990
  • In this paper, a multi-processor-based supervisory controller for an automatic assembly system is presented. The proposed supervisory controller is written in the C language and is organized with structured, easily expandable characteristics. The controller is also designed to possess diagnostic capability, including self-diagnosis of each processor module. The developed supervisory controller has been shown to be very useful in a high-speed automatic assembly system with vision capability.
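As a rough illustration of the structure described (not the authors' C implementation), a supervisory scan loop with per-module self-diagnosis might look like the following; the module interface, status reporting, and scan period are assumptions.

```python
# Sketch of a supervisory loop with per-module self-diagnosis; the module
# interface and scan period are illustrative assumptions.

import time

class ProcessorModule:
    def __init__(self, name):
        self.name = name

    def self_test(self):
        """Each processor module reports its own health (self-diagnosis)."""
        return True  # a real module would check memory, I/O, watchdog timers, ...

    def step(self):
        pass  # one control cycle of this module's assembly task

def supervise(modules, cycles=100):
    for _ in range(cycles):
        for m in modules:
            if not m.self_test():
                print(f"[diag] {m.name} failed self-test; halting line")
                return
            m.step()
        time.sleep(0.01)  # supervisory scan period

supervise([ProcessorModule("vision"), ProcessorModule("arm"), ProcessorModule("feeder")])
```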

Vision-based Autonomous Landing System of an Unmanned Aerial Vehicle on a Moving Vehicle (무인 항공기의 이동체 상부로의 영상 기반 자동 착륙 시스템)

  • Jung, Sungwook;Koo, Jungmo;Jung, Kwangyik;Kim, Hyungjin;Myung, Hyun
    • The Journal of Korea Robotics Society / v.11 no.4 / pp.262-269 / 2016
  • Flight of an autonomous unmanned aerial vehicle (UAV) generally consists of four steps: take-off, ascent, descent, and finally landing. Among them, autonomous landing is a challenging task due to high risk and reliability problems. When the landing site is moving or oscillating, the situation becomes far less predictable, and landing is much more difficult than on a stationary site. For these reasons, accurate and precise control is required of an autonomous landing system for a UAV landing on top of a moving vehicle that rolls or oscillates while moving. In this paper, a vision-only landing algorithm using dynamic gimbal control is proposed. The camera systems applied in previous studies are fixed facing downward or forward, and their main disadvantage is a narrow field of view (FOV). Controlling the gimbal so that it tracks the target dynamically ameliorates this problem and also helps the UAV follow the target faster than a fixed camera allows. Using an artificial tag on the landing pad, the relative position and orientation of the UAV are acquired, and these estimated poses are used for gimbal control and UAV control for safe and stable landing on a moving vehicle. Outdoor experimental results show that the vision-based algorithm performs well and can be applied to real situations.
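The dynamic gimbal idea can be sketched with a simple proportional pointing law that keeps the detected tag centered in the image, which is what widens the effective FOV. The pinhole model, focal length, and gain below (and the function name `gimbal_rate`) are illustrative assumptions, not the authors' controller.

```python
# Minimal sketch of dynamic gimbal control: steer the gimbal so the detected
# landing tag stays near the image center. Model and gains are assumptions.

import numpy as np

def gimbal_rate(tag_px, image_size, focal_px, k=1.5):
    """Return (pan_rate, tilt_rate) in rad/s from the tag's pixel position."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Angular offset of the tag from the optical axis (pinhole geometry).
    pan_err  = np.arctan2(tag_px[0] - cx, focal_px)
    tilt_err = np.arctan2(tag_px[1] - cy, focal_px)
    return -k * pan_err, -k * tilt_err

# e.g. tag detected at (420, 180) in a 640x360 image with f = 350 px
print(gimbal_rate((420, 180), (640, 360), 350.0))
```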

Event Recognition of Entering and Exiting (출입 이벤트 인식)

  • Cui, Yaohuan;Lee, Chang-Woo
    • Proceedings of the Korean Society of Computer Information Conference / 2008.06a / pp.199-204 / 2008
  • Visual surveillance is currently an active topic in computer vision, and event detection and recognition is one of its important and useful applications. In this paper, we propose a new method to recognize entering and exiting events based on a person's movement features and the state of the door. Requiring no sensors other than the camera, the proposed approach combines simple vision methods: edge detection, motion history images, and the geometric characteristics of the human shape. The proposed method suits several applications, such as access control, in visual surveillance and other computer vision fields.
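One ingredient named in the abstract, the motion history image (MHI), has a compact update rule: fresh motion pixels are stamped with the current time and stale ones decay to zero. A sketch built on frame differencing follows; the difference threshold and duration window are illustrative assumptions.

```python
# Sketch of a motion history image (MHI) built from frame differencing.
# Initialize with: mhi = np.zeros(frame_shape, np.float32)

import numpy as np

MHI_DURATION = 1.0   # seconds a motion pixel stays "warm" (assumed)

def update_mhi(mhi, prev_gray, gray, timestamp, thresh=30):
    # Pixels whose intensity changed more than `thresh` count as motion.
    motion = np.abs(gray.astype(np.int16) - prev_gray.astype(np.int16)) > thresh
    mhi[motion] = timestamp                      # stamp fresh motion
    mhi[mhi < timestamp - MHI_DURATION] = 0.0    # decay stale motion
    return mhi
```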

A Study on the Point Placement Task of Robot System Based on the Vision System (비젼시스템을 이용한 로봇시스템의 점배치실험에 관한 연구)

  • Jang, Wan-Shik;You, Chang-gyou
    • Journal of the Korean Society for Precision Engineering / v.13 no.8 / pp.175-183 / 1996
  • This paper presents a three-dimensional robot task using a vision control method. A minimum of two cameras is required to place points on the end effectors of n-degree-of-freedom manipulators relative to other bodies. This is accomplished using a sequential estimation scheme that permits placement of these points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes known three-axis manipulator kinematics to accommodate unknown relative camera position, orientation, and other factors. The model uses six uncertainty-of-view parameters estimated by an iterative method.
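The iterative estimation of view parameters can be sketched as a small Gauss-Newton loop that minimizes reprojection error. The six-parameter pose model and the toy projection function below are simplifying assumptions, not the paper's exact estimation model.

```python
# Sketch of estimating six unknown view parameters by iteration.
# X: known 3D feature points, shape (N, 3); observed: measured pixels (N, 2).

import numpy as np

def project(params, X):
    """Toy projection: params = (tx, ty, tz, rx, ry, rz) -> 2D image points."""
    t, r = params[:3], params[3:]
    # small-angle rotation approximation: R ~ I + [r]_x
    Rx = np.array([[0, -r[2], r[1]], [r[2], 0, -r[0]], [-r[1], r[0], 0]])
    Xc = X @ (np.eye(3) + Rx).T + t
    return Xc[:, :2] / Xc[:, 2:3]

def estimate(params, X, observed, iters=20, eps=1e-6):
    for _ in range(iters):
        res = (project(params, X) - observed).ravel()
        # numerical Jacobian of the residual w.r.t. the six parameters
        J = np.stack([((project(params + eps * np.eye(6)[i], X) - observed).ravel() - res)
                      for i in range(6)], axis=1) / eps
        params = params - np.linalg.lstsq(J, res, rcond=None)[0]  # Gauss-Newton step
    return params
```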

Depth Estimation Through the Projection of Rotating Mirror Image onto a Mono-camera (회전 평면경 영상의 단일 카메라 투영에 의한 거리 측정)

  • Kim, Hyeong-Seok;Song, Jae-Hong;Han, Hu-Seok
    • Journal of Institute of Control, Robotics and Systems / v.7 no.9 / pp.790-797 / 2001
  • A simple computer vision technique for measuring middle-range depth with a mono camera and a plane mirror is proposed. The proposed system places a rotating mirror in front of the fixed mono camera. In contrast to conventional stereo vision systems, in which the disparity of a closer object is larger than that of a distant one, in the proposed system the pixel movement caused by the rotating mirror is larger for pixels belonging to distant objects. Motivated by this distinctive feature, the principle of depth measurement based on the relation between pixel movement and object distance is investigated, and the factors that influence the precision of the measurement are analyzed. The benefits of the proposed system are its low price and a reduced chance of occlusion; robustness in practical use is an additional benefit of the proposed vision system.
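The underlying geometry can be sketched as follows: the mirror makes the camera see a virtual viewpoint (the real camera reflected across the mirror plane), and rotating the mirror moves that viewpoint, creating a baseline. After compensating the virtual camera's rotation, the residual depth-dependent pixel motion plays the role of stereo disparity. The mirror placement, focal length, and disparity value below are illustrative assumptions, not the paper's derivation.

```python
# Geometry sketch: reflect the camera center across the mirror plane to get
# the virtual viewpoint; two mirror angles give a baseline b, and (after
# rotation compensation) depth follows the stereo relation Z ~ f * b / disp.

import numpy as np

def reflect(p, n, d):
    """Reflect point p across the plane n.x + d = 0 (n a unit normal)."""
    return p - 2.0 * (np.dot(n, p) + d) * n

def virtual_camera_center(theta, pivot_z=0.3):
    """Camera at the origin looking along +z; a plane mirror pivots at
    (0, 0, pivot_z) and is tilted by theta about the y-axis (assumed setup)."""
    n = np.array([np.sin(theta), 0.0, np.cos(theta)])  # mirror normal
    d = -n[2] * pivot_z                                # plane through the pivot
    return reflect(np.zeros(3), n, d)

c1 = virtual_camera_center(np.deg2rad(45.0))
c2 = virtual_camera_center(np.deg2rad(47.0))
b = np.linalg.norm(c2 - c1)    # baseline created by the mirror rotation
f_px = 800.0                   # assumed focal length in pixels
disparity_px = 4.0             # assumed rotation-compensated pixel motion
print("baseline:", b, "depth estimate:", f_px * b / disparity_px)
```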

Target Tracking Control of a Quadrotor UAV using Vision Sensor (비전 센서를 이용한 쿼드로터형 무인비행체의 목표 추적 제어)

  • Yoo, Min-Goo;Hong, Sung-Kyung
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.2 / pp.118-128 / 2012
  • The goal of this paper is to design a target tracking controller for a quadrotor micro UAV using a vision sensor. First, a mathematical model of the quadrotor was estimated through the Prediction Error Method (PEM) using experimental input/output flight data, and the estimated model was validated by comparison with new experimental flight data. Next, the target tracking controller was designed using the LQR (Linear Quadratic Regulator) method based on the estimated model. The relative distance between the object and the quadrotor was obtained by a vision sensor, and the altitude was obtained by an ultrasonic sensor. Finally, the performance of the designed target tracking controller was evaluated through flight tests.
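The control-design step has a standard recipe worth sketching: given an identified linear model, solve the Riccati equation for the LQR gain. The model matrices and weights below are placeholders (toy position/velocity dynamics), not the quadrotor model identified in the paper.

```python
# Sketch of LQR design from an identified linear model x_dot = A x + B u.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.5]])   # toy position/velocity dynamics
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                  # weight on tracking error, velocity
R = np.array([[0.1]])                     # weight on control effort

P = solve_continuous_are(A, B, Q, R)      # solve the continuous Riccati equation
K = np.linalg.inv(R) @ B.T @ P            # optimal state feedback u = -K x

x_err = np.array([0.5, 0.0])              # e.g. relative distance from the vision sensor
u = -K @ x_err
print("LQR gain:", K, "command:", u)
```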

3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is large and computes depth information for omni-directional images slowly. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. We compute fusion points from the planar coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires the surrounding view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
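The fusion step can be sketched as projecting the planar laser points into the fisheye image so that each range reading can be associated with an obstacle outline. The equidistant fisheye model (r = f·θ), the extrinsic offset, and the intrinsics below are illustrative assumptions, not the paper's calibration.

```python
# Sketch: project 2D laser-scanner points into a downward-facing fisheye
# image so ranges can be matched to obstacle outlines. Values are assumed.

import numpy as np

F_PX = 300.0                               # fisheye focal length in pixels (assumed)
CX, CY = 640.0, 480.0                      # image center (assumed)
LASER_OFFSET = np.array([0.0, 0.0, 0.25])  # laser position in the camera frame (assumed)

def project_fisheye(P):
    """Project a 3D point in the camera frame with the equidistant model r = f*theta."""
    theta = np.arctan2(np.linalg.norm(P[:2]), P[2])   # angle from the optical axis
    phi = np.arctan2(P[1], P[0])                      # azimuth around the axis
    r = F_PX * theta
    return np.array([CX + r * np.cos(phi), CY + r * np.sin(phi)])

def laser_to_pixels(ranges, angles):
    """Convert a planar scan (range, bearing) to fisheye pixel coordinates."""
    pts = np.stack([ranges * np.cos(angles), ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1) + LASER_OFFSET
    return np.array([project_fisheye(p) for p in pts])

print(laser_to_pixels(np.array([1.0, 2.0]), np.deg2rad([0.0, 30.0])))
```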

Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information (융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구)

  • Choi, Jae-Young;Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems / v.18 no.8 / pp.744-749 / 2012
  • In this paper, we suggest how to improve the localization accuracy of a mobile robot by using sensor network information that fuses a machine vision camera, encoders, and an IMU sensor. The heading value of the IMU sensor is measured by a terrestrial magnetism sensor based on the magnetic field, which is constantly affected by the surrounding environment. To increase the sensor's accuracy when using the IMU, we isolate a template of the ceiling with the vision camera, measure angles by a pattern matching algorithm, and calibrate the IMU by comparing the measured angles with the IMU sensor values and the offset value. The values used to estimate the robot's position, from the encoders, the IMU sensor, and the angle sensor of the vision camera, are transferred to a host PC over a wireless network, and the host PC estimates the robot's location using all of these values. As a result, we obtained more accurate position estimates than when using IMU sensor calibration alone.
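The fusion idea can be sketched as a complementary-filter heading correction (vision fix removes the magnetically disturbed IMU bias) followed by encoder dead reckoning along the fused heading. The blend gain, angle wrapping, and function names below are assumptions for illustration, not the paper's filter.

```python
# Sketch: correct the IMU heading with the ceiling-camera heading, then
# dead-reckon with encoder distance. Gain and wrapping are assumptions.

import numpy as np

ALPHA = 0.98   # trust placed in the IMU between vision fixes (assumed)

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def fuse_heading(imu_heading, vision_heading):
    """Blend headings; the vision term removes magnetic-disturbance bias."""
    return wrap(imu_heading + (1.0 - ALPHA) * wrap(vision_heading - imu_heading))

def dead_reckon(pose, d_enc, heading):
    """pose = (x, y); advance by the encoder distance along the fused heading."""
    x, y = pose
    return (x + d_enc * np.cos(heading), y + d_enc * np.sin(heading))

pose = (0.0, 0.0)
heading = fuse_heading(imu_heading=0.10, vision_heading=0.02)
pose = dead_reckon(pose, d_enc=0.05, heading=heading)
print(pose, heading)
```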