• Title/Abstract/Keyword: vision-based control

Search results: 683 items (processing time 0.042 s)

수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합 (Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication)

  • 이재민;김곤우
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 4 / pp.349-355 / 2015
  • Underwater robots generally outperform humans at tasks carried out under underwater constraints such as high pressure and limited light. To properly diagnose structures in an underwater environment using a remotely operated vehicle (ROV), it is important that the vehicle autonomously maintain its own position and orientation so as to avoid additional control effort. In this paper, we propose an efficient method to assist the operation of an ROV for underwater structure diagnosis under various disturbances. A conventional AHRS-based bearing estimation system does not work well because of incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines a camera with the AHRS to estimate the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. We therefore fuse the vision sensor and the AHRS using the amount of blur in the image as the weighting criterion. To evaluate the amount of blur, we adopt two methods: one quantifies the high-frequency components via power spectral density analysis of the 2-D discrete Fourier transform of the image, and the other identifies the blur parameter via cepstrum analysis. We evaluate the robustness of the visual odometry and of the blur estimation methods under changes of lighting and distance, and verify through experiments that the cepstrum-based blur estimation shows better performance.
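The first of the two blur criteria above — the fraction of high-frequency energy in the power spectrum of the 2-D DFT — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact implementation; the cutoff radius `radius_frac` and the box-filter stand-in for underwater blur are assumptions.

```python
import numpy as np

def blur_metric_fft(img, radius_frac=0.25):
    """Fraction of spectral energy outside a low-frequency disc.

    Sharp images retain more high-frequency energy, so a lower
    value indicates a blurrier image (and a lower fusion weight
    for the vision sensor)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    cutoff = radius_frac * min(h, w)
    return power[r > cutoff].sum() / power.sum()

def box_blur(img, k=5):
    """Separable box filter, a crude stand-in for turbidity blur."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='same'), 1, out)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))       # white-noise "texture": broadband spectrum
blurred = box_blur(sharp, 7)

m_sharp = blur_metric_fft(sharp)
m_blur = blur_metric_fft(blurred)  # blurring suppresses high frequencies
```

In a fusion scheme of this kind, `m_blur` falling below a threshold would shift the pose estimate's weighting away from visual odometry and toward the AHRS.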

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology / Vol. 8, No. 1 / pp.207-212 / 2020
  • Human-centric lighting (HCL) control is a major focus of smart lighting system design, providing energy-efficient, mood-supportive lighting in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve occupants' motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, indoor surveillance camera video streams are used to predict daylight, occupancy, and occupant-specific emotional features with advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) lighting devices and controls the illumination of the lighting devices assigned to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-capable open-source controller, networked together with a video surveillance solution. The results were verified with a custom automatic lighting control demo application that integrates OpenCV-based computer vision methods to predict the human-centric features; based on the estimated features, the illumination level and color are controlled automatically. The results obtained from the demo system are analyzed and used to develop a real-time lighting control strategy.
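A minimal daylight-harvesting rule of the kind such a light management system might apply can be sketched as follows. This is a hypothetical illustration, not the paper's controller: the target illuminance, the LED's maximum contribution, and the linear dimming rule are all assumptions.

```python
def hcl_dimming(daylight_lux, occupied, target_lux=500.0, max_led_lux=600.0):
    """Hypothetical dimming rule: the LED tops up whatever the camera-estimated
    daylight fails to provide toward the target illuminance, and switches off
    entirely for unoccupied zones. Returns a duty cycle in [0, 1]."""
    if not occupied:
        return 0.0
    deficit = max(0.0, target_lux - daylight_lux)
    return min(1.0, deficit / max_led_lux)

# A bright, occupied zone needs no artificial light; a dim one gets topped up.
dim_zone = hcl_dimming(daylight_lux=200.0, occupied=True)    # 0.5 duty cycle
bright_zone = hcl_dimming(daylight_lux=800.0, occupied=True)  # 0.0
empty_zone = hcl_dimming(daylight_lux=0.0, occupied=False)    # 0.0
```

The occupant-specific emotional features described in the abstract would enter a rule like this as adjustments to `target_lux` or to the LED color, per detected person.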

A Robotic Vision System for Turbine Blade Cooling Hole Detection

  • Wang, Jianjun;Tang, Qing;Gan, Zhongxue
    • ICCAS 2003 (Institute of Control, Robotics and Systems conference proceedings) / pp.237-240 / 2003
  • Gas turbines are extensively used in flight propulsion, electrical power generation, and other industrial applications. During its life span, a turbine blade is taken out periodically for repair and maintenance, which includes re-coating the blade surface and re-drilling the cooling holes/channels. Successful laser re-drilling requires measuring a hole to within ±0.15 mm in position and ±3° in orientation. Detecting the position and orientation of gas turbine blade/vane cooling holes is thus a very important step in the vane/blade repair process, and the industry urgently needs an automated system for this task. This paper proposes approaches and algorithms to detect cooling hole position and orientation using a vision system mounted on a robot arm. The channel orientation is determined by aligning the vision system with the channel axis. The opening position of the channel is the intersection of the channel axis with the surface around the channel opening. Experimental results indicate that the concept of cooling hole identification is feasible: under the current test conditions, cooling channel position is detected reproducibly to within ±0.15 mm and channel orientation to within ±3°, and the average processing time to search for and identify a channel's position and orientation is less than one minute.
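The geometric step described above — the opening position as the intersection of the channel axis with the surface — reduces to a line-plane intersection if the surface around the opening is modeled locally as a plane. A sketch under that assumption (the plane model is ours, not necessarily the paper's surface representation):

```python
import numpy as np

def channel_opening(axis_point, axis_dir, plane_point, plane_normal):
    """Intersect the channel axis (a 3-D line through axis_point along
    axis_dir) with the blade surface, locally modeled as the plane through
    plane_point with the given normal."""
    axis_point = np.asarray(axis_point, float)
    axis_dir = np.asarray(axis_dir, float)
    plane_normal = np.asarray(plane_normal, float)
    denom = axis_dir @ plane_normal
    if abs(denom) < 1e-9:
        raise ValueError("channel axis is parallel to the surface")
    t = ((np.asarray(plane_point, float) - axis_point) @ plane_normal) / denom
    return axis_point + t * axis_dir

# Axis through (0, 0, 5) pointing down the z-axis; surface is the z = 0 plane:
# the opening sits at the origin.
opening = channel_opening([0, 0, 5], [0, 0, -1], [0, 0, 0], [0, 0, 1])
```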


A Knowledge-Based Machine Vision System for Automated Industrial Web Inspection

  • Cho, Tai-Hoon;Jung, Young-Kee;Cho, Hyun-Chan
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 1, No. 1 / pp.13-23 / 2001
  • Most current machine vision systems for industrial inspection are developed with one specific task in mind; hence, they are inflexible in the sense that they cannot easily be adapted to other applications. In this paper, a general vision system framework is developed that can easily be adapted to a variety of industrial web inspection problems. The objective of the system is to automatically locate and identify "defects" on the surface of the material being inspected. The framework is designed to be robust, flexible, and as computationally simple as possible. For robustness, it employs a combined strategy of top-down and bottom-up control, hierarchical defect models, and uncertain reasoning methods. For flexibility, a modular blackboard framework is employed. To minimize computational complexity, the system incorporates a simple multi-thresholding segmentation scheme, a fuzzy-logic focus-of-attention mechanism for scene analysis operations, and a partitioning of knowledge that allows concurrent parallel processing during recognition.
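The multi-thresholding segmentation mentioned above can be illustrated with a minimal sketch: each pixel is labeled by the highest intensity band it clears. The specific thresholds and the toy "web" image are assumptions for illustration, not the paper's values.

```python
import numpy as np

def multi_threshold(img, thresholds):
    """Label each pixel with the index of the highest threshold it meets
    (0 = below all thresholds). A stand-in for the framework's simple
    multi-thresholding segmentation step."""
    labels = np.zeros(img.shape, dtype=int)
    for i, t in enumerate(sorted(thresholds), start=1):
        labels[img >= t] = i
    return labels

# Toy grayscale patch of web material: background ~10, a faint streak at 120,
# and a bright defect column at 200.
web = np.array([[10,  10, 200],
                [10, 120, 200],
                [10,  10,  10]])
seg = multi_threshold(web, [100, 180])
```

Connected regions with nonzero labels would then be passed to the focus-of-attention mechanism as candidate defects.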


Study of Intelligent Vision Sensor for the Robotic Laser Welding

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • Journal of the Korean Society of Industrial Convergence / Vol. 22, No. 4 / pp.447-457 / 2019
  • An intelligent sensory system is required to ensure accurate welding performance. This paper describes the development of an intelligent vision sensor for robotic laser welding. The sensor system includes a PC-based vision camera and a stripe-type laser diode, and a set of robust image processing algorithms is implemented. The laser-stripe sensor measures the profile of the welding object and extracts the seam line. Moreover, the working distance of the sensor can be changed, with the rest of the configuration adjusted accordingly. A robot, the seam tracking system, and a CW Nd:YAG laser make up the laser welding robot system, and a simple and efficient control scheme for the whole system is also presented. Profile measurement and seam tracking experiments were carried out to validate the operation of the system.
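The core of a laser-stripe profile measurement of this kind is finding, in each image column, the row where the projected stripe is brightest; the resulting row profile traces the surface, and a jump in it marks the seam. A simplified sketch (the peak-per-column method and the toy frame are our assumptions, standing in for the paper's robust image processing pipeline):

```python
import numpy as np

def stripe_profile(frame):
    """Per-column peak of the laser-stripe intensity. The returned row
    indices form the surface profile along the stripe."""
    return frame.argmax(axis=0)

# Toy 6x4 camera frame: the stripe sits at row 2 over the upper plate and
# drops to row 4 past the joint.
frame = np.zeros((6, 4))
frame[2, 0] = frame[2, 1] = 255.0
frame[4, 2] = frame[4, 3] = 255.0

profile = stripe_profile(frame)
# The seam line lies at the largest discontinuity in the profile.
seam_col = int(np.argmax(np.abs(np.diff(profile))) + 1)
```

A real implementation would use sub-pixel peak interpolation and reject ambient-light peaks, but the seam-as-discontinuity logic is the same.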

무인 항공기의 목표물 추적을 위한 영상 기반 목표물 위치 추정 (Vision Based Estimation of 3-D Position of Target for Target Following Guidance/Control of UAV)

  • 김종훈;이대우;조겸래;조선영;김정호;한동인
    • Journal of Institute of Control, Robotics and Systems / Vol. 14, No. 12 / pp.1205-1211 / 2008
  • This paper describes methods to estimate the 3-D position of a target with respect to a reference frame from monocular images taken by an unmanned aerial vehicle (UAV). The 3-D position of the target is used as information for surveillance, recognition, and attack; here, it is estimated in order to design guidance and control laws that can follow a target of interest to the user. Solving for the target's 3-D position requires measuring its position in the image, so a Kalman filter is used to track the target and output its image position. The target's 3-D position can then be estimated from the image tracking result together with UAV and camera information. Two algorithms are used for this estimation: one derives it arithmetically from the dynamics relating the UAV, camera, and target; the other uses an LPV (Linear Parameter Varying) approach. Both methods are run in simulation and compared in this paper.
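One standard way to recover a 3-D target position from a single tracked pixel, given the UAV's pose, is to back-project the pixel into a world-frame ray and intersect it with flat ground; the known altitude supplies the depth that a monocular camera lacks. This sketch illustrates that geometry under our own assumptions (flat ground at z = 0, a known intrinsic matrix `K`, and a downward-looking camera rotation) rather than the paper's exact derivation:

```python
import numpy as np

def target_position(uav_pos, R_cam_to_world, pixel, K):
    """Back-project a tracked pixel (from, e.g., a Kalman filter image
    tracker) into a world ray and intersect it with the ground plane z = 0."""
    uav_pos = np.asarray(uav_pos, float)
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_w = R_cam_to_world @ ray_cam
    if ray_w[2] >= 0:
        raise ValueError("ray does not intersect the ground")
    s = -uav_pos[2] / ray_w[2]          # scale so the ray reaches z = 0
    return uav_pos + s * ray_w

# Assumed pinhole intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Camera looking straight down: camera +z maps to world -z.
R = np.diag([1.0, -1.0, -1.0])

# UAV at 100 m altitude; a pixel 500 px right of center lies 100 m ahead in x.
p = target_position([0.0, 0.0, 100.0], R, (820.0, 240.0), K)
```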

LATERAL CONTROL OF AUTONOMOUS VEHICLE USING LEVENBERG-MARQUARDT NEURAL NETWORK ALGORITHM

  • Kim, Y.-B.;Lee, K.-B.;Kim, Y.-J.;Ahn, O.-S.
    • International Journal of Automotive Technology / Vol. 3, No. 2 / pp.71-78 / 2002
  • A new control method for a vision-based autonomous vehicle is proposed that determines the navigation direction by analyzing lane information from a camera and then navigates the vehicle. Characteristic feature data points are extracted from lane images using a lane recognition algorithm, and the vehicle is controlled using a new Levenberg-Marquardt neural network algorithm. To verify the usefulness of the algorithm, another algorithm, which utilizes the geometric relation between the camera and the vehicle, is introduced for comparison: it transforms from the image coordinate frame to the vehicle coordinate frame and then determines steering from the Ackermann angle. The steering scheme based on the Ackermann angle depends heavily on correct geometric data for the vehicle and camera, whereas the proposed neural network algorithm needs no geometric relations and instead adapts to the driving style of a human driver. In autonomous lateral control, the proposed method is superior to other referenced neural network algorithms such as the conjugate gradient and gradient descent methods.
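The comparison method's Ackermann-angle step can be illustrated with the standard bicycle-model relation: for a path of curvature radius R and wheelbase L, the steering angle is δ = atan(L/R). The specific wheelbase and radius below are example values, not from the paper:

```python
import math

def ackermann_angle(wheelbase_m, radius_m):
    """Steering angle (rad) for a bicycle-model vehicle to track a circular
    path: delta = atan(L / R). The lane recognition step would supply R
    from the lane geometry expressed in vehicle coordinates."""
    return math.atan2(wheelbase_m, radius_m)

# Example: 2.7 m wheelbase tracking a 27 m radius curve -> atan(0.1) rad.
delta = ackermann_angle(2.7, 27.0)
```

This dependence on exact values of L and the image-to-vehicle transform is precisely the sensitivity to geometric data that the abstract contrasts with the neural network approach.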