• Title/Summary/Keyword: Vision-based Control


TELE-OPERATIVE SYSTEM FOR BIOPRODUCTION - REMOTE LOCAL IMAGE PROCESSING FOR OBJECT IDENTIFICATION -

  • Kim, S. C.;H. Hwang;J. E. Son;Park, D. Y.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2000.11b / pp.300-306 / 2000
  • This paper introduces a new concept of automation for bio-production based on a tele-operative system. The proposed system offers a practical and feasible way to automate the volatile bio-production process. Based on this proposition, recognition of the job environment, including object identification, was performed with a computer vision system. A man-machine interactive hybrid decision-making scheme, built on the concept of tele-operation, was proposed to overcome the limits of computer-based image processing and feature extraction in complex environment images. Identifying watermelons in outdoor scenes of a cultivation field was selected to realize the proposed concept. Identifying a watermelon from a camera image of the outdoor cultivation field is very difficult because of the ambiguity among stems, leaves, and shades, and especially because fruits are partly covered by leaves or stems. The analog signal of the outdoor image was captured and transmitted wirelessly to the host computer by an RF module. A localized window was formed from the outdoor image by pointing on a touch screen, and a sequence of algorithms then identified the location and size of the watermelon within the local window image. The effect of light reflectance from fruits, stems, ground, and leaves was also investigated.


Design of CV-based Realtime Vision Analysis System for Effective AR Vision Control (효율적인 AR 영상 제어를 위한 CV 기반 실시간 영상 분석 시스템 개발)

  • Jung, Sung-Mo;Song, Jae-Gu;Lim, Ji-Hoon;Kim, Seok-Soo
    • Proceedings of the KAIS Fall Conference / 2010.05a / pp.172-175 / 2010
  • As smartphone-based AR (Augmented Reality) technology has recently become a major topic, sensor-based AR content has been appearing rapidly. However, sensor-based AR, called P-AR (Pseudo AR), is used as a stopgap where true AR cannot yet be realized, while V-AR (Vision AR), which controls AR through the actual image, is still under development. For example, tools that can control AR, such as ARToolkit, are in development; unlike P-AR, which can trigger events through sensors, V-AR must control events from the image itself and is therefore comparatively difficult to implement. Controlling the image in V-AR fundamentally requires noise removal, recognition of specific objects, and object analysis. In preparation for the coming V-AR technology, this paper therefore develops a prototype of a CV-based real-time vision analysis system that performs background removal, specific-object recognition, and object analysis for efficient AR vision control.


Implementation of a walking-aid light with machine vision-based pedestrian signal detection (머신비전 기반 보행신호등 검출 기능을 갖는 보행등 구현)

  • Jihun Koo;Juseong Lee;Hongrae Cho;Ho-Myoung An
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.1 / pp.31-37 / 2024
  • In this study, we propose a machine vision-based pedestrian signal detection algorithm that operates efficiently even in computing resource-constrained environments. To minimize the impact of ambient lighting, such as light glare, the algorithm sequentially applies HSV color space-based image processing, binarization, morphological operations, and labeling. It is deliberately kept simple so that it runs smoothly in embedded system environments with limited computing resources. The proposed pedestrian signal system also incorporates IoT functionality, integrating wirelessly with a web server so that users can conveniently monitor and control the status of the signal system. The system was successfully implemented to control a 50 W LED pedestrian signal. It aims to provide rapid and efficient pedestrian signal detection and control within resource-constrained environments, with potential applicability to real-world road scenarios, and is expected to contribute to safer and more intelligent traffic systems.
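The binarization-morphology-labeling chain the abstract describes can be sketched in plain numpy. This is a minimal illustration, not the paper's implementation: the HSV thresholds, image sizes, and the green-light hue range used below are all hypothetical, and the 3×3 structuring element is an assumption.

```python
import numpy as np
from collections import deque

def binarize(hsv, lo, hi):
    # keep pixels whose (H, S, V) all fall inside [lo, hi]
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1).astype(np.uint8)

def erode(m):
    # 3x3 erosion: a pixel survives only if its whole neighborhood is set
    p = np.pad(m, 1)
    out = np.ones_like(m)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

def dilate(m):
    # 3x3 dilation: a pixel is set if any neighbor is set
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

def label(m):
    # 4-connected component labeling by breadth-first search
    lab = np.zeros(m.shape, dtype=int)
    n = 0
    for y, x in zip(*np.nonzero(m)):
        if lab[y, x]:
            continue
        n += 1
        lab[y, x] = n
        q = deque([(y, x)])
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < m.shape[0] and 0 <= nx < m.shape[1]
                        and m[ny, nx] and not lab[ny, nx]):
                    lab[ny, nx] = n
                    q.append((ny, nx))
    return lab, n

# synthetic test: one 4x4 "green light" blob plus a single noise pixel
hsv = np.zeros((12, 12, 3))
hsv[2:6, 2:6] = (90, 200, 200)   # blob inside the assumed green range
hsv[9, 9] = (90, 200, 200)       # isolated glare-like noise pixel
mask = binarize(hsv, np.array([60, 100, 100]), np.array([120, 255, 255]))
opened = dilate(erode(mask))     # morphological opening removes the noise
_, n = label(opened)             # n = number of candidate signal blobs
```

On this synthetic input the opening step deletes the isolated pixel while restoring the blob, so labeling reports a single candidate; this pixel-loop style trades speed for having no dependency beyond numpy, which matches the low-resource setting the paper targets.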

Onboard Active Vision Based Hovering Control for Quadcopter in Indoor Environments (실내 환경에서의 능동카메라 기반 쿼더콥터의 호버링 제어)

  • Jin, Tae-Seok
    • Journal of the Korean Society of Industry Convergence / v.20 no.1 / pp.19-26 / 2017
  • In this paper, we describe the design and performance of a UAV system aimed at compact, fully autonomous quadrotors that can carry out logistics, rescue work, inspection tours, and remote sensing without external assistance systems such as ground-station computers, high-performance wireless communication devices, or motion capture systems. We propose a high-speed hovering flight height control method based on state feedback control using image information from an active camera together with a multirate observer, since the image information is available only every 30 ms. Finally, we show the advantages of the proposed method through simulations and experiments.
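The multirate idea, predicting the state at a fast rate while correcting it only when a 30 ms camera frame arrives, can be sketched for a one-axis height model. This is an illustrative sketch only: the 1 kHz prediction rate, the kinematic model, and the correction gains below are assumptions, not values from the paper.

```python
import numpy as np

dt = 0.001                                 # assumed 1 kHz prediction rate
A = np.array([[1.0, dt], [0.0, 1.0]])      # height-velocity kinematics
B = np.array([0.5 * dt * dt, dt])          # input: vertical acceleration
L = np.array([0.4, 2.0])                   # hand-tuned correction gain

x_true = np.array([1.0, 0.0])              # actual [height m, velocity m/s]
x_hat = np.array([0.0, 0.0])               # observer starts with no knowledge
for k in range(3000):                      # 3 s of flight
    u = 0.0                                # hover: zero net acceleration
    x_true = A @ x_true + B * u
    x_hat = A @ x_hat + B * u              # fast open-loop prediction
    if k % 30 == 0:                        # camera height fix every 30 ms
        y = x_true[0]                      # measured height from vision
        x_hat = x_hat + L * (y - x_hat[0]) # slow-rate correction step
```

Between frames the observer coasts on the model; each camera fix pulls both the height and velocity estimates back toward the measurement, so the estimate converges even though measurements arrive 30× slower than the control loop.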

Vision Based Outdoor Terrain Classification for Unmanned Ground Vehicles (무인차량 적용을 위한 영상 기반의 지형 분류 기법)

  • Sung, Gi-Yeul;Kwak, Dong-Min;Lee, Seung-Youn;Lyou, Joon
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.372-378 / 2009
  • For effective mobility control of unmanned ground vehicles in outdoor off-road environments, terrain-cover classification technology using passive sensors is vital. This paper presents a novel method for terrain classification based on the color and texture information of off-road images, using a neural network classifier and wavelet features. We exploit the wavelet mean and energy features extracted from multi-channel wavelet-transformed images, and also use the spatial coordinates of the terrain classes in the image as additional features. Comparing classification performance across the applied features, the experimental results show that the proposed algorithm is promising and has potential for autonomous navigation.
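Wavelet mean and energy features of the kind the abstract mentions can be computed from a one-level 2-D Haar transform in a few lines of numpy. A minimal sketch, assuming a single channel and the simplest Haar filter; the paper's actual wavelet, decomposition depth, and channels are not specified here.

```python
import numpy as np

def haar2d(img):
    # one-level 2-D Haar transform: returns LL, LH, HL, HH subbands
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row differences
    LL = (a[0::2] + a[1::2]) / 2.0            # smooth content
    LH = (a[0::2] - a[1::2]) / 2.0            # horizontal detail
    HL = (d[0::2] + d[1::2]) / 2.0            # vertical detail
    HH = (d[0::2] - d[1::2]) / 2.0            # diagonal detail
    return LL, LH, HL, HH

def wavelet_features(img):
    # mean and energy of each subband -> 8-dimensional feature vector
    feats = []
    for band in haar2d(img):
        feats.append(band.mean())
        feats.append((band ** 2).mean())      # subband energy
    return np.array(feats)

flat = np.ones((8, 8))                               # smooth "grass-like" patch
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # rough texture
f_flat = wavelet_features(flat)
f_checker = wavelet_features(checker)
```

The high-frequency (HH) energy separates the textured patch from the smooth one, which is exactly the property a texture classifier feeds to its neural network.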

Development of Visual Servo Control System for the Tracking and Grabbing of Moving Object (이동 물체 포착을 위한 비젼 서보 제어 시스템 개발)

  • Choi, G.J.;Cho, W.S.;Ahn, D.S.
    • Journal of Power System Engineering / v.6 no.1 / pp.96-101 / 2002
  • In this paper, we address the problem of controlling an end-effector to track and grab a moving target using the visual servoing technique. A visual servo mechanism based on the image-based servoing principle is proposed, using visual feedback to control the end-effector without calibrated robot and camera models. First, we pose the control problem as a nonlinear least-squares optimization and update the joint angles through a Taylor-series expansion. To track a moving target in real time, a Jacobian estimation scheme (dynamic Broyden's method) is used to estimate the combined robot and image Jacobian. Using this algorithm, we can drive the objective function value to a neighborhood of zero. To show the effectiveness of the proposed algorithm, simulation results for a six-degree-of-freedom robot are presented.

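The core loop of uncalibrated visual servoing with a Broyden Jacobian update can be illustrated in a few lines. This is a sketch under strong simplifications, not the paper's six-DOF controller: the two-joint "robot + camera" map, the damping factor, and the iteration count below are all assumed for illustration.

```python
import numpy as np

def image_features(q):
    # stand-in for the true robot+camera mapping, unknown to the controller
    return np.array([2.0 * q[0] + 0.3 * q[1], -0.5 * q[0] + 1.5 * q[1]])

target = np.array([1.0, 2.0])      # desired image feature vector
q = np.zeros(2)                    # joint angles
J = np.eye(2)                      # crude initial Jacobian guess
s = image_features(q)
for _ in range(60):
    e = target - s
    # damped Gauss-Newton step using the *estimated* Jacobian
    dq = 0.5 * np.linalg.lstsq(J, e, rcond=None)[0]
    q = q + dq
    s_new = image_features(q)
    ds = s_new - s
    if dq @ dq > 1e-12:
        # Broyden rank-1 update: correct J along the direction just moved
        J = J + np.outer(ds - J @ dq, dq) / (dq @ dq)
    s = s_new
```

Each iteration moves the joints, observes how the image features actually changed, and patches the Jacobian estimate along that one direction, so the controller never needs calibrated robot or camera models, which is the point of the method.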

A Study on Development of PC Based In-Line Inspection System with Structure Light Laser (구조화 레이저를 이용한 PC 기반 인-라인 검사 시스템 개발에 관한 연구)

  • Shin Chan-Bai;Kim Jin-Dae;Lim Hak-Kyu;Lee Jeh-Won
    • Journal of the Korean Society for Precision Engineering / v.22 no.11 s.176 / pp.82-90 / 2005
  • Recently, in-line vision inspection has become a growing research area in visual control systems and robotic intelligence, where an exact three-dimensional pose is required. The objective of this article is to study PC-based in-line visual inspection with a hand-eye structure. This paper presents a three-dimensional structured-light measuring principle and a design method for the laser sensor header. Hand-eye laser sensors have been studied for a long time; however, kinematic analysis between the laser sensor and the robot is very difficult because complicated mathematical processing is needed in real environments. To address this problem, this paper proposes an auto-calibration concept and describes the methodology in detail. A new thinning algorithm and a constrained Hough transform method are also explained. Consequently, the developed in-line inspection module demonstrates successful operation on hole, gap, width, and V-edge features.

Road Surface Marking Detection for Sensor Fusion-based Positioning System (센서 융합 기반 정밀 측위를 위한 노면 표시 검출)

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.7 / pp.107-116 / 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system consisting of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system has two parts: lane marking detection and RSM (Road Surface Marking) detection. Lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM; RSM detection generates candidates in those regions and classifies their types. The system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating for lane marking detection is varied and the detection methods are switched according to an FSM (Finite State Machine) that models the driving situation. A single template matching step is used to extract features for both lane marking detection and RSM detection, implemented efficiently with a horizontal integral image. Further, multiple verification steps are performed to minimize false detections.
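The horizontal integral image mentioned above is a per-row cumulative sum that turns any row-segment sum into two lookups. A minimal sketch; the array shapes and the toy image are illustrative, not from the paper.

```python
import numpy as np

def horizontal_integral(img):
    # H[y, x] = sum of img[y, 0:x]; extra leading column so x0 = 0 works
    H = np.zeros((img.shape[0], img.shape[1] + 1), dtype=np.int64)
    H[:, 1:] = np.cumsum(img, axis=1)
    return H

def segment_sum(H, y, x0, x1):
    # sum of img[y, x0:x1] in O(1), independent of segment length
    return H[y, x1] - H[y, x0]

img = np.arange(12).reshape(3, 4)   # toy 3x4 "gray image"
H = horizontal_integral(img)
```

Because template matching along markings repeatedly sums horizontal pixel runs, precomputing `H` once per frame replaces each per-window summation with a constant-time subtraction.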

EXTRACTION OF THE LEAN TISSUE BOUNDARY OF A BEEF CARCASS

  • Lee, C. H.;H. Hwang
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2000.11c / pp.715-721 / 2000
  • In this research, a rule- and neural network-based boundary extraction algorithm was developed. Extracting the boundary of the region of interest, the lean tissue, is essential for color machine vision-based beef quality evaluation. The major quality features of beef are the size and marbling state of the lean tissue, the color of the fat, and the thickness of the back fat. To evaluate beef quality, extracting the loin from the sectional image of the beef rib is the crucial first step. Since its boundary is unclear and very difficult to trace, a neural network model was developed to isolate the loin from the entire input image. Normalized color image data were used to train the network. The model reference for the boundary was determined by a binary feature extraction algorithm using the R (red) channel, and 100 sub-images (11×11 masks selected from the maximum extended boundary rectangle) were used as the training data set. Each mask carries information on the curvature of the boundary, and the basic rule in boundary extraction is adaptation to the known curvature of the boundary. The structured model reference and neural network-based boundary extraction algorithm was developed, applied to the beef images, and the results were analyzed.


3D shape reconstruction using laser slit beam and image block (레이저슬릿광과 이미지블럭을 이용한 경면물체 형상측정알고리즘)

  • 곽동식;조형석;권동수
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1996.10b / pp.93-96 / 1996
  • Structured laser light is a widely used method for obtaining 3-D range information in machine vision. However, the structured laser light method is based on the assumption that object surfaces are Lambertian. When the observed surfaces are highly specular, the laser light can be detected in various parts of the image due to specular and secondary reflections; this produces wrong range data and makes the image sensor unusable for specular objects. To discriminate wrong range data within the obtained image data, we propose a new algorithm that uses the cross-section of an image block. To show the performance of the proposed method, a series of experiments was carried out on simple geometric objects. The proposed method shows a dramatic improvement in 3-D range data over the typical structured laser light method.
