• Title/Summary/Keyword: real-time vision


A Study on Fuzzy Control of Inverted Pendulum Using Real-Time Vision System (실시간 비전 시스템을 이용한 도립진자의 퍼지제어에 관한 연구)

  • Choi, Yong-Sun;Park, Jong-Kyu;Lim, Tae-Woo;Ahn, Tae-Chon
    • Proceedings of the KIEE Conference / 2000.07d / pp.2596-2598 / 2000
  • In this paper, a real-time vision-based control system is proposed that combines the information-handling capability of a computer with the real-time image-processing capability of a CCD camera to effectively control a real system in a constrained environment. The control system is applied to an inverted pendulum, a standard benchmark system. The feasibility of the system is demonstrated through simulations and experiments.
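The fuzzy-control idea in the abstract can be sketched with a minimal rule base mapping pendulum angle error to a corrective force. The membership functions, rule weights, and gains below are illustrative assumptions, not values from the paper.

```python
# Minimal fuzzy controller sketch: three triangular membership functions
# over the angle error, defuzzified by the weighted average of rule outputs.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_force(angle_err):
    """Map angle error (rad) to a control force via three rules:
    negative error -> push left, near zero -> no force, positive -> push right."""
    rules = [
        (tri(angle_err, -0.6, -0.3, 0.0), -10.0),  # error negative
        (tri(angle_err, -0.3,  0.0, 0.3),   0.0),  # error near zero
        (tri(angle_err,  0.0,  0.3, 0.6),  10.0),  # error positive
    ]
    num = sum(w * f for w, f in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

In a real controller the rule base would also take the angular velocity as an input; this single-input version only shows the membership/defuzzification mechanics.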


A vision-based system for long-distance remote monitoring of dynamic displacement: experimental verification on a supertall structure

  • Ni, Yi-Qing;Wang, You-Wu;Liao, Wei-Yang;Chen, Wei-Huan
    • Smart Structures and Systems / v.24 no.6 / pp.769-781 / 2019
  • Dynamic displacement response of civil structures is an important index for in-construction and in-service structural condition assessment. However, accurately measuring the displacement of large-scale civil structures such as high-rise buildings remains a challenging task. To cope with this problem, a vision-based system using an industrial digital camera and image processing has been developed for long-distance, remote, and real-time monitoring of the dynamic displacement of supertall structures. Instead of acquiring full image signals, the proposed system traces only the coordinates of the target points, enabling real-time monitoring and display of displacement responses at a relatively high sampling rate. This study addresses the in-situ experimental verification of the developed vision-based system on the 600 m high Canton Tower. To facilitate the verification, a GPS system is used to calibrate and verify the structural displacement responses measured by the vision-based system. Meanwhile, an accelerometer deployed in the vicinity of the target point provides frequency-domain information for comparison. Special attention has been given to understanding the influence of ambient light on the monitoring results. For this purpose, the experimental tests are conducted in daytime and nighttime by placing the vision-based system outside the tower (in a bright environment) and inside the tower (in a dark environment), respectively. The results indicate that the displacement response time histories monitored by the vision-based system not only match well with those acquired by the GPS receiver, but also have higher fidelity and are less noise-corrupted. In addition, the low-order modal frequencies of the building identified from the vision-based data agree well with those obtained from the accelerometer, the GPS receiver, and an elaborate finite element model. Notably, the vision-based system placed at the bottom of the enclosed elevator shaft offers better monitoring data than the system placed outside the tower. Based on a wavelet filtering technique, the displacement response time histories obtained by the vision-based system are decomposed into two parts: a quasi-static component resulting primarily from temperature variation and a dynamic component caused mainly by fluctuating wind load.
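The paper separates the displacement history into a quasi-static part and a dynamic part using a wavelet filter. As a stand-in for that step, the sketch below uses a centered moving average as the low-pass: the smoothed series approximates the slow (quasi-static) trend and the residual approximates the dynamic component. The window length is an arbitrary choice for illustration.

```python
# Decompose a displacement series into trend + residual with a centered
# moving average (a simple low-pass stand-in for the paper's wavelet filter).

def decompose(signal, window=5):
    half = window // 2
    trend = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        trend.append(sum(signal[lo:hi]) / (hi - lo))  # local mean
    dynamic = [x - t for x, t in zip(signal, trend)]  # residual
    return trend, dynamic
```

By construction `trend[i] + dynamic[i]` reconstructs the original sample exactly, mirroring the additive split described in the abstract.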

Manufacturing process monitoring and Rescheduling using RFID and Computer vision system (전자태그와 컴퓨터 비전 시스템을 이용한 생산 공정 감시와 재일정계획)

  • Kong J.H.;Han M.C.;Park J.W.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.06a / pp.153-156 / 2005
  • Real-time monitoring and control of a manufacturing process is important because of unexpected events. When an unexpected event such as a mechanical breakdown occurs, the prior plan becomes infeasible and a new schedule must be generated, even though the manufacturing schedule for the order has already been decided. Regenerating the whole schedule, however, costs considerable time and money. Thus an automated system that monitors and controls the manufacturing process is required. In this paper, we present a system that uses radio-frequency identification (RFID) and a computer vision system. The system collects real-time information about manufacturing conditions and quickly generates a new schedule from that information.
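The repair-instead-of-regenerate idea can be sketched as follows: when a machine failure is detected (e.g., via an RFID or vision status event), only that machine's remaining jobs are reassigned to the least-loaded working machine rather than rebuilding the whole schedule. Machine names, job durations, and the least-loaded heuristic are illustrative assumptions, not the paper's actual rescheduling algorithm.

```python
# Partial rescheduling sketch: move a failed machine's jobs to the
# least-loaded surviving machine, leaving other assignments untouched.

def reschedule(schedule, failed):
    """schedule: {machine: [job durations]}; returns a repaired schedule."""
    repaired = {m: jobs[:] for m, jobs in schedule.items() if m != failed}
    for job in schedule.get(failed, []):
        # Greedily pick the machine with the smallest current workload.
        target = min(repaired, key=lambda m: sum(repaired[m]))
        repaired[target].append(job)
    return repaired
```

The point is the locality of the repair: total work is conserved and unaffected machines keep their original job lists.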


Development of multi-object image processing algorithm in an image plane (한 이미지 평면에 있는 다물체 화상처리 기법 개발)

  • 장완식;윤현권;김재확
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2000.10a / pp.555-555 / 2000
  • This study focuses on the development of a high-speed multi-object image processing algorithm; based on this algorithm, a vision control scheme is developed for real-time robot position control. Recently, the use of vision systems in robot position control has been increasing rapidly. To apply a vision system to robot position control, the physical coordinates of an object must be transformed into the image information acquired by a CCD camera, a process called image processing. Thus, to control the robot's position in real time, the center point of each object in the image plane must be known. In particular, for a rigid body, the center points of multiple objects must be calculated in one image plane at the same time. To solve these problems, a multi-object algorithm for rigid-body control is developed.
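Finding the center points of several objects in one image plane can be sketched with connected-component labeling followed by a centroid per component. This toy version on a binary grid is an assumption about the general technique, not the paper's specific high-speed algorithm.

```python
# Label 4-connected components in a binary image with a flood fill,
# then return the (row, col) centroid of each component.

def object_centers(img):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for r in range(h):
        for c in range(w):
            if img[r][c] and not seen[r][c]:
                stack, pts = [(r, c)], []
                seen[r][c] = True
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    pts.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pts) / len(pts)
                cx = sum(p[1] for p in pts) / len(pts)
                centers.append((cy, cx))
    return centers
```

Each object yields one centroid, so all center points in the plane come out of a single pass, matching the "at the same time" requirement in the abstract.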


A study on real time inspection of OLED protective film using edge detecting algorithm (Edge Detecting Algorithm을 이용한 OLED 보호 필름의 Real Time Inspection에 대한 연구)

  • Han, Joo-Seok;Han, Bong-Seok;Han, Yu-Jin;Choi, Doo-Sun;Kim, Tae-Min;Ko, Kang-Ho;Park, Jung-Rae;Lim, Dong-Wook
    • Design & Manufacturing / v.14 no.2 / pp.14-20 / 2020
  • In the OLED panel production process, part of the protective film must be cut as a preprocess for the lighting inspection. The current method recognizes only the fiducial mark of the cut-out panel, and bare glass cutting does not compensate for cumulative machining tolerances. Process defects still occur because only the align mark of the already-cut panel is used as the alignment reference, so a technology to solve this problem needs to be developed. Many lighting defects arise during the panel lighting test because the protective film is not cut correctly at the positions of the panel's power and signal application pads. In the laser cutting process that removes the polarizing film, protective film, and TSP film of the OLED panel, laser processing should not be performed immediately after panel alignment based on the align mark alone. Therefore, in this paper, we performed a real-time inspection that minimizes the mechanical tolerance by correcting the laser cutting path of the protective film in real time using machine vision. We studied a calibration algorithm between the vision software coordinate system and the real image coordinate system to minimize inspection resolution and position detection error, and an edge detection algorithm to accurately measure the edge of the panel.
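The edge-detection step can be illustrated with the standard Sobel operator: the gradient magnitude peaks along an intensity step such as a film or panel edge. The 3x3 kernels are the usual Sobel kernels; the paper's actual edge algorithm is not specified, so this is a generic sketch.

```python
# Sobel gradient magnitude over a grayscale image (list of lists).
# The response is large where intensity changes sharply, i.e. at edges.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative kernel

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Thresholding this magnitude map and fitting a line to the surviving pixels is one common way to localize a straight panel edge to sub-pixel accuracy.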

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.7-13 / 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time object image detection algorithm, YOLO, and an object tracking algorithm based on the LiDAR point cloud are fused at the high level. The proposed algorithm consists of four parts. First, the object bounding boxes in the pixel coordinate frame, obtained from YOLO, are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
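The first fusion step, projecting a pixel point (e.g., the bottom center of a YOLO bounding box) into the vehicle's local frame with a 3x3 homography, amounts to a homogeneous matrix-vector product followed by normalization. The matrices below are made-up examples, not calibrated ones.

```python
# Apply a 3x3 homography H to a pixel point (px, py): multiply in
# homogeneous coordinates, then divide by the third component.

def apply_homography(H, px, py):
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w

# The identity homography leaves coordinates unchanged; a real H would be
# estimated offline from known pixel/ground point correspondences.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

The division by `w` is what distinguishes a homography from a plain affine transform and is what lets it model the perspective mapping between the image plane and the ground plane.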

Development of a SAD Correlater for Real-time Stereo Vision (실시간 스테레오 비젼 시스템을 위한 SAD 정합연산기 설계)

  • Yi, Jong-Su;Yang, Seung-Gu;Kim, Jun-Seong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.55-61 / 2008
  • A real-time three-dimensional vision system is a passive system that can support various applications, including collision avoidance and home networks. It is a good alternative to active systems, which are subject to interference in noisy environments. In this paper, we designed a SAD (sum of absolute differences) correlator with respect to resource usage for a real-time three-dimensional vision system. Regular structure, linear data flow, and abundant parallelism make the correlation algorithm a good candidate for reconfigurable hardware. We implemented two versions of the SAD correlator in HDL and synthesized them to determine resource requirements and performance. The experiments show that the SAD correlator fits into reconfigurable hardware at marginal cost and can handle about 30 frames/sec with 640×480 images.
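The matching the correlator implements in hardware can be stated in a few lines of software: for each disparity candidate, sum the absolute differences over a window around the pixel and keep the disparity with the smallest sum. One-dimensional scanlines keep the sketch small; the hardware does this per pixel over 2-D blocks, with the window and disparity range as design parameters.

```python
# Winner-take-all SAD matching on a pair of rectified 1-D scanlines.

def best_disparity(left, right, x, win=1, max_d=4):
    """Return the disparity d minimizing the SAD between the window
    around left[x] and the window around right[x - d]."""
    best, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d - win < 0 or x + win >= len(left):
            continue  # candidate window falls outside the scanline
        cost = sum(abs(left[x + k] - right[x - d + k]) for k in range(-win, win + 1))
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

Because the inner sum is independent per disparity and per pixel, the loop unrolls naturally into the parallel hardware structure the abstract describes.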

Design of Real-time Auto-Focusing System (실시간 자동 초점 조절 시스템의 설계)

  • Kim, Nam-Jin;Seo, Sam-Jun;Seo, Ho-Joon;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 1997.11a / pp.116-118 / 1997
  • The moving average filter in this paper, which is robust to noise and easily implementable in hardware, is modified for real-time processing of the focus value. Simple hardware configurations are implemented to calculate the focus value in real time. A stable controller for the motor-actuated focus lens is designed. The hardware, composed of an EPLD, inexpensive vision chips, a CPU, and other components, is designed to compute the focus value in real time.
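The two pieces the abstract combines can be sketched together: a focus value that rises with image sharpness, and a moving-average filter that smooths that value against noise. The gradient-energy focus metric below is a common choice assumed for illustration; the paper does not specify its metric.

```python
# Focus value: sum of squared neighbor differences along a scanline.
# Sharp (high-contrast) content yields a larger value than blurred content.

def focus_value(row):
    return sum((b - a) ** 2 for a, b in zip(row, row[1:]))

# Moving-average filter: smooth a sequence of focus values so the
# lens controller reacts to the trend rather than frame-to-frame noise.

def moving_average(values, window=3):
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out
```

An auto-focus loop would step the lens motor, feed each frame's focus value through the filter, and stop at the lens position where the smoothed value peaks.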


A Task Scheduling Strategy in a Multi-core Processor for Visual Object Tracking Systems (시각물체 추적 시스템을 위한 멀티코어 프로세서 기반 태스크 스케줄링 방법)

  • Lee, Minchae;Jang, Chulhoon;Sunwoo, Myoungho
    • Transactions of the Korean Society of Automotive Engineers / v.24 no.2 / pp.127-136 / 2016
  • Camera-based object detection systems must satisfy recognition performance as well as real-time constraints. Particularly in safety-critical systems such as Autonomous Emergency Braking (AEB), the real-time constraints significantly affect system performance. Recently, multi-core processors and system-on-chip technologies have been widely used to accelerate object detection algorithms by distributing computational loads. However, although the additional hardware improves real-time performance, it also increases the complexity of the system architecture. The increased complexity in turn makes it difficult to migrate existing algorithms and to develop new ones. In this paper, to improve real-time performance and reduce design complexity, a task scheduling strategy is proposed for visual object tracking systems. The real-time performance of the vision algorithm is increased by applying pipelining to task scheduling on a multi-core processor. Finally, the proposed task scheduling algorithm is applied to a crosswalk detection and tracking system to demonstrate the effectiveness of the proposed strategy.
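The pipelining idea can be sketched with two stages connected by a queue: while frame n is still being tracked, frame n+1 can already be processed by the detection stage, so throughput approaches the cost of the slowest stage rather than the sum of both. The stage bodies and names below are toy stand-ins, not the paper's crosswalk pipeline.

```python
# Two-stage pipeline: a detection stage and a tracking stage run in
# separate threads, decoupled by a FIFO queue. `None` marks end of stream.

import queue
import threading

def run_pipeline(frames):
    q, results = queue.Queue(), []

    def detect_stage():
        for f in frames:
            q.put(f * 2)        # stand-in for per-frame detection work
        q.put(None)             # end-of-stream marker

    def track_stage():
        while True:
            item = q.get()
            if item is None:
                break
            results.append(item + 1)  # stand-in for tracking update

    t1 = threading.Thread(target=detect_stage)
    t2 = threading.Thread(target=track_stage)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

On a multi-core processor the two stages genuinely overlap; a single FIFO between them also preserves frame order, which matters for tracking.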

Design and Implementation of S/W for a Davinci-based Smart Camera (다빈치 기반 스마트 카메라 S/W 설계 및 구현)

  • Yu, Hui-Jse;Chung, Sun-Tae;Jung, Souhwan
    • Proceedings of the Korea Contents Association Conference / 2008.05a / pp.116-120 / 2008
  • A smart camera provides intelligent vision functionality that can interpret captured video, extract context-aware information, and execute necessary actions in real time, in addition to the functionality of network cameras, which transmit compressed video over networks. Intelligent vision algorithms demand tremendous computation, so real-time processing of the vision algorithms together with the compression and transmission of video is too heavy a burden for a single CPU. The Davinci processor from Texas Instruments is a popular ASSP (Application-Specific Standard Product) with a dual-core architecture of an ARM core and a DSP core; it provides various I/O interfaces, as well as the networking and video acquisition interfaces necessary for developing embedded digital video applications. In this paper, we report the results of designing and implementing software for a Davinci-based smart camera. We implement face detection as an example vision application and verify that the implementation works well. In the future, for the development of a smart camera with broader real-time vision functionality, it will be necessary to study more efficient vision application software architectures and the optimization of vision algorithms on the DSP core of the Davinci processor.
