• Title/Summary/Keyword: Vision Processing

Vision-based technique for bolt-loosening detection in wind turbine tower

  • Park, Jae-Hyung; Huynh, Thanh-Canh; Choi, Sang-Hoon; Kim, Jeong-Tae
    • Wind and Structures / v.21 no.6 / pp.709-726 / 2015
  • In this study, a novel vision-based bolt-loosening monitoring technique is proposed for bolted joints connecting tubular steel segments of the wind turbine tower (WTT) structure. First, a bolt-loosening detection algorithm based on image processing techniques is developed. The algorithm consists of five steps: image acquisition, segmentation of each nut, line detection of each nut, nut angle estimation, and bolt-loosening detection. Second, experimental tests are conducted on a lab-scale bolted joint model under various bolt-loosening scenarios. The bolted joint model, which consists of a ring flange and 32 bolt-and-nut sets, is used to simulate the real bolted joint connecting steel tower segments in the WTT. Finally, the feasibility of the proposed vision-based technique is evaluated by monitoring bolt-loosening in the lab-scale bolted joint model.
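
A minimal sketch of the nut angle estimation step described above, assuming OpenCV and a pre-cropped grayscale image of a single nut. The Canny/Hough line detection stands in for the paper's segmentation and line-detection steps, and the 3-degree tolerance is an illustrative choice, not the authors' value.

```python
import cv2
import numpy as np

def estimate_nut_angle(nut_gray):
    """Estimate the dominant edge angle (degrees) of a cropped nut image."""
    edges = cv2.Canny(nut_gray, 50, 150)                     # edge map of the nut
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=15, maxLineGap=5)  # detect straight nut edges
    if lines is None:
        return None
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        # A hex nut's outline repeats every 60 degrees, so fold angles into [0, 60).
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 60)
    return float(np.median(angles))

def bolt_loosened(angle_before, angle_after, tol_deg=3.0):
    """Flag loosening when the nut rotation between inspections exceeds a tolerance."""
    rotation = abs(angle_after - angle_before)
    rotation = min(rotation, 60.0 - rotation)  # wrap within the 60-degree hex symmetry
    return rotation > tol_deg
```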

Visual Target Tracking and Relative Navigation for Unmanned Aerial Vehicles in a GPS-Denied Environment

  • Kim, Youngjoo; Jung, Wooyoung; Bang, Hyochoong
    • International Journal of Aeronautical and Space Sciences / v.15 no.3 / pp.258-266 / 2014
  • We present a system for the real-time visual relative navigation of a fixed-wing unmanned aerial vehicle in a GPS-denied environment. An extended Kalman filter is used to construct a vision-aided navigation system by fusing the image processing results with barometer and inertial sensor measurements. Using a mean-shift object tracking algorithm, an onboard vision system provides pixel measurements to the navigation filter. The filter is slightly modified to deal with delayed measurements from the vision system. The image processing algorithm and the navigation filter are verified by flight tests. The results show that the proposed aerial system is able to maintain circling around a target without using GPS data.
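
A minimal sketch of the onboard mean-shift tracking step, assuming OpenCV and an initial bounding box around the target. Histogram back-projection with cv2.meanShift is a standard formulation used here as a stand-in for the paper's tracker; the pixel center it returns is the kind of measurement that would be fed to the vision-aided navigation filter.

```python
import cv2
import numpy as np

def init_target_model(frame_bgr, box):
    """Build a hue histogram of the target region (x, y, w, h) for back-projection."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_target(frame_bgr, hist, window):
    """Shift the search window onto the target and return its pixel-center measurement."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.meanShift(backproj, window, criteria)
    x, y, w, h = window
    return (x + w / 2.0, y + h / 2.0), window   # pixel measurement for the navigation filter
```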

The Multipass Joint Tracking System by Vision Sensor (비전센서를 이용한 다층 용접선 추적 시스템)

  • Lee, Jeong-Ick; Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers / v.16 no.5 / pp.14-23 / 2007
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing an autonomous welding system is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only to correct pre-taught welding paths in a single pass. In this paper, however, multipass tracking rather than single-pass tracking is performed with both a conventional seam tracking algorithm and a newly developed one, and the tracking performances of the two algorithms are compared in multipass tracking. As a result, the conventional seam tracking algorithm shows tracking performance superior to the developed one in multi-pass welding.

Development of Auto Tracking Vision Control System for Video Conference (화상회의를 위한 자동추적 카메라 제어시스템 개발)

  • Han, Byung-Jo; Hwang, Chan-Gil; Hwang, Young-Ho; Yang, Hai-Won
    • Proceedings of the KIEE Conference / 2008.07a / pp.1712-1713 / 2008
  • In this paper, we develop an auto-tracking vision control system for video conferencing based on image processing techniques. The developed auto-tracking vision control system consists of control hardware including a vision unit, two DC motors, and DC motor drivers. Image processing techniques are used to compare the pixels of two images, and a motion detection algorithm is applied to eliminate the noise. Experimental results are presented to illustrate the effectiveness and applicability of the proposed approach.
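
A minimal frame-differencing sketch in the spirit of the pixel comparison described above, assuming OpenCV 4.x. The blur, threshold, and area values are illustrative, and the returned centroid is the kind of target a pan/tilt DC-motor controller would steer the camera toward.

```python
import cv2

def detect_motion_center(prev_gray, curr_gray, min_area=500):
    """Compare two frames pixel-wise and return the centroid of the largest moving blob."""
    diff = cv2.absdiff(prev_gray, curr_gray)                 # pixel-wise difference of two images
    diff = cv2.GaussianBlur(diff, (5, 5), 0)                 # suppress sensor noise
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None                                          # nothing moved enough
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])        # target for the pan/tilt motors
```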

Autonomous Tractor for Tillage Operation Using Machine Vision and Fuzzy Logic Control (기계시각과 퍼지 제어를 이용한 경운작업 트랙터의 자율주행)

  • 조성인; 최낙진; 강인성
    • Journal of Biosystems Engineering / v.25 no.1 / pp.55-62 / 2000
  • Autonomous farm operation needs to be developed for safety, the labor shortage problem, health, etc. In this research, an autonomous tractor for tillage was investigated using machine vision and a fuzzy logic controller (FLC). Tractor heading and offset were determined by image processing and a geomagnetic sensor. The FLC took the tractor heading and offset as inputs and generated the steering angle for tractor guidance as output. A color CCD camera was used for the image processing. The heading and offset were obtained using the Hough transform of the G-value color images. Fifteen fuzzy rules were used to infer the tractor steering angle. The tractor was tested in the field, and it was shown that the tillage operation could be performed autonomously within a 20 cm deviation using machine vision and the FLC.
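
A minimal sketch of a Mamdani-style fuzzy steering controller with heading and offset inputs, assuming triangular membership functions. The small rule base, the breakpoints, and the sign convention are illustrative stand-ins for the paper's 15-rule controller, not its actual parameters.

```python
def tri(x, a, b, c):
    """Triangular membership value of x for the fuzzy set peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(heading_deg, offset_m):
    """Infer a steering angle (deg) from tractor heading error and lateral offset."""
    # Fuzzify each input into negative / zero / positive sets (illustrative breakpoints).
    h = {"N": tri(heading_deg, -30, -15, 0), "Z": tri(heading_deg, -15, 0, 15),
         "P": tri(heading_deg, 0, 15, 30)}
    o = {"N": tri(offset_m, -1.0, -0.5, 0.0), "Z": tri(offset_m, -0.5, 0.0, 0.5),
         "P": tri(offset_m, 0.0, 0.5, 1.0)}
    # Small illustrative rule base (the paper uses 15 rules): steer against the error.
    rules = [
        (min(h["N"], o["N"]), +20.0),   # left of path, heading left  -> steer hard right
        (min(h["Z"], o["Z"]),   0.0),   # on track                    -> go straight
        (min(h["P"], o["P"]), -20.0),   # right of path, heading right -> steer hard left
        (min(h["N"], o["Z"]), +10.0),
        (min(h["P"], o["Z"]), -10.0),
    ]
    total = sum(w for w, _ in rules)
    # Weighted-average (singleton centroid) defuzzification.
    return sum(w * angle for w, angle in rules) / total if total > 1e-9 else 0.0

# Example: heading and offset both to the right of the path -> negative (leftward) steering.
print(fuzzy_steering(heading_deg=8.0, offset_m=0.4))
```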

Computer simulation for seam tracking algorithm using laser vision sensor in robotic welding (레이저 비전 센서를 이용한 용접선 추적에 관한 시뮬레이션)

  • Jung, Taik-Min; Sung, Ki-Eun; Rhee, Se-Hun
    • Laser Solutions / v.13 no.2 / pp.17-23 / 2010
  • It is very important to track a complicated weld seam for welding automation. Recently, the laser vision sensor has become a useful sensing tool for finding the seams. Until now, however, studies of welding automation using a laser vision sensor have focused on either image processing or feature recognition from the CCD camera. Even though it is possible to use a simple algorithm to track a simple seam, it is extremely difficult to develop a seam-tracking algorithm when the seam is more complex. To overcome these difficulties, this study introduces a simulation system for developing the seam tracking algorithm. The method was verified experimentally to reduce the time and effort needed to develop the seam tracking algorithm and to implement the sensing device.

High speed seam tracking system using vision sensor with multi-line laser (다중 레이저 선을 이용한 비전 센서를 통한 고속 용접선 추적 시스템)

  • 성기은; 이세헌
    • Proceedings of the KWS Conference / 2002.05a / pp.49-52 / 2002
  • A vision sensor measures range data using a laser light source. This type of sensor generally uses a patterned laser shaped as a single line, but such a sensor cannot satisfy the new trend toward faster and more precise processing. The sensor's sampling rate increases as the image processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range data can be measured in one image. For a camera with the same sampling rate, the number of 2D range data profiles per second is directly proportional to the number of laser lines. For example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.
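
A minimal sketch of how one camera frame can yield several range profiles when a multi-line laser is projected, assuming the stripes run roughly horizontally and that each stripe stays within its own image band (an assumption for illustration). The per-column peak search and the profiles-per-second arithmetic mirror the 5 lines x 30 fps = 150 profiles/s example above.

```python
import numpy as np

def extract_line_profiles(gray, n_lines):
    """Split the image into horizontal bands, one per laser line, and find the
    brightest row in every column of each band (a simple stripe-peak detector)."""
    h, w = gray.shape
    band_h = h // n_lines
    profiles = []
    for i in range(n_lines):
        band = gray[i * band_h:(i + 1) * band_h, :]
        peak_rows = band.argmax(axis=0) + i * band_h   # stripe position in each column
        profiles.append(peak_rows)
    return np.stack(profiles)                          # shape: (n_lines, image_width)

# One frame gives n_lines profiles, so profiles/s = n_lines * camera fps.
camera_fps, n_lines = 30, 5
print("profiles per second:", camera_fps * n_lines)    # 150, as in the abstract
```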

Feasibility in Grading the Burley Type Dried Tobacco Leaf Using Computer Vision (컴퓨터 시각을 이용한 버얼리종 건조 잎 담배의 등급판별 가능성)

  • 조한근; 백국현
    • Journal of Biosystems Engineering / v.22 no.1 / pp.30-40 / 1997
  • A computer vision system was built to automatically grade leaf tobacco. A color image processing algorithm was developed to extract shape, color, and texture features. An improved back propagation algorithm in an artificial neural network was applied to grade the Burley type dried leaf tobacco. The success rate of grading in the three-grade classification (1, 3, 5) was higher than that in the six-grade classification (1, 2, 3, 4, 5, off), averaged over both the twenty-five and the sixteen local pixel-sets. Also, the average grading success rate using both shape and color features was higher than the rate using shape, color, and texture features; thus, the texture feature obtained by the spatial gray level dependence method was found not to be important in grading leaf tobacco. Grading by the shape, color, and texture features obtained with the machine vision system seemed inadequate to replace manual grading of Burley type dried leaf tobacco.
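
A minimal numpy sketch of spatial gray level dependence (co-occurrence) texture features of the kind discussed above, using a horizontal offset of one pixel and a reduced number of gray levels. The contrast and homogeneity formulas are the standard GLCM definitions, not necessarily the exact feature set used in the paper.

```python
import numpy as np

def glcm_features(gray, levels=16):
    """Contrast and homogeneity from a horizontal (dx=1) gray level co-occurrence matrix."""
    q = (gray.astype(np.float64) * levels / 256.0).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()       # horizontally adjacent pixel pairs
    np.add.at(glcm, (left, right), 1.0)                     # count co-occurrences
    glcm /= glcm.sum()                                       # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)                   # high for coarse, busy texture
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))       # high for smooth regions
    return contrast, homogeneity
```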

A Hardware Implementation of Chain-coding Algorithm for Industrial Vision Systems (산업용 비젼시스템을 위한 하드웨어 체인코더의 설계)

  • Rhee, B.I.; Shin, Y.S.; Lim, J.; Bien, Z.
    • Proceedings of the KIEE Conference / 1987.07a / pp.265-269 / 1987
  • In an industrial vision system, a coding technique for binary images is essential to extract useful information. To reduce the processing time, a hardware implementation of the chain coding algorithm is attempted. For that purpose, the chain coding algorithm is modified so that it is more suitable for a hardware implementation. A hardwired chain coder is developed and tested with the developed vision system. The result shows that the processing time is greatly reduced and that the developed vision system may be feasible for real-time applications.
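
A software reference of the Freeman 8-direction chain coding that a hardware coder of this kind implements, assuming a binary numpy image containing a single 8-connected object. The Moore boundary-following loop and the simple return-to-start stopping test are a common textbook formulation (robust tracers use Jacob's stopping criterion), not the modified algorithm the authors designed for hardware.

```python
import numpy as np

# Freeman 8-direction codes: 0 = east, counted counter-clockwise.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(binary):
    """Trace the outer boundary of the object and return its Freeman chain code."""
    ys, xs = np.nonzero(binary)
    if ys.size == 0:
        return []                               # no object pixels
    start = (int(ys[0]), int(xs[0]))            # top-most, left-most object pixel
    code, current, prev_dir = [], start, 0
    while True:
        # Sweep the 8 neighbours, starting just behind the direction we arrived from.
        for k in range(8):
            d = (prev_dir + 6 + k) % 8
            ny, nx = current[0] + DIRS[d][0], current[1] + DIRS[d][1]
            if 0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1] and binary[ny, nx]:
                code.append(d)
                current, prev_dir = (ny, nx), d
                break
        else:
            return code                          # isolated single pixel, no boundary to follow
        if current == start:
            return code                          # boundary closed
```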

From Masked Reconstructions to Disease Diagnostics: A Vision Transformer Approach for Fundus Images (마스크된 복원에서 질병 진단까지: 안저 영상을 위한 비전 트랜스포머 접근법)

  • Toan Duc Nguyen; Gyurin Byun; Hyunseung Choo
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.557-560 / 2023
  • In this paper, we introduce a pre-training method leveraging the capabilities of the Vision Transformer (ViT) for disease diagnosis in conventional fundus images. Recognizing the need for effective representation learning in medical images, our method combines the Vision Transformer with a Masked Autoencoder to generate meaningful and pertinent image augmentations. During pre-training, the Masked Autoencoder produces an altered version of the original image, which serves as a positive pair. The Vision Transformer then employs contrastive learning techniques with this image pair to refine its weight parameters. Our experiments demonstrate that this dual-model approach harnesses the strengths of both the ViT and the Masked Autoencoder, resulting in robust and clinically relevant feature embeddings. Preliminary results suggest significant improvements in diagnostic accuracy, underscoring the potential of our methodology in enhancing automated disease diagnosis in fundus imaging.
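
A minimal PyTorch sketch of the pre-training objective described above, assuming `vit` and `mae` are externally defined modules (hypothetical names) where `vit` maps a batch of images to embeddings and `mae` returns masked reconstructions. Treating the reconstruction as the positive view and applying a symmetric InfoNCE-style loss is one plausible reading of the abstract, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def contrastive_pretrain_step(vit, mae, images, temperature=0.1):
    """One pre-training step: the MAE's masked reconstruction is the positive pair
    of the original fundus image, and the ViT is trained to embed the two views alike."""
    with torch.no_grad():
        recon = mae(images)                       # altered (reconstructed) view, not backpropagated
    z1 = F.normalize(vit(images), dim=1)          # embedding of the original image
    z2 = F.normalize(vit(recon), dim=1)           # embedding of the reconstructed view
    logits = z1 @ z2.t() / temperature            # pairwise cosine similarities
    targets = torch.arange(images.size(0), device=images.device)
    # Symmetric InfoNCE: each image should match its own reconstruction, and vice versa.
    loss = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
    return loss
```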