• Title/Summary/Keyword: Vision sensor

Search results: 822

Development of a Vision System for the Measurement of the Pendulum Test (진자검사 계측을 위한 영상 시스템의 개발)

  • Kim, Chul-Seung;Moon, Ki-Wook;Lee, Soo-Young;Eom, Gwang-Moon
    • The Transactions of The Korean Institute of Electrical Engineers / v.56 no.4 / pp.817-819 / 2007
  • The purpose of this work is to develop a measurement system for the pendulum test with minimal restriction on the experimental environment and little influence from noise. In this work, we developed a vision system without any line between the markers and the camera. The system performance is little affected by the experimental environment as long as lighting is sufficient for the markers to be recognized. For validation of the system, we compared knee joint angle trajectories measured by the developed system and by a magnetic sensor system during the nominal pendulum test and during maximum-speed voluntary knee joint rotation. The joint angle trajectories of the developed system matched those of the magnetic system well in both tests. We therefore suggest the vision system as an alternative to previous systems, whose practicality for the pendulum test is limited.
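
As a rough illustration of the kind of computation such a marker-based system performs, the sketch below estimates a knee joint angle from three 2-D marker positions (hip, knee, ankle) as the angle between the thigh and shank vectors. The marker set, the two-segment leg model, and the example coordinates are assumptions for illustration only, not the paper's actual processing pipeline.

```python
import numpy as np

def knee_angle(hip, knee, ankle):
    """Angle (degrees) between the thigh (knee->hip) and shank (knee->ankle) vectors."""
    hip, knee, ankle = map(np.asarray, (hip, knee, ankle))
    thigh = hip - knee
    shank = ankle - knee
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Illustrative marker positions (pixels) from one hypothetical video frame.
print(knee_angle(hip=(120, 40), knee=(130, 160), ankle=(180, 270)))
```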

Application of the Vision Sensor for Weld Seam Tracking System in Large Vessel Fabrication

  • Park, Sang-Gu;Lee, Jee-Hyung;Ryu, Sang-Hyun
    • Institute of Control, Robotics and Systems (제어로봇시스템학회) Conference Proceedings / 2002.10a / pp.56.3-56 / 2002
  • For weld quality improvement and convenient machine operation, a laser vision system can be used to track weld seams on pressure vessels. Weld grooves are subject to many adverse conditions, such as cutting errors, gap variation of the weld joint, and offset of the center line caused by misalignment. We developed a laser vision seam tracking system consisting of a laser vision sensor, a two-axis positioning mechanism, and a user interface program running on Windows (a minimal sketch of the laser-stripe groove detection idea follows this entry). The system was found to work well for U-, V-, and X-shaped grooves. An industrial PC was used as the system controller to secure immunity against electrical noise and dust. We introduce here a simple and...

  • PDF
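
A minimal sketch of the laser-stripe groove detection idea referenced above, assuming a simple setup in which the laser line appears as the brightest pixel in each image column and the groove center is taken as the point of largest deviation from a straight baseline. The synthetic image, the baseline heuristic, and the function names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def stripe_profile(image):
    """Row index of the brightest pixel in each column: the laser line profile."""
    return np.argmax(image, axis=0)

def groove_center(profile):
    """For a V-groove, take the column where the stripe deviates most from a
    straight baseline drawn through the end points of the profile."""
    cols = np.arange(len(profile))
    baseline = np.interp(cols, [0, len(profile) - 1], [profile[0], profile[-1]])
    return int(np.argmax(np.abs(profile - baseline)))

# Synthetic (illustrative) example: a V-shaped stripe in a 100x200 image.
img = np.zeros((100, 200))
cols = np.arange(200)
rows = (40 + 25 * np.abs(cols - 120) / 120).astype(int)  # V with vertex at column 120
img[rows, cols] = 1.0
print(groove_center(stripe_profile(img)))  # -> near column 120
```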

The Multipass Joint Tracking System by Vision Sensor (비전센서를 이용한 다층 용접선 추적 시스템)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers / v.16 no.5 / pp.14-23 / 2007
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing an autonomous welding system is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only to correct pre-taught welding paths in a single pass. In this paper, however, multipass tracking, rather than single-pass tracking only, is performed with both a conventional seam tracking algorithm and a newly developed one, and the tracking performances of the two algorithms are compared in multipass tracking. The results show the conventional seam tracking algorithm to be superior to the developed one in multipass welding.

Autonomous Tractor for Tillage Operation Using Machine Vision and Fuzzy Logic Control (기계시각과 퍼지 제어를 이용한 경운작업 트랙터의 자율주행)

  • 조성인;최낙진;강인성
    • Journal of Biosystems Engineering / v.25 no.1 / pp.55-62 / 2000
  • Autonomous farm operation needs to be developed to address safety, labor shortages, operator health, and so on. In this research, an autonomous tractor for tillage was investigated using machine vision and a fuzzy logic controller (FLC). Tractor heading and offset were determined by image processing and a geomagnetic sensor. The FLC took the tractor heading and offset as inputs and generated the steering angle for tractor guidance as output (see the illustrative sketch after this entry). A color CCD camera was used for the image processing. The heading and offset were obtained using the Hough transform of the G-channel color images. Fifteen fuzzy rules were used to infer the tractor steering angle. The tractor was tested in the field, and it was shown that the tillage operation could be performed autonomously within a 20 cm deviation using the machine vision and the FLC.

  • PDF
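
A minimal sketch of a fuzzy steering controller of the kind described above: heading error and lateral offset go in, a steering angle comes out. The triangular membership functions, the nine-rule base, and the singleton (Sugeno-style) defuzzification are illustrative assumptions; they are not the paper's fifteen rules or membership definitions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Illustrative fuzzy sets for heading error (deg) and lateral offset (cm).
HEADING = {"neg": (-30, -15, 0), "zero": (-10, 0, 10), "pos": (0, 15, 30)}
OFFSET  = {"left": (-60, -30, 0), "center": (-20, 0, 20), "right": (0, 30, 60)}
# Consequent steering angles (deg) used as singleton outputs.
STEER   = {"hard_left": -20, "left": -10, "straight": 0, "right": 10, "hard_right": 20}

# A small illustrative rule base (antecedent pair -> consequent label).
RULES = {
    ("neg", "left"): "hard_right", ("neg", "center"): "right",    ("neg", "right"): "straight",
    ("zero", "left"): "right",     ("zero", "center"): "straight", ("zero", "right"): "left",
    ("pos", "left"): "straight",   ("pos", "center"): "left",      ("pos", "right"): "hard_left",
}

def steering_angle(heading_err, offset):
    """Weighted-average defuzzification over the fired rules."""
    num = den = 0.0
    for (h_lbl, o_lbl), s_lbl in RULES.items():
        w = min(tri(heading_err, *HEADING[h_lbl]), tri(offset, *OFFSET[o_lbl]))
        num += w * STEER[s_lbl]
        den += w
    return num / den if den > 0 else 0.0

print(steering_angle(heading_err=5.0, offset=-15.0))
```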

Fundamental research of the target tracking system using a CMOS vision chip for edge detection (윤곽 검출용 CMOS 시각칩을 이용한 물체 추적 시스템 요소 기술 연구)

  • Hyun, Hyo-Young;Kong, Jae-Sung;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology / v.18 no.3 / pp.190-196 / 2009
  • In a conventional camera setup, a target tracking system consists of a camera part and an image processing part. In the field of real-time image processing, however, a vision chip for edge detection, built by imitating the processing of the human retina, is superior to conventional digital image processing systems, because the retina processes information in parallel. In this paper, we present a high-speed target tracking system that uses the edge-detection function of the CMOS vision chip.
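
As a rough illustration of tracking on top of a binary edge image such as an edge-detection vision chip would provide, the sketch below updates the target position as the centroid of edge pixels inside a search window around the previous position. The synthetic edge frames and the windowed-centroid rule are assumptions for illustration, not the chip's or the paper's actual tracking logic.

```python
import numpy as np

def track_centroid(edge_img, prev, win=15):
    """Update the target position as the centroid of edge pixels inside a
    search window centered on the previous position."""
    r0, c0 = prev
    r_lo, r_hi = max(r0 - win, 0), min(r0 + win + 1, edge_img.shape[0])
    c_lo, c_hi = max(c0 - win, 0), min(c0 + win + 1, edge_img.shape[1])
    rows, cols = np.nonzero(edge_img[r_lo:r_hi, c_lo:c_hi])
    if rows.size == 0:                        # target lost: keep the last position
        return prev
    return int(rows.mean()) + r_lo, int(cols.mean()) + c_lo

# Synthetic (illustrative) example: an edge "blob" moving across frames.
pos = (50, 50)
for shift in range(0, 30, 5):
    frame = np.zeros((100, 100), dtype=np.uint8)
    frame[48:53, 48 + shift:53 + shift] = 1   # edges of the moving target
    pos = track_centroid(frame, pos)
print(pos)  # the estimate follows the blob toward the right
```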

A study on vision seam tracking system at lap joints (겹치기이음에서 용접선 시각 추적 시스템에 관한 연구)

  • 신정식;김재웅;나석주;최칠룡
    • Journal of Welding and Joining / v.9 no.2 / pp.20-28 / 1991
  • The main subject of this study is the construction of an automatic welding system capable of tracing the weld seam in GMA welding of lap joints. The system was composed of a vision sensor, a moving torch, and a personal computer (IBM PC). In the developed vision sensor, an image was captured by the frame grabber at the instant of short circuit during welding. The threshold method was adopted for extracting the structured light, and the central difference method for detecting the weld joint (a minimal sketch of these two steps follows this entry). Seam tracking of the torch was then performed using the data regeneration algorithm. Because this system uses images taken at the instant of short circuit, weld seam tracking was performed independently of arc light and spatter.

  • PDF
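
A minimal sketch of the two steps referenced above under simplifying assumptions: the structured-light stripe is extracted by intensity thresholding, and the lap-joint position is taken as the column where the central difference of the stripe profile is largest (the sharpest height step between the two plates). The synthetic step-shaped stripe and the helper names are illustrative, not the paper's implementation.

```python
import numpy as np

def stripe_rows(image, threshold=0.5):
    """Extract the structured-light stripe: for each column, the mean row index
    of pixels above the intensity threshold."""
    mask = image > threshold
    rows = np.arange(image.shape[0])[:, None] * mask
    counts = mask.sum(axis=0)
    return np.where(counts > 0, rows.sum(axis=0) / np.maximum(counts, 1), np.nan)

def joint_column(profile):
    """Locate the lap-joint step as the column with the largest central difference
    (the sharpest height change) in the stripe profile."""
    d = np.abs(profile[2:] - profile[:-2]) / 2.0       # central difference
    return int(np.nanargmax(d)) + 1

# Synthetic (illustrative) example: stripe with a step near column 80.
img = np.zeros((100, 160))
cols = np.arange(160)
rows = np.where(cols < 80, 30, 45)                     # step between the two plates
img[rows, cols] = 1.0
print(joint_column(stripe_rows(img)))                  # -> near column 80
```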

Development of Intelligent Rain Sensing Algorithm for Vision-based Smart Wiper System (비전 기반 스마트 와이퍼 시스템을 위한 지능형 레인 센싱 알고리즘 개발)

  • Lee, Kyung-Chang;Kim, Man-Ho;Lee, Seok
    • Journal of Institute of Control, Robotics and Systems / v.10 no.7 / pp.649-657 / 2004
  • A windshield wiper system plays a key role in ensuring driver safety during rainfall. However, because the quantity of rain or snow varies irregularly with time and vehicle speed, with a traditional windshield wiper system the driver must adjust the wiper speed and operation period from time to time in order to maintain a sufficient field of view. Because manual wiper operation distracts the driver and leads to inattentive driving, it can be a direct cause of traffic accidents. This paper therefore presents the basic architecture of a vision-based smart wiper system and a rain sensing algorithm that regulates wiper speed and interval automatically according to the quantity of rain or snow. The paper also introduces a fuzzy wiper control algorithm based on human expertise and evaluates the performance of the suggested algorithm on a simulator model. In particular, the vision sensor can observe a relatively wider area than an optical rain sensor and therefore assesses the rainfall state more accurately when disturbances occur.
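
As a rough illustration of vision-based rain sensing, the sketch below estimates a rain level from the fraction of windshield-ROI pixels that change between consecutive frames and maps it to a wiper command with simple thresholds. The frame-differencing heuristic, the thresholds, and the mode names are assumptions for illustration; the paper's sensing algorithm and its fuzzy controller are not reproduced here.

```python
import numpy as np

def rain_level(prev_frame, curr_frame, roi, diff_thresh=25):
    """Estimate rain intensity as the fraction of ROI pixels that changed between
    consecutive frames (raindrops on the windshield appear as local changes)."""
    r0, r1, c0, c1 = roi
    diff = np.abs(curr_frame[r0:r1, c0:c1].astype(int) -
                  prev_frame[r0:r1, c0:c1].astype(int))
    return float((diff > diff_thresh).mean())

def wiper_mode(level):
    """Map the estimated rain level to a wiper command (illustrative thresholds)."""
    if level < 0.01:
        return "off"
    if level < 0.05:
        return "intermittent"
    if level < 0.15:
        return "low"
    return "high"

# Synthetic (illustrative) example: a frame pair where roughly 8% of the ROI changes.
rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (240, 320), dtype=np.uint8)
curr = prev.copy()
curr[rng.random((240, 320)) < 0.08] = 255
print(wiper_mode(rain_level(prev, curr, roi=(0, 240, 0, 320))))
```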

Augmented Feature Point Initialization Method for Vision/Lidar Aided 6-DoF Bearing-Only Inertial SLAM

  • Yun, Sukchang;Lee, Byoungjin;Kim, Yeon-Jo;Lee, Young Jae;Sung, Sangkyung
    • Journal of Electrical Engineering and Technology / v.11 no.6 / pp.1846-1856 / 2016
  • This study proposes a novel feature point initialization method that improves the accuracy of feature point positions by fusing a vision sensor and a lidar. Initialization is the process that determines the three-dimensional positions of feature points from two-dimensional image data, and it has a direct influence on the performance of 6-DoF bearing-only SLAM. Prior to the initialization, an extrinsic calibration method that estimates the rotational and translational relationship between the vision sensor and the lidar using multiple calibration tools was employed; the feature point initialization method based on the estimated extrinsic calibration parameters was then presented. In this process, to improve the accuracy of the initialized feature points, an iterative automatic scaling parameter tuning technique was presented. The validity of the proposed feature point initialization method was verified in a 6-DoF bearing-only SLAM framework through indoor and outdoor tests that compare estimation performance with the previous initialization method.
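
A minimal sketch of the initialization idea under a pinhole-camera assumption: a feature's pixel is back-projected to a ray, and the ray is scaled by the depth of the matching lidar return after transforming it into the camera frame with assumed extrinsic parameters. The intrinsics, extrinsics, and example values are illustrative, not the calibration results or method of the paper.

```python
import numpy as np

# Assumed pinhole intrinsics and lidar-to-camera extrinsics; illustrative values only.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R_cl = np.eye(3)                      # rotation: lidar frame -> camera frame
t_cl = np.array([0.05, 0.0, 0.10])    # translation (m): lidar frame -> camera frame

def lidar_point_depth(p_lidar):
    """Transform a lidar point into the camera frame and return its z (depth)."""
    p_cam = R_cl @ p_lidar + t_cl
    return p_cam[2]

def init_feature(pixel, depth_camera_z):
    """Back-project a pixel to a 3-D point in the camera frame, given the depth
    along the camera z-axis recovered from the (calibrated) lidar."""
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray = np.linalg.solve(K, uv1)          # normalized ray direction with z = 1
    return ray * depth_camera_z            # scale the ray by the measured depth

# Example: a feature at pixel (400, 300) whose matching lidar return is ~4 m ahead.
depth = lidar_point_depth(np.array([0.6, 0.4, 4.0]))
print(init_feature((400, 300), depth))
```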

Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems / v.20 no.11 / pp.1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained using a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information obtained from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as a map. The experimental results show that the position of a small unmanned aircraft is estimated properly and that the map of the environment is constructed using the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
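
As a rough illustration of using a Gaussian process as a map, the sketch below performs plain GP regression: sparse observations of some map quantity (for example, an occupancy or color statistic) at estimated positions are interpolated over a query grid with a squared-exponential kernel. The kernel, noise level, and data are assumptions for illustration, not the paper's model.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, sigma=1.0):
    """Squared-exponential kernel between two sets of 2-D positions."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_query, noise=1e-2):
    """Gaussian process regression: posterior mean at the query positions."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Illustrative "map" values observed at a few estimated positions.
X_obs = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [3.0, 0.0]])
y_obs = np.array([0.1, 0.8, 0.9, 0.2])
X_grid = np.array([[x, y] for x in np.linspace(0, 3, 4) for y in np.linspace(0, 1, 3)])
print(gp_predict(X_obs, y_obs, X_grid).round(2))
```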

High Speed Self-Adaptive Algorithms for Implementation in a 3-D Vision Sensor (3-D 비젼센서를 위한 고속 자동선택 알고리즘)

  • Miche, Pierre;Bensrhair, Abdelaziz;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.6 no.2 / pp.123-130 / 1997
  • In this paper, we present an original stereo vision system that comprises two processes: 1. an image segmentation algorithm based on a new concept called declivity, using automatic thresholds; 2. a new stereo matching algorithm based on an optimal path search. This path is obtained by a dynamic programming method that uses the threshold values calculated during the segmentation process (a minimal sketch of scanline matching by dynamic programming follows this entry). At present, a complete depth map of an indoor scene needs only about 3 s on a Sun IPX workstation, and this time will be reduced to a few tenths of a second on a specialized architecture based on several DSPs, which is currently under consideration.

  • PDF
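
A minimal sketch of stereo matching by dynamic programming along a single scanline, as referenced above: each pixel is assigned a disparity that minimizes an intensity matching cost plus a penalty on disparity jumps between neighbours. The cost, penalty, and synthetic data are illustrative assumptions and do not use the paper's declivity-based thresholds.

```python
import numpy as np

def scanline_disparity(left, right, max_disp=8, smooth=2.0):
    """Dynamic programming along one scanline: choose a disparity per pixel that
    minimizes matching cost plus a penalty on disparity changes between neighbours."""
    n = len(left)
    disps = np.arange(max_disp + 1)
    # Matching cost: absolute intensity difference; large where x - d is out of range.
    match = np.full((n, max_disp + 1), 1e6)
    for x in range(n):
        valid = disps <= x
        match[x, valid] = np.abs(left[x] - right[x - disps[valid]])
    # Forward pass: accumulate the best total cost for each (pixel, disparity).
    total = match.copy()
    back = np.zeros((n, max_disp + 1), dtype=int)
    for x in range(1, n):
        trans = total[x - 1][:, None] + smooth * np.abs(disps[:, None] - disps[None, :])
        back[x] = trans.argmin(axis=0)
        total[x] += trans.min(axis=0)
    # Backtrack the optimal disparity sequence.
    d = np.zeros(n, dtype=int)
    d[-1] = total[-1].argmin()
    for x in range(n - 1, 0, -1):
        d[x - 1] = back[x, d[x]]
    return d

# Synthetic scanline pair (illustrative): the left line is the right line shifted by 3 px.
rng = np.random.default_rng(1)
right_line = rng.random(40)
left_line = np.roll(right_line, 3)
print(scanline_disparity(left_line, right_line))  # mostly 3, except near the left border
```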