• Title/Abstract/Keywords: line CCD sensor

Search results: 39 items (processing time: 0.111 s)

비전센서를 이용한 자동추적장치에 관한 연구 (A Study on Automatic Seam Tracking using Vision Sensor)

  • 전진환;조택동;양상민
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 1995년도 추계학술대회 논문집
    • /
    • pp.1105-1109
    • /
    • 1995
  • A CCD camera incorporated into a vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained using a laser slit beam. An adaptive Hough transformation was used to extract the laser stripe and to obtain the welding-specific point. Although the basic Hough transformation takes too much time for on-line image processing, it tends to be robust against noise such as spatter. For that reason, it was complemented with the adaptive Hough transformation to provide an on-line processing capability for locating the welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line, hence the image processing time is reduced. A fuzzy controller is adopted to control the camera angle.

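A rough Python/OpenCV sketch of the processing loop this abstract describes: threshold the laser stripe, then run a (probabilistic) Hough transform over a search window narrowed around the previous detection so the step stays fast enough for on-line tracking. The function name, threshold values, and the window-narrowing heuristic are assumptions of this illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def find_weld_point(frame, prev_point=None, roi_half_width=60):
    """Locate the welding-specific point from a laser-stripe image.

    Illustrative only: the 'adaptive' step here is simply restricting the
    Hough search to a window around the previous detection, one way to keep
    the transform fast enough for on-line use.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Keep only the bright laser stripe; the threshold value is illustrative.
    _, stripe = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

    if prev_point is not None:
        lo = max(prev_point[0] - roi_half_width, 0)
        hi = min(prev_point[0] + roi_half_width, stripe.shape[1])
        mask = np.zeros_like(stripe)
        mask[:, lo:hi] = 255
        stripe = cv2.bitwise_and(stripe, mask)

    # Probabilistic Hough transform on the stripe pixels.
    lines = cv2.HoughLinesP(stripe, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return prev_point
    # Stand-in for the welding-specific point: the midpoint of the first
    # detected stripe segment.
    xa, ya, xb, yb = lines[0][0]
    return (int((xa + xb) / 2), int((ya + yb) / 2))
```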

I형 맞대기 용접선 추적용 시각센서 시스템에 관한 연구 (A Study on the Vision Sensor System for Tracking the I-Butt Weld Joints)

  • 배희수;김재웅
    • 한국정밀공학회지
    • /
    • Vol. 18, No. 9
    • /
    • pp.179-185
    • /
    • 2001
  • In this study, a visual sensor system for tracking the weld seam of I-butt weld joints in GMA welding was constructed. The sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation caused by spatter and arc light. In order to obtain an enhanced image, the quantitative relationship between laser intensity and iris number was investigated. Through repeated experiments, the shutter speed was set at 1/1000 second to minimize the effect of spatter on the image, and as a result most of the spatter traces in the image were reduced. A region of interest was defined within the entire image, and the gray level of the detected laser stripe was compared with that of the weld line. The difference between these gray levels makes it possible to locate the weld joint using the central difference method. The results showed that, as long as the weld line was within $\pm15^{\circ}$ of the longitudinal straight line, the system constructed in this study could track the weld line successfully. Since the processing time was reduced to 0.05 s, it is expected that the developed method could be adopted for high-speed welding such as laser welding.

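The central difference step described above can be illustrated with a minimal sketch: sample the gray level along the detected laser stripe, approximate its derivative by central differences, and take the steepest falling and rising slopes as the edges of the I-butt gap. The names and the synthetic profile are assumptions of this sketch.

```python
import numpy as np

def locate_joint(profile):
    """Locate the I-butt gap in a gray-level profile sampled along the
    laser stripe, using central differences.

    Sketch only: the gap shows up as a dark dip, so the central difference
    d[i] = (p[i+1] - p[i-1]) / 2 marks its edges as the strongest falling
    and rising slopes.
    """
    p = np.asarray(profile, dtype=float)
    d = np.zeros_like(p)
    d[1:-1] = (p[2:] - p[:-2]) / 2.0      # central difference
    left_edge = int(np.argmin(d))         # steepest falling gray level
    right_edge = int(np.argmax(d))        # steepest rising gray level
    return (left_edge + right_edge) // 2  # joint centre (pixel index)

# Synthetic profile with a dark gap around index 50
profile = np.full(100, 180.0)
profile[47:54] = 60.0
print(locate_joint(profile))  # prints 49, near the gap centre
```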

A Study on a Vision Sensor System for Tracking the I-Butt Weld Joints

  • Kim Jae-Woong;Bae Hee-Soo
    • Journal of Mechanical Science and Technology
    • /
    • Vol. 19, No. 10
    • /
    • pp.1856-1863
    • /
    • 2005
  • In this study, a visual sensor system for tracking the weld seam of I-butt weld joints in GMA welding was constructed. The sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation caused by spatter and arc light. In order to obtain an enhanced image, the quantitative relationship between laser intensity and iris opening was investigated. Through repeated experiments, the shutter speed was set at 1/1000 second to minimize the effect of spatter on the image, so that an image without spatter traces could be obtained. A region of interest was defined within the entire image, and the gray level of the detected laser stripe was compared with that of the weld line. The difference between these gray levels makes it possible to locate the weld joint using the central difference method. The results showed that, as long as the weld line is within $\pm15^{\circ}$ of the longitudinal straight line, the system constructed in this study could track the weld line successfully. Since the processing time is no longer than 0.05 s, it is expected that the developed method could be adopted for high-speed welding such as laser welding.

비전 센서를 이용한 AGV의 주행정보 획득에 관한 연구 (A Study for Detecting AGV Driving Information using Vision Sensor)

  • 이진우;손주한;최성욱;이영진;이권순
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2000년도 하계학술대회 논문집 D
    • /
    • pp.2575-2577
    • /
    • 2000
  • We carried out AGV driving tests with a color CCD camera mounted on the vehicle. This paper can be divided into two parts. One is the image processing part, which measures the state of the guide line and the AGV. The other obtains the reference steering angle from the results of the image processing. First, the 2-D image information derived from the vision sensor is converted into 3-D information using the angle and position of the CCD camera. Through these processes, the AGV determines its driving conditions. Then, using this information, the AGV calculates the reference steering angle, which changes with the AGV speed. At low speed, it weights the left/right error of the guide line; as the speed increases, it weights the slope of the guide line. Lastly, we model the above scheme as a PID controller and adjust its coefficients according to the AGV speed.

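A hedged sketch of the steering scheme outlined above: the error fed to the controller shifts from the guide line's lateral offset at low speed toward its slope at high speed, and the PID gains are adjusted with speed. The blending law, gain values, and `v_max` are invented purely for illustration.

```python
class SpeedScheduledPID:
    """PID steering controller whose error signal shifts from the guide
    line's lateral offset (low speed) toward its slope (high speed), with
    gains adjusted by speed, as outlined in the abstract above.

    The blending law, gain values and v_max are illustrative assumptions.
    """

    def __init__(self, kp=1.2, ki=0.05, kd=0.3, v_max=2.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.v_max = v_max
        self._integral = 0.0
        self._prev_err = 0.0

    def step(self, lateral_err, slope_err, speed, dt):
        w = min(speed / self.v_max, 1.0)              # weight on the slope term
        err = (1.0 - w) * lateral_err + w * slope_err
        self._integral += err * dt
        deriv = (err - self._prev_err) / dt
        self._prev_err = err
        # Soften the proportional action at high speed (one plausible schedule).
        kp = self.kp / (1.0 + speed / self.v_max)
        return kp * err + self.ki * self._integral + self.kd * deriv
```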

진화전략 알고리즘을 이용한 AGV 조향제어에 관한 연구 (A Study for AGV Steering Control using Evolution Strategy)

  • 이진우;손주한;최성욱;이영진;이권순
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2000년도 제15차 학술회의논문집
    • /
    • pp.149-149
    • /
    • 2000
  • We carried out AGV driving tests with a color CCD camera mounted on the vehicle. This paper can be divided into two parts. One is the image processing part, which measures the state of the guide line and the AGV. The other obtains the reference steering angle from the results of the image processing. First, the 2-D image information derived from the vision sensor is converted into 3-D information using the angle and position of the CCD camera. Through these processes, the AGV determines its driving conditions. Then, using this information, the AGV calculates the reference steering angle, which changes with the AGV speed. At low speed, it weights the left/right error of the guide line; as the speed increases, it weights the slope of the guide line. Lastly, we model the above scheme as a PID controller and adjust its coefficients according to the AGV speed.


3D Map Building of The Mobile Robot Using Structured Light

  • Lee, Oon-Kyu;Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2001년도 ICCAS
    • /
    • pp.123.1-123
    • /
    • 2001
  • For autonomous navigation of mobile robots, the robots' capability to recognize the 3D environment is necessary. In this paper, an on-line 3D map building method for autonomous mobile robots is proposed. To obtain range data on the environment, we use a sensor system composed of a structured light source and a CCD camera based on optimal triangulation. The structured laser is projected as a horizontal stripe onto the scene. The sensor system can rotate $\pm30^{\circ}$ with a goniometer. By scanning the system, we obtain laser stripe images of the environment and update the planes composing the environment through several image processing steps. From the laser stripe in the captured image, we find a center point in each column and make line segments by blobbing these center points. Then, the planes of the environment are updated. These steps are done on-line during the scanning phase. With the proposed method, we can efficiently obtain a 3D map of the structured environment.

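The per-column stripe-center step of the map-building pipeline above can be sketched as follows; the intensity threshold, the sub-pixel centroid rule, and the NaN convention for empty columns are assumptions of this illustration. The resulting centers would then be grouped (blobbed) into line segments and triangulated into range points.

```python
import numpy as np

def stripe_centers(image, intensity_threshold=200):
    """Return, for each image column, the row of the laser-stripe centre
    (intensity-weighted centroid of pixels above a threshold).

    Sketch of the first processing step only; the threshold and the NaN
    convention for columns without a stripe are assumptions.
    """
    img = np.asarray(image, dtype=float)
    rows = np.arange(img.shape[0])[:, None]       # column vector of row indices
    weights = np.where(img >= intensity_threshold, img, 0.0)
    col_sum = weights.sum(axis=0)
    centers = np.full(img.shape[1], np.nan)
    valid = col_sum > 0
    centers[valid] = (rows * weights).sum(axis=0)[valid] / col_sum[valid]
    return centers   # one sub-pixel row per column, NaN where no stripe was seen
```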

3D Map Building of the Mobile Robot Using Structured Light

  • Lee, Oon-Kyu;Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2001년도 ICCAS
    • /
    • pp.123.5-123
    • /
    • 2001
  • For autonomous navigation of mobile robots, the robots' capability to recognize the 3D environment is necessary. In this paper, an on-line 3D map building method for autonomous mobile robots is proposed. To obtain range data on the environment, we use a sensor system composed of a structured light source and a CCD camera based on optimal triangulation. The structured laser is projected as a horizontal stripe onto the scene. The sensor system can rotate $\pm30^{\circ}$ with a goniometer. By scanning the system, we obtain laser stripe images of the environment and update the planes composing the environment through several image processing steps. From the laser stripe in the captured image, we find a center point in each column and make line segments by blobbing these center points. Then, the planes of the environment are updated. These steps are done on-line during the scanning phase. With the proposed method, we can efficiently obtain a 3D map of the structured environment.


LS-SVM을 이용한 TFT-LCD 패널 내의 결함 검사 방법 (A Defect Inspection Method in TFT-LCD Panel Using LS-SVM)

  • 최호형;이건희;김자근;주영복;최병재;박길흠;윤병주
    • 한국지능시스템학회논문지
    • /
    • Vol. 19, No. 6
    • /
    • pp.852-859
    • /
    • 2009
  • In an automatic TFT-LCD inspection system, the images used for defect detection are acquired with a line scan camera or an area scan camera. However, the acquired images are degraded by impulse noise, Gaussian noise, the limitations of CCD or CMOS sensors, and the illumination, which makes marginal (low-contrast) defect regions difficult to distinguish with the naked eye. In this paper, we propose a feature extraction method and a defect detection method for efficient defect detection. As feature vectors we use the mean-brightness difference between the defect region and the surrounding background, based on Weber's law, and the standard deviation computed from the brightness variation of the surrounding background; a nonlinear SVM is then applied to the extracted feature vectors to detect defect regions. Experimental results show that the proposed method outperforms other methods.
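
A minimal sketch of the feature extraction and classification described above, assuming rectangular defect/background patches and using an RBF-kernel SVC from scikit-learn as a stand-in for the LS-SVM (scikit-learn has no drop-in LS-SVM class):

```python
import numpy as np
from sklearn.svm import SVC

def weber_features(defect_patch, background_patch):
    """Two features per candidate region, as described in the abstract:
    the Weber contrast (mean-brightness difference between defect and
    background, normalised by the background) and the background standard
    deviation.  How the patches are chosen is an assumption of this sketch.
    """
    mu_b = background_patch.mean()
    mu_d = defect_patch.mean()
    weber = (mu_d - mu_b) / (mu_b + 1e-6)
    return np.array([weber, background_patch.std()])

# A nonlinear (RBF-kernel) SVM over the two features.  The paper uses
# LS-SVM, for which scikit-learn has no drop-in class, so SVC stands in
# here purely for illustration.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
# clf.fit(X_train, y_train)  # X_train: N x 2 feature matrix, y_train: 0/1 labels
```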

GMAW에서 시각센서를 이용한 용접선 정보의 추출과 와이어 승급속도의 제어에 관한 연구 (A Study on Weld Line Detection and Wire Feeding Rate Control in GMAW with Vision Sensor)

  • 조택동;김옥현;양상민;조만호
    • Journal of Welding and Joining
    • /
    • Vol. 19, No. 6
    • /
    • pp.600-607
    • /
    • 2001
  • A CCD camera with a laser stripe was applied to realize automatic weld seam tracking in GMAW. On-line image processing with the basic Hough transformation takes a relatively long time, but the transformation tends to be robust against noise such as spatter and arc light. For this reason, it was complemented with an adaptive Hough transformation to provide an on-line processing capability for scanning specific weld points. The adaptive Hough transformation was used to extract the laser stripes and to obtain specific weld points. The 3-dimensional information obtained from the vision system made it possible to generate the weld torch path and to obtain information such as the width and depth of the weld line. We controlled the wire feeding rate using this weld line information.

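The abstract does not give the wire-feed control law; as a purely illustrative mass-balance sketch, one could size the feed rate so the deposited wire fills the groove cross-section measured by the vision sensor. All names and parameters below are hypothetical.

```python
import math

def wire_feed_rate(gap_width_mm, gap_depth_mm, travel_speed_mm_s,
                   wire_diameter_mm=1.2, efficiency=0.95):
    """Size the wire feed rate so the deposited wire fills the groove
    cross-section measured by the vision sensor.

    Hypothetical mass balance, not the paper's control law:
    groove area * travel speed = wire area * feed rate * efficiency.
    """
    groove_area = gap_width_mm * gap_depth_mm            # rectangular groove assumed
    wire_area = math.pi * (wire_diameter_mm / 2.0) ** 2
    return groove_area * travel_speed_mm_s / (wire_area * efficiency)  # wire speed, mm/s
```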

TDI CMOS 센서를 이용한 인공위성 탑재용 전자광학 카메라의 Motion Blur 최소화 방법 및 Dynamic MTF 성능 분석 (Minimization of Motion Blur and Dynamic MTF Analysis in the Electro-Optical TDI CMOS Camera on a Satellite)

  • 허행팔;나성웅
    • 대한원격탐사학회지
    • /
    • Vol. 31, No. 2
    • /
    • pp.85-99
    • /
    • 2015
  • Electro-optical cameras on low-Earth-orbit Earth observation satellites generally use CCD sensors with a TDI function in order to meet demanding SNR and MTF requirements. To exploit the various advantages of CMOS sensors, however, the TDI function is now also being added to CMOS sensors, and various methods have been proposed to mitigate motion blur, one of the weaknesses of CMOS sensors. Similar to the multi-phased clocking of CCD sensors, a CMOS pixel can be divided into several sub-pixels that are read out separately, or an artificial mask can be inserted between pixels. Motion blur of a TDI CMOS camera system can also be minimized by making the exposure (integration) time shorter than the line time. By applying this integration-time control, SNR-priority or MTF-priority images can be acquired selectively, through commands from the camera control unit, according to the purpose of each imaging mission. This paper analyzes the method of minimizing motion blur by adjusting the integration time and summarizes the improvement in image quality (dynamic MTF) verified through MATLAB simulation.
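
The integration-time trade-off described above can be quantified with the standard linear-smear MTF model (a generic sketch, not the paper's MATLAB simulation): if the scene moves t_int/t_line pixels during one exposure, the smear MTF at spatial frequency f is |sinc((t_int/t_line) * f)|, evaluated here at the Nyquist frequency.

```python
import numpy as np

def smear_mtf(integration_ratio, f_cyc_per_pixel=0.5):
    """MTF loss from along-track motion blur when the integration time is
    `integration_ratio` times the line time.

    Generic linear-smear model (not the paper's MATLAB simulation): the
    image moves integration_ratio pixels during one exposure, so
    MTF(f) = |sinc(integration_ratio * f)|, here evaluated at Nyquist.
    """
    return np.abs(np.sinc(integration_ratio * f_cyc_per_pixel))

for r in (1.0, 0.75, 0.5):
    print(f"t_int/t_line = {r:.2f} -> smear MTF at Nyquist = {smear_mtf(r):.3f}")
# 1.00 -> 0.637, 0.75 -> 0.784, 0.50 -> 0.900
```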