• Title/Summary/Keyword: Laser Vision


A Study on the NC Embedding of Vision System for Tool Breakage Detection (공구파손감지용 비젼시스템의 NC실장에 관한 연구)

  • 이돈진;김선호;안중환
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.369-372
    • /
    • 2002
  • In this research, a vision system for detecting tool breakage, which is hard to detect with indirect in-process measurements such as acoustic emission, cutting torque, and motor current, was developed and embedded into a PC-NC system. The vision system consists of CMOS image sensors, a slit-beam laser generator, and an image grabber board. The slit-beam laser is projected onto the tool surface to separate the tool geometry clearly from the various obstacles surrounding the tool. A tool image is captured through two signal-processing steps, median filtering and thresholding, and the tool is then judged normal or broken from the change in the centroid of the captured image. An air curtain, formed by jetting high-pressure air in front of the lens, was devised to prevent the vision system from being contaminated by coolant and cutting chips scattered during cutting. To embed the vision system into a Siemens 840D PC-NC controller, an HMI (Human Machine Interface) program was developed under the Windows 95 operating system of the MMC103. The developed HMI is placed in a sub-window of the 840D main window, and the program can be activated or deactivated either by a soft key on the operating panel or by M codes in the NC part program. When tool breakage is detected, the HMI program issues a command for automatic tool change or sends an alarm to the NC kernel. Evaluation tests on a high-speed tapping center showed that the developed system successfully detected breakage of small-radius tools.
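
A minimal sketch of the centroid-shift check described in this abstract, assuming an 8-bit grayscale tool image lit by the slit laser; the threshold and shift tolerance are illustrative values, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import median_filter

def tool_centroid(image, threshold=128):
    """Median-filter, threshold, and return the centroid of the bright tool region."""
    filtered = median_filter(image, size=3)          # suppress speckle/coolant noise
    mask = filtered > threshold                      # keep pixels lit by the slit laser
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                                  # no tool visible at all
    return xs.mean(), ys.mean()

def is_broken(reference_image, current_image, tol_px=5.0):
    """Flag breakage when the centroid moves more than tol_px from the reference."""
    ref = tool_centroid(reference_image)
    cur = tool_centroid(current_image)
    if ref is None or cur is None:
        return True
    return np.hypot(cur[0] - ref[0], cur[1] - ref[1]) > tol_px
```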

A STUDY ON WELD POOL MONITORING IN PULSED LASER EDGE WELDING

  • Lee, Seung-Key;Na, Suck-Joo
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.595-599
    • /
    • 2002
  • Edge welding of thin sheets is very difficult because of the fit-up problem and the small weld area. In laser welding, joint fit-up and penetration are critical for sound weld quality and can be monitored by appropriate methods. Among the various monitoring systems, a visual monitoring method is attractive because various kinds of weld pool information can be extracted directly. In this study, a vision sensor was adopted for weld pool monitoring in pulsed Nd:YAG laser edge welding to check whether the penetration is sufficient and the joint fit-up is within the requirement. A pulsed Nd:YAG laser provides a series of periodic laser pulses, while the shape and brightness of the weld pool change over time even within one pulse duration. A shutter-triggered, non-interlaced CCD camera was used to acquire the temporally changing weld pool image at the moment that best represents the weld status. The information for quality monitoring is extracted from the monitored weld pool image by an image processing algorithm. The weld pool image contains information not only about the joint fit-up but also about the penetration: the joint fit-up can be inferred from the weld pool shape, and the penetration from its brightness. Weld pool parameters that represent the characteristics of the weld pool were selected based on the geometric appearance and the brightness profile. To achieve accurate prediction of the weld penetration, which is a nonlinear problem, a neural network fed with the selected weld pool parameters was applied.
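
A minimal sketch of mapping weld-pool image parameters to a penetration estimate with a small neural network; the chosen features (width, length, area, mean brightness), the network size, and the use of scikit-learn are illustrative assumptions, not the parameters reported in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def pool_parameters(image, threshold=100):
    """Geometric and brightness features of the thresholded weld-pool region."""
    mask = image > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(4)
    width = xs.max() - xs.min() + 1            # pool width (fit-up related)
    length = ys.max() - ys.min() + 1           # pool length
    area = mask.sum()                          # pool area
    brightness = image[mask].mean()            # mean brightness (penetration related)
    return np.array([width, length, area, brightness], dtype=float)

# X_train: feature vectors from monitored images, y_train: measured penetration depths
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
# model.fit(X_train, y_train)
# depth_estimate = model.predict(pool_parameters(new_image).reshape(1, -1))
```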

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capabilities offers advantages in terms of complementarity and cooperation, yielding better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function for each sensor, defined in advance through experiments. For self-localization using the monocular vision, the robot uses image features consisting of vertical edge lines extracted from the camera images as natural landmark points. With the laser structured light sensor, it uses geometric features composed of corners and planes as natural landmark shapes, extracted from range data taken at a constant height above the navigation floor. Although either feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments was performed, and the experimental results are discussed in detail.
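
A minimal sketch of information-form Bayesian fusion of two pose estimates, with each sensor weighted by a covariance standing in for its reliability; the Gaussian model and the example numbers are assumptions for illustration (the paper's reliability functions were determined experimentally):

```python
import numpy as np

def fuse_gaussian(x_cam, P_cam, x_laser, P_laser):
    """Fuse two estimates by weighting each with the inverse of its covariance."""
    info_cam, info_laser = np.linalg.inv(P_cam), np.linalg.inv(P_laser)
    P_fused = np.linalg.inv(info_cam + info_laser)
    x_fused = P_fused @ (info_cam @ x_cam + info_laser @ x_laser)
    return x_fused, P_fused

# Example: 2-D position estimates from the two sensors with example covariances.
x_cam, P_cam = np.array([1.02, 2.10]), np.diag([0.04, 0.01])
x_laser, P_laser = np.array([0.98, 2.05]), np.diag([0.01, 0.09])
print(fuse_gaussian(x_cam, P_cam, x_laser, P_laser))
```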

Integrated System for Autonomous Proximity Operations and Docking

  • Lee, Dae-Ro;Pernicka, Henry
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.12 no.1
    • /
    • pp.43-56
    • /
    • 2011
  • An integrated guidance, navigation and control (GNC) system for autonomous proximity operations and docking of two spacecraft was developed. Position maneuvers were determined by integrating the state-dependent Riccati equation, formulated from the nonlinear relative motion dynamics, with relative navigation using a rendezvous laser vision (Lidar) and a vision sensor system. In the vision sensor system, a switch between sensors was made along the approach phase to provide continuously effective navigation. As an extension of the rendezvous laser vision system, an automated terminal guidance scheme based on the Clohessy-Wiltshire state transition matrix was used to formulate a "V-bar hopping approach" reference trajectory. A proximity operations strategy was then adapted from the approach strategy used with the automated transfer vehicle. The attitude maneuvers, determined from a linear quadratic Gaussian-type controller with quaternion-based attitude estimation using star trackers or a vision sensor system, provided precise attitude control and robustness under uncertainties in the moments of inertia and external disturbances. These functions were integrated into an autonomous GNC system that can perform proximity operations and meet all conditions for successful docking. A six-degree-of-freedom simulation was used to demonstrate the effectiveness of the integrated system.
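
A minimal sketch of the Clohessy-Wiltshire state transition matrix underlying this kind of terminal guidance, for the state ordering [x, y, z, vx, vy, vz] with x radial, y along-track, z cross-track; the mean-motion value and the V-bar initial state are only illustrative assumptions:

```python
import numpy as np

def cw_stm(n, t):
    """Clohessy-Wiltshire state transition matrix for mean motion n and elapsed time t."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,          2*(1 - c)/n,     0],
        [6*(s - n*t),  1, 0,   -2*(1 - c)/n,  (4*s - 3*n*t)/n, 0],
        [0,            0, c,    0,            0,               s/n],
        [3*n*s,        0, 0,    c,            2*s,             0],
        [-6*n*(1 - c), 0, 0,   -2*s,          4*c - 3,         0],
        [0,            0, -n*s, 0,            0,               c],
    ])

n = 0.00113                                         # rad/s, roughly a 400 km LEO orbit
x0 = np.array([0.0, -200.0, 0.0, 0.0, 0.0, 0.0])    # 200 m behind the target on V-bar
x_after = cw_stm(n, 60.0) @ x0                      # relative state propagated 60 s ahead
```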

3D Printed Flexible Cathode Based on Cu-EDTA that Prepared by Molecular Precursor Method and Microwave Processing for Electrochemical Machining

  • Yan, Binggong;Song, Xuan;Tian, Zhao;Huang, Xiaodi;Jiang, Kaiyong
    • Journal of Electrochemical Science and Technology
    • /
    • v.11 no.2
    • /
    • pp.180-186
    • /
    • 2020
  • In this work, a metal-ligand solution (Cu-EDTA) was prepared by the molecular precursor method and spin-coated onto 3D printed flexible photosensitive resin sheets. After microwave processing, a laser with a wavelength of 355 nm was used to scan the spin-coated sheets, and the sheets were then immersed in an electroless copper plating solution to deposit copper wires. With the help of the microwave processing, the adhesion between the copper wires and the substrate was improved, which is attributed to the increased roughness, decreased contact angle, and consistent orientation of the coated film, according to 3D profilometer and SEM results. XPS results showed that copper seeds formed after laser scanning. Using the 3D printed flexible sheets as the cathode and galvanized iron as the anode, electrochemical machining was conducted.

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 용접선 자동추적에 관한 연구)

  • 조택동;양상민;전진환
    • Journal of Welding and Joining
    • /
    • v.16 no.6
    • /
    • pp.68-76
    • /
    • 1998
  • A CCD camera with a laser stripe was applied to realize automatic weld seam tracking. The 3-dimensional information obtained from the vision system made it possible to generate the weld torch path. An adaptive Hough transformation was used to extract the laser stripe and to obtain specific weld points. The basic Hough transformation takes a relatively long time for on-line image processing, but it tends to be robust to noise such as spatter. For this reason, it was complemented with the adaptive Hough transformation to allow on-line scanning of the specific weld points. The dead zone, where sensing of the weld line is impossible, was eliminated by rotating the camera about an axis centered at the weld torch. When weld lines were detected, the camera angle was controlled so as to acquire the minimum image data needed for sensing the weld lines. Consequently, the image processing time was reduced.
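
A minimal sketch of laser-stripe extraction with a Hough transform restricted to a window around the previous detection, as a simplified stand-in for the adaptive Hough transformation described above; the use of OpenCV and the threshold/window values are assumptions for illustration:

```python
import cv2
import numpy as np

def find_stripe_lines(gray, prev_row=None, window=60):
    """Threshold the bright laser stripe and fit line segments with HoughLinesP."""
    roi, offset = gray, 0
    if prev_row is not None:                     # adapt: search only near the last stripe
        top = max(prev_row - window, 0)
        roi, offset = gray[top:prev_row + window, :], top
    _, binary = cv2.threshold(roi, 200, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return []
    # shift the detected segments back to full-image coordinates
    return [(x1, y1 + offset, x2, y2 + offset) for x1, y1, x2, y2 in lines[:, 0]]
```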

The Position Estimation of a Body Using 2-D Slit Light Vision Sensors (2-D 슬리트광 비젼 센서를 이용한 물체의 자세측정)

  • Kim, Jung-Kwan;Han, Myung-Chul
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.12
    • /
    • pp.133-142
    • /
    • 1999
  • We introduce algorithms for 2-D and 3-D position estimation using 2-D vision sensors. The sensors used in this research project a red laser slit light onto the body, so it is very convenient to obtain the coordinates of corner points or edges in the sensor coordinate frame. Since the measured points are normally not fixed in the body coordinate frame, the additional conditions that the corner lines or edges are straight and fixed in the body frame are used to find the position and orientation of the body. In the case of a body moving in 2-D, the solution can be found analytically. In the case of 3-D motion, however, a linearization technique and the least-mean-squares method are used because of the strong nonlinearity.
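
A minimal sketch of the 2-D case: estimating a pose (theta, tx, ty) from slit-light points known only to lie on straight body-fixed edges, solved here numerically by iterative linearized least squares (the paper derives the 2-D case analytically); the edge definitions and measurements are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Body-fixed edges as (unit normal, offset): n . p_body = d
edges = [(np.array([1.0, 0.0]), 0.0),       # edge along the body y-axis
         (np.array([0.0, 1.0]), 0.0)]       # edge along the body x-axis

# Sensor-frame points, each tagged with the edge it was measured on
measurements = [(np.array([0.51, 0.10]), 0),
                (np.array([0.49, 0.40]), 0),
                (np.array([0.80, 0.31]), 1),
                (np.array([1.20, 0.29]), 1)]

def residuals(pose):
    theta, tx, ty = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.array([tx, ty])
    res = []
    for p_sensor, k in measurements:
        p_body = R.T @ (p_sensor - t)        # map the measured point into the body frame
        n, d = edges[k]
        res.append(n @ p_body - d)           # signed distance from its body-fixed edge
    return res

sol = least_squares(residuals, x0=[0.0, 0.5, 0.3])   # pose of the body in the sensor frame
theta_hat, tx_hat, ty_hat = sol.x
```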

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1105-1109
    • /
    • 1995
  • A CCD camera incorporated into a vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained by using a laser slit beam. To extract the laser stripe and obtain the welding-specific points, an adaptive Hough transformation was used. Although the basic Hough transformation takes too much time for on-line image processing, it tends to be robust to noise such as spatter. For that reason, it was complemented with the adaptive Hough transformation to allow on-line scanning of the welding-specific points. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line, hence the image processing time is reduced. A fuzzy controller is adopted to control the camera angle.
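
A minimal sketch of a rule-based fuzzy controller for the camera rotation, driven by the lateral offset of the weld line in the image; the membership functions, rule table, and gains are illustrative assumptions only, not the controller reported in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_camera_rate(offset_px):
    """Map weld-line offset (pixels) to a camera rotation rate (deg/s)."""
    # Fuzzify: negative / zero / positive offset
    mu = {"N": tri(offset_px, -80, -40, 0),
          "Z": tri(offset_px, -40, 0, 40),
          "P": tri(offset_px, 0, 40, 80)}
    # Rule consequents: rotate toward the weld line, hold when centered
    rate = {"N": -3.0, "Z": 0.0, "P": +3.0}
    # Defuzzify with a weighted average (center of gravity of singleton outputs)
    num = sum(mu[k] * rate[k] for k in mu)
    den = sum(mu.values()) + 1e-9
    return num / den

print(fuzzy_camera_rate(25.0))   # small positive offset -> gentle positive rotation
```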

Development of a vision sensor for measuring the weld groove parameters in arc welding process (자동 아크 용접공정의 용접개선변수 측정을 위한 시각 시스템)

  • 김호학;부광석;조형석
    • Journal of Welding and Joining
    • /
    • v.8 no.2
    • /
    • pp.58-69
    • /
    • 1990
  • In conventional arc welding, position errors of the weld torch with respect to the weld seam and variations of the groove dimension are induced by inaccurate fit-up and fixturing. In this study, a vision system has been developed to recognize and compensate for the position error and dimensional inaccuracy. The system uses a structured laser light projected onto the weld groove and perceived by a CCD camera. A new algorithm to detect the edge of the reflected laser light is introduced for real-time processing. The developed system was applied to arbitrary weld paths with various types of joints in the arc welding process. The experimental results show that the proposed system can detect the weld groove parameters with good accuracy and yields good tracking performance.
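
A minimal sketch of extracting the reflected laser line column by column and reading rough groove parameters from the resulting profile; the intensity threshold and the simple V-groove interpretation are assumptions for illustration, not the paper's edge-detection algorithm:

```python
import numpy as np

def stripe_profile(gray, min_intensity=80):
    """For each column, return the intensity-weighted row of the laser line (NaN if absent)."""
    rows = np.arange(gray.shape[0], dtype=float)
    profile = np.full(gray.shape[1], np.nan)
    for col in range(gray.shape[1]):
        column = gray[:, col].astype(float)
        column[column < min_intensity] = 0.0     # keep only the bright stripe pixels
        if column.sum() > 0:
            profile[col] = (rows * column).sum() / column.sum()
    return profile

def groove_parameters(profile):
    """Rough groove width and depth (in pixels) from a V-shaped stripe profile."""
    base = np.nanmedian(profile)                    # nominal plate-surface row
    filled = np.nan_to_num(profile, nan=base)       # treat missing columns as surface
    bottom_col = np.argmax(np.abs(filled - base))   # groove bottom = largest displacement
    depth = filled[bottom_col] - base
    in_groove = np.abs(filled - base) > 0.3 * abs(depth)   # columns displaced into the groove
    return int(in_groove.sum()), depth
```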

Implementation of vision system for a mobile robot using pulse phase difference & structured light (펄스 위상차와 스트럭춰드 라이트를 이용한 이동 로봇 시각 장치 구현)

  • 방석원;정명진;서일홍;오상록
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1991.10a
    • /
    • pp.652-657
    • /
    • 1991
  • To date, the application areas of mobile robots have expanded, and many types of LRF (Laser Range Finder) systems have been developed to acquire three-dimensional information about unknown environments. In the real world, however, various noise sources (sunlight, fluorescent light) make it difficult to separate the reflected laser light from the noise. To overcome this restriction, we have developed a new type of vision system that enables a mobile robot to measure the distance to an object located 1-5 m ahead with an error of less than 2%. The separation and detection algorithm used in this system combines a pulse phase difference method with multi-stripe structured light. The effectiveness and feasibility of the proposed vision system are demonstrated by 3-D maps of detected objects and a computation time analysis.
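
A minimal sketch of the standard phase-difference range relation that this class of system relies on, d = c * delta_phi / (4 * pi * f_mod); the modulation frequency is an assumption for illustration, and the paper's exact pulse scheme is not reproduced here:

```python
import math

C = 299_792_458.0          # speed of light, m/s

def range_from_phase(delta_phi_rad, f_mod_hz):
    """Distance from the phase lag between emitted and received modulated light."""
    return C * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

# Example: 10 MHz modulation gives an unambiguous range of c / (2 f) ~ 15 m,
# comfortably covering the 1-5 m working range quoted in the abstract.
print(range_from_phase(math.pi / 2, 10e6))   # ~3.75 m for a 90-degree phase lag
```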
