• Title/Summary/Keyword: Laser Vision Sensor

Real-time Recognition of the Terrain Configuration to Increase Driving Stability for Unmanned Robots (안정성 향상을 위한 자율 주행 로봇의 실시간 접촉 지면 형상인식)

  • Jeon, Bongsoo;Kim, Jayoung;Lee, Jihong
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.283-291
    • /
    • 2013
  • Methods that measure or estimate ground shape with a laser range finder or a vision sensor (exteroceptive sensors) have a critical weakness: they require a previously built database in order to classify the acquired data into distinct surface conditions for driving. In addition, ground information from exteroceptive sensors does not reflect the deflection of the ground surface caused by the movement of UGVs, so UGVs have difficulty finding the optimal driving conditions for maximum maneuverability. This paper therefore proposes a method for recognizing exact and precise ground shape using an Inertial Measurement Unit (IMU) as a proprioceptive sensor. The method first recognizes the attitude of the robot in real time using the IMU and then compensates the attitude data for angle errors through an analysis of vehicle dynamics. The method is verified by outdoor driving experiments with a real mobile robot.
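
A common minimal way to obtain a real-time attitude estimate from an IMU, as in the abstract above, is a complementary filter that fuses the fast-but-drifting gyro integration with the slow-but-unbiased gravity direction from the accelerometer. The sketch below is illustrative only; the filter and all function names are assumptions, not the paper's actual dynamics-based compensation.

```python
import math

def accel_to_pitch(ax, az):
    """Pitch angle implied by the gravity direction in the body frame
    (accelerometer readings along the body x and z axes)."""
    return math.atan2(ax, az)

def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend the gyro-integrated pitch (high-frequency, drifts) with the
    accelerometer pitch (low-frequency, noisy) into one estimate."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1.0 - alpha) * accel_pitch
```

Each IMU sample advances the estimate by one `complementary_filter` call; `alpha` trades gyro drift against accelerometer noise.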

Process Automation of Gas Metal Arc Welding Using Artificial Neural Network (인공신경회로망을 이용한 GMA 용접의 공정자동화)

  • 조만호;양상민;김옥현
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.10a
    • /
    • pp.558-561
    • /
    • 2002
  • A CCD camera with a laser stripe was applied to automate the welding process in GMAW. On-line image processing with the basic Hough transformation takes relatively long, but it is robust to noise such as spatter and arc light. The adaptive Hough transformation was therefore used to extract the laser stripe and to obtain specific weld points. In this study, a neural network based on the generalized delta rule algorithm was adapted for control of GMA process parameters such as welding speed, arc voltage and wire feeding speed.
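
The Hough transformation mentioned above makes a straight laser stripe stand out against spatter and arc-light noise by letting every bright pixel vote in (rho, theta) space. A minimal, non-adaptive NumPy sketch of that voting step follows; the function names are hypothetical and this is not the paper's adaptive variant.

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary stripe image.
    Collinear pixels pile votes into one bin; isolated noise does not."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

def strongest_line(binary):
    """Return (rho, theta) of the accumulator peak, i.e. the stripe line."""
    acc, thetas, diag = hough_lines(binary)
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]
```

An adaptive variant would narrow the searched (rho, theta) window around the previous frame's peak to cut the processing time the abstract mentions.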

Seam Tracking System in a Laser Welding -Inductive & Laser vision sensor- (레이저 용접에서의 용접선 추적 장치)

  • 윤충섭;양상민;박희창;한유희
    • Journal of Welding and Joining
    • /
    • v.12 no.2
    • /
    • pp.28-38
    • /
    • 1994
  • Seam tracking is used to reduce the seam-position errors that inevitably arise in high-speed continuous welding. In this study, seam tracking was tested, using a recently commercialized electromagnetic (inductive) method and a laser vision method, on the seam geometries that can occur during welding. The tests showed that each method has its own characteristics. To build an efficient welding automation system, an appropriate sensor must first be selected for the workpiece and the system in question. Our laboratory plans to apply this to the laser welding automation system currently under development. Finally, seam tracking devices of this kind are at present entirely imported; rather than developing a general-purpose sensor system, development of a sensor suited to the particular system to be built should come first.

A Study for Vision-based Estimation Algorithm of Moving Target Using Aiming Unit of Unguided Rocket (무유도 로켓의 조준 장치를 이용한 영상 기반 이동 표적 정보 추정 기법 연구)

  • Song, Jin-Mo;Lee, Sang-Hoon;Do, Joo-Cheol;Park, Tai-Sun;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.20 no.3
    • /
    • pp.315-327
    • /
    • 2017
  • In this paper, we present a method for estimating the position and velocity of a moving target from the range and bearing measurements of an aiming unit's multiple sensors. In many cases, a conventional low-cost gyro sensor and a portable laser range finder (LRF) degrade the accuracy of the estimate. To address these problems, we propose two methods: background image tracking, which assists the low-cost gyro sensor, and principal component analysis (PCA), which copes with the shortcomings of the portable LRF. We show that our method is robust to low-frequency, biased and noisy inputs, and we present a comparison between our method and the extended Kalman filter (EKF).
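
PCA, as used above, extracts the dominant direction of a noisy sequence of measurements. A minimal sketch via SVD of the centered points follows; the function name is hypothetical and this shows only the generic technique, not the paper's full estimator.

```python
import numpy as np

def principal_axis(points):
    """Dominant direction of a noisy 2-D point track via PCA:
    the right singular vector with the largest singular value
    of the mean-centered data, i.e. the top covariance eigenvector."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    return axis / np.linalg.norm(axis)
```

Projecting noisy LRF fixes onto this axis suppresses the off-track noise component.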

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon Bang-Hyun;Shon Eun-Ho;Kim Young-Chul;Chong Kil-To
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.4
    • /
    • pp.389-394
    • /
    • 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it may move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and noisy, much research has aimed at reducing the noise, but the attainable accuracy is limited because most of that work is based on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some systems use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the landmarks' lines of sight. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
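
The angular-separation step above follows from the pinhole model: a landmark imaged at horizontal offset u from the principal point lies along a ray at angle atan(u/f). A minimal sketch for two landmarks in the same image row (function name hypothetical):

```python
import math

def angular_separation(u1, u2, f):
    """Angle between the lines of sight to two landmarks imaged at
    horizontal pixel offsets u1 and u2 from the principal point,
    for a pinhole camera with focal length f in the same units."""
    return abs(math.atan2(u1, f) - math.atan2(u2, f))
```

With three landmarks, the two independent separations constrain the camera position, which is the basis of the landmark-localization step.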

VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference
    • /
    • 2006.04a
    • /
    • pp.318-320
    • /
    • 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it may move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and noisy, much research has aimed at reducing the noise, but the attainable accuracy is limited because most of that work is based on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some systems use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the landmarks' lines of sight. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.

Development of Wideband Frequency Modulated Laser for High Resolution FMCW LiDAR Sensor (고분해능 FMCW LiDAR 센서 구성을 위한 광대역 주파수변조 레이저 개발)

  • Jong-Pil La;Ji-Eun Choi
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1023-1030
    • /
    • 2023
  • An FMCW LiDAR system with robust target detection even under adverse operating conditions such as snow, rain and fog is addressed in this paper. Our focus is on enhancing the performance of FMCW LiDAR by improving the characteristics of the frequency-modulated laser, which directly determine the LiDAR's range resolution, coherence length and maximum measurement range. We describe the use of an unbalanced Mach-Zehnder laser interferometer to measure changes in the lasing frequency in real time and to correct frequency-modulation errors through an optical phase-locked-loop technique. To extend the coherence length of the laser, we employ an extended-cavity laser diode as the source and implement the laser interferometer with a photonic integrated circuit to miniaturize the optical system. The developed FMCW LiDAR system exhibits a modulation bandwidth of 10.045 GHz and a distance resolution of 0.84 mm.
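
In a standard FMCW scheme, the beat frequency between the transmitted and received chirps is proportional to the round-trip delay, giving the textbook relation R = c · f_beat · T / (2B). The sketch below shows only that generic relation, not the paper's calibrated processing chain; the function name is hypothetical.

```python
def fmcw_range(f_beat, bandwidth, chirp_time, c=3.0e8):
    """Range from FMCW beat frequency: a target at range R delays the
    received chirp by 2R/c, and a linear chirp of slope B/T turns that
    delay into a beat frequency f_beat = (B/T) * (2R/c)."""
    return c * f_beat * chirp_time / (2.0 * bandwidth)
```

This is why a wider modulation bandwidth B, as developed in the paper, directly improves range resolution.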

A Study on the Seam Tracking by Using Vision Sensor (비전센서를 이용한 용접선 추적에 관한 연구)

  • 배철오;김현수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1374-1380
    • /
    • 2002
  • Recently, the use of robots has been increasing steadily to improve welding quality and productivity. For arc welding, it is important to track the seam before moving the welding robot. There are two broad types of seam-sensing method: contact and non-contact. In this paper, an image-processing sensor (a kind of non-contact sensor) is used to track the seam with a laser diode and a CCD camera. Structured light from the laser diode is projected onto the weld groove, and the reflected shape is captured by the CCD camera. An image board captures this image and software analyzes it. The robot moves and welds exactly as the acquired X-Y image data are converted into the robot's X-Y values. Different weld grooves on a plane surface can also be handled by a simple change of the program.
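
A common way to reduce the reflected stripe image to per-column groove data, as described above, is a sub-pixel intensity centroid down each image column. The sketch below is a generic version under that assumption; the function name is hypothetical and this is not necessarily the paper's exact algorithm.

```python
import numpy as np

def stripe_profile(image):
    """Sub-pixel stripe row in every column via intensity centroid,
    yielding the groove profile seen by the CCD camera.
    Columns with no stripe light return 0."""
    rows = np.arange(image.shape[0])[:, None]
    weight = image.sum(axis=0)
    return (image * rows).sum(axis=0) / np.where(weight == 0, 1, weight)
```

The resulting per-column rows, scaled by the camera calibration, become the X-Y data that drive the robot's seam correction.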

Measurement of two-dimensional vibration and calibration using the low-cost machine vision camera (저가의 머신 비전 카메라를 이용한 2차원 진동의 측정 및 교정)

  • Kim, Seo Woo;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.37 no.2
    • /
    • pp.99-109
    • /
    • 2018
  • The precision of vibration sensors, whether contact or non-contact, is usually satisfactory for practical measurement applications, but a sensor is confined to measuring one point or one direction. Although the precision and frequency span of a low-cost camera are inferior to those of such sensors, it has the merits of low cost and of measuring a large vibrating area simultaneously. Furthermore, a camera can measure multiple degrees of freedom of a vibrating object at once. In this study, the calibration method and the dynamic characteristics of a low-cost machine-vision camera used as a sensor are studied, with the two-dimensional vibration of a cantilever beam as a demonstrating example. The planar image of the camera shot reveals two rectilinear motions and one rotational motion. The rectilinear vibration of a single point is first measured with the camera, which is calibrated experimentally by computing the error with respect to an LDV (Laser Doppler Vibrometer) measurement. Then, by measuring the motion of multiple points at once, the rotational vibration and the whole vibration motion of the cantilever beam are obtained, and the whole motion is analyzed in both the time and frequency domains.
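
Calibrating the camera against the LDV, as described above, amounts in the simplest case to finding a millimetres-per-pixel scale that best maps the camera's pixel displacements onto the LDV reference. A least-squares sketch under that assumption (function name hypothetical, not the paper's full calibration):

```python
import numpy as np

def pixel_scale(camera_disp_px, ldv_disp_mm):
    """Least-squares mm-per-pixel scale s minimizing
    || s * camera_disp_px - ldv_disp_mm ||^2 over paired samples
    of the same point's displacement from the camera and the LDV."""
    cam = np.asarray(camera_disp_px, dtype=float)
    ldv = np.asarray(ldv_disp_mm, dtype=float)
    return float(np.dot(cam, ldv) / np.dot(cam, cam))
```

The residual of this fit is one way to quantify the camera's error relative to the LDV reference.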

Depth Evaluation from Pattern Projection Optimized for Automated Electronics Assembling Robots

  • Park, Jong-Rul;Cho, Jun Dong
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.4
    • /
    • pp.195-204
    • /
    • 2014
  • This paper presents depth evaluation for object detection by automated assembling robots. Pattern-distortion analysis from a structured-light system identifies the object with the greatest depth relative to its background. An automated assembling robot should select and pick that object first to reduce physical harm during the picking action of the robot arm. Object detection is then combined with the depth evaluation to provide a contour showing the edges of the object with the greatest depth. The contour provides shape information to the robot, which is equipped with a laser-based proximity sensor, for picking up an object and placing it in the intended place. The depth-evaluation process for an automated electronics assembling robot is accelerated so that each image frame can be processed with the simplest experimental set-up, consisting of a single camera and projector. In experiments, the depth evaluation required 31 ms to 32 ms per frame, which suits a robot vision system equipped with a 30-frames-per-second camera.
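
With a single camera and projector, depth from pattern distortion reduces, in the pinhole model, to triangulation: a pattern feature shifted by disparity d between the projected and observed positions lies at z = f · b / d. The sketch below shows only this generic relation (function name hypothetical), not the paper's accelerated pipeline.

```python
def depth_from_disparity(f, baseline, disparity):
    """Pinhole triangulation for a camera-projector pair: focal length f
    (pixels), baseline b (metres) between camera and projector, and
    pattern shift d (pixels) give depth z = f * b / d. Larger shifts
    mean nearer surfaces, which is how the most-protruding object
    stands out from its background."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return f * baseline / disparity
```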