• Title/Summary/Keyword: laser structured light image

Search results: 40

Robust Depth Measurement Using Dynamic Programming Technique on the Structured-Light Image (구조화 조명 영상에 Dynamic Programming을 사용한 신뢰도 높은 거리 측정 방법)

  • Wang, Shi; Kim, Hyong-Suk; Lin, Chun-Shin; Chen, Hong-Xin; Lin, Hai-Ping
    • Journal of Internet Computing and Services / v.9 no.3 / pp.69-77 / 2008
  • An algorithm for tracking the trace of structured light is proposed to obtain depth information accurately. The technique is based on the fact that the pixel location of the light in an image has a unique association with the object depth. However, the projected light is sometimes dim or invisible due to absorption and reflection at the object surface. A dynamic programming approach is proposed to solve this problem. In this paper, the mathematics necessary for implementing the algorithm is presented, and the projected laser light is tracked using a dynamic programming technique. An advantage is that the trace remains intact even when many parts of the laser beam are dim or invisible. Experimental results as well as the 3-D restoration are reported.
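The paper's exact cost formulation is not given in the abstract, but the core idea can be sketched in Python: for each image column, pick the stripe row so that brightness is rewarded while large row-to-row jumps are penalized, then recover the optimal trace by backtracking. The function name and the `smoothness` weight below are illustrative assumptions.

```python
import numpy as np

def track_stripe_dp(img, smoothness=2.0):
    """Track a roughly horizontal laser stripe across image columns.

    img: 2D float array, brighter pixels = more likely stripe.
    Returns one stripe row per column, chosen by dynamic programming
    so the trace stays continuous even where the stripe is dim.
    """
    rows, cols = img.shape
    cost = np.empty((rows, cols))             # best cumulative cost
    back = np.zeros((rows, cols), dtype=int)  # backpointers
    cost[:, 0] = -img[:, 0]                   # reward bright pixels

    # jump[r_prev, r] penalizes large vertical moves between columns.
    jump = smoothness * np.abs(np.arange(rows)[:, None]
                               - np.arange(rows)[None, :])
    for c in range(1, cols):
        total = cost[:, c - 1][:, None] + jump
        back[:, c] = np.argmin(total, axis=0)
        cost[:, c] = total[back[:, c], np.arange(rows)] - img[:, c]

    # Backtrack the optimal trace from the cheapest final state.
    trace = np.empty(cols, dtype=int)
    trace[-1] = int(np.argmin(cost[:, -1]))
    for c in range(cols - 1, 0, -1):
        trace[c - 1] = back[trace[c], c]
    return trace
```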


Depth Evaluation from Pattern Projection Optimized for Automated Electronics Assembling Robots

  • Park, Jong-Rul; Cho, Jun Dong
    • IEIE Transactions on Smart Processing and Computing / v.3 no.4 / pp.195-204 / 2014
  • This paper presents depth evaluation for object detection by automated assembling robots. Pattern-distortion analysis from a structured light system identifies the object with the greatest depth relative to its background. An automated assembling robot should first select and pick the object with the greatest depth to reduce physical harm during the picking action of the robot arm. Object detection is then combined with depth evaluation to provide a contour showing the edges of the object with the greatest depth. The contour provides shape information to an automated assembling robot, equipped with a laser-based proximity sensor, for picking up and placing an object in the intended place. The depth evaluation process using structured light is accelerated so that each image frame can be used for computation with the simplest experimental setup, consisting of a single camera and a projector. The depth evaluation required 31 ms to 32 ms per frame, which suits a robot vision system equipped with a 30-frames-per-second camera.
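A minimal sketch of the triangulation behind this kind of pattern-distortion depth measurement (the paper's timing-optimized pipeline is not reproduced; the calibration constants below are invented):

```python
import numpy as np

# Illustrative calibration constants, not from the paper:
FOCAL_PX = 800.0    # camera focal length in pixels
BASELINE_M = 0.10   # camera-projector baseline in meters

def depth_from_displacement(x_observed, x_reference):
    """Depth via camera-projector triangulation.

    The projected pattern appears shifted by a disparity
    d = x_observed - x_reference that shrinks with distance:
        Z = f * B / d
    """
    disparity = np.asarray(x_observed, dtype=float) - x_reference
    return FOCAL_PX * BASELINE_M / disparity

# Example: a pattern feature displaced by 40 px from its reference
# position lies at 800 * 0.10 / 40 = 2.0 m.
print(depth_from_displacement(140.0, 100.0))
```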

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young; Ahn, Sang-Tae; Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability is advantageous because the sensors complement and cooperate with each other to obtain better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function for each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from the camera images, which serve as natural landmark points in the self-localization process. When using the laser structured light sensor, it utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data taken at a constant height above the navigation floor. Although either feature group alone can sometimes localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
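The experimentally predefined reliability functions are not given in the abstract; as a stand-in, the sketch below fuses two independent Gaussian estimates by inverse-variance weighting, the standard Bayesian closed form for this situation. All numbers are illustrative.

```python
def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Bayesian fusion of two independent Gaussian estimates.

    The posterior is Gaussian with precision equal to the sum of the
    sensor precisions, so the less reliable (higher-variance) sensor
    contributes proportionally less to the fused estimate.
    """
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (w_a + w_b)
    mu = var * (w_a * mu_a + w_b * mu_b)
    return mu, var

# Vision says x = 1.90 m (var 0.04); the structured light sensor
# says x = 2.05 m (var 0.01). The fused estimate (2.02 m) sits
# closer to the more reliable sensor.
print(fuse_gaussian(1.90, 0.04, 2.05, 0.01))
```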

Localization of Mobile Robot Using Active Omni-directional Ranging System (능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정)

  • Ryu, Ji-Hyung; Kim, Jin-Won; Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.5 / pp.483-488 / 2008
  • An active omni-directional ranging system combining omni-directional vision with structured light has many advantages over conventional ranging systems: robustness against external illumination noise thanks to the laser structured light, and computational efficiency because a single shot contains $360^{\circ}$ of environment information from the omni-directional vision. The omni-directional range data represent a local distance map at a given position in the workspace. In this paper, we propose an algorithm for matching the local distance map against a given global map database, thereby localizing a mobile robot in the global workspace. Since the global map database generally consists of line segments representing the edges of environment objects, the matching algorithm is based on the relative position and orientation of line segments in the local and global maps. The effectiveness of the proposed omni-directional ranging system and the matching algorithm is verified through experiments.
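The abstract only states that matching uses the relative position and orientation of line segments. One minimal way to realize that idea, not necessarily the authors', is to vote for the rotation that best aligns local segment orientations with global ones, then average the translation over agreeing pairs; everything in the sketch below is an assumed toy formulation.

```python
import numpy as np

def segment_angle(seg):
    """Orientation of a segment ((x1, y1), (x2, y2)) in [0, pi)."""
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1) % np.pi

def match_pose(local_segs, global_segs, angle_tol=np.radians(5)):
    # 1) Vote for the rotation: every (local, global) pair proposes one.
    props = [(segment_angle(g) - segment_angle(l)) % np.pi
             for l in local_segs for g in global_segs]
    hist, edges = np.histogram(props, bins=180, range=(0, np.pi))
    theta = edges[np.argmax(hist)] + (edges[1] - edges[0]) / 2

    # 2) Average the translation over pairs that agree on orientation.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    offsets = []
    for l in local_segs:
        for g in global_segs:
            d = (segment_angle(g) - segment_angle(l) - theta
                 + np.pi / 2) % np.pi - np.pi / 2
            if abs(d) < angle_tol:
                offsets.append(np.mean(g, axis=0) - R @ np.mean(l, axis=0))
    t = np.mean(offsets, axis=0) if offsets else np.zeros(2)
    return theta, t  # rotation is only resolved modulo 180 degrees
```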

Depth Measurement System Using Structured Light, Rotational Plane Mirror and Mono-Camera (선형 레이저와 회전 평면경 및 단일 카메라를 이용한 거리측정 시스템)

  • Yoon, Chang-Bae; Kim, Hyong-Suk; Lin, Chun-Shin; Son, Hong-Rak; Lee, Hye-Jeong
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.406-410 / 2005
  • A depth measurement system consisting of a single camera, a laser light source, and a rotating mirror is investigated. The camera and the light source are fixed, facing the rotating mirror. The laser light is reflected by the mirror and projected onto the scene objects whose locations are to be determined. The camera detects the laser light location on object surfaces through the same mirror. The area to be measured is scanned by rotating the mirror. The advantages are that 1) the image of the light stripe remains sharp while that of the background becomes blurred by the mirror rotation, and 2) the only rotating part of the system is the mirror, whose angle is not involved in the depth computation. This minimizes the imprecision caused by possibly inaccurate angle measurement. The detailed arrangement and experimental results are reported.
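As a toy illustration of the claim that depth follows from where the stripe lands in the image, with the mirror angle absent from the computation, the sketch below applies the usual fixed-geometry laser triangulation formula; the baseline, focal length, and laser tilt are invented calibration constants, not the paper's.

```python
import math

# Invented calibration constants, for illustration only.
BASELINE_M = 0.15               # camera-to-laser offset along x
LASER_TILT = math.radians(60)   # laser tilt toward the camera axis
FOCAL_PX = 700.0
CX = 320.0                      # principal point column

def depth_from_pixel(u):
    """Fixed-geometry laser triangulation.

    Camera ray through column u:  x = z * tan(alpha),
    laser ray from (B, 0) tilted toward the axis:  x = B - z / tan(tilt).
    Intersecting the two gives z directly from the pixel position;
    in this toy model the scan only selects which scene point is lit.
    """
    tan_alpha = (u - CX) / FOCAL_PX
    return BASELINE_M / (tan_alpha + 1.0 / math.tan(LASER_TILT))

print(round(depth_from_pixel(400.0), 3))  # about 0.217 m
```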

A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding (GMA 용접에서 용접선 추적용 시각센서의 화상처리에 관한 연구)

  • Chung, K.-C.; Kim, J.-W.
    • Journal of Welding and Joining / v.18 no.3 / pp.60-67 / 2000
  • In this study, we constructed a preview-sensing visual sensor system for weld seam tracking in GMA welding. The visual sensor consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation caused by spatter and/or arc light. To extract the weld joint position and edge points accurately from the captured image, we compared the Hough transform method with the central difference method. As a result, we show that the Hough transform method extracts the points more accurately and can be applied to real-time weld seam tracking. Image processing is carried out to extract the straight lines that represent the laser stripe. After extracting the lines, the weld joint position and edge points are determined from the intersections of the lines. Even when a spatter trace appears in the image, it is possible to recognize the position of the weld joint. Weld seam tracking was implemented precisely using the Hough transform method, and the weld seam can be tracked when the offset angle is within $\pm15^{\circ}$.
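A rough sketch of the Hough-transform step in OpenCV (the threshold values and preprocessing are guesses, not the paper's):

```python
import cv2
import numpy as np

def stripe_lines(gray, n_lines=2):
    """Extract the dominant straight lines of a laser stripe.

    gray: 8-bit single-channel image of the laser stripe.
    Returns up to n_lines (rho, theta) pairs from a standard
    Hough transform on the thresholded stripe pixels.
    """
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lines = cv2.HoughLines(binary, 1, np.pi / 180, 80)
    if lines is None:
        return []
    return [tuple(l[0]) for l in lines[:n_lines]]

# Hypothetical usage:
# gray = cv2.imread("stripe.png", cv2.IMREAD_GRAYSCALE)
# for rho, theta in stripe_lines(gray):
#     print(f"rho={rho:.1f}px, theta={np.degrees(theta):.1f}deg")
```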


A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding

  • Kim, J.-W.; Chung, K.-C.
    • International Journal of Korean Welding Society / v.1 no.2 / pp.23-29 / 2001
  • In this study, a preview-sensing visual sensor system is constructed for weld seam tracking in GMA welding. The visual sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation caused by spatter and/or arc light. Among the image processing methods, the Hough transform method is compared with the central difference method in terms of the capability to extract accurate feature positions. As a result, it was revealed that the Hough transform method extracts the feature positions more accurately and can be applied to real-time weld seam tracking. Image processing including the Hough transform method is carried out to extract the straight lines that represent the laser stripe. After extracting the lines, the weld joint position and edge points are determined by intersecting the lines. Even when the image includes a spatter trace, it is possible to recognize the position of the weld joint. Weld seam tracking was implemented precisely using the Hough transform method, and the weld seam can be tracked when the offset angle is within $\pm15^{\circ}$.
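Complementing the Hough sketch under the Korean-language version of this paper above, the weld joint can be recovered as the intersection of two $(\rho, \theta)$ lines; this follows from the line equation $x\cos\theta + y\sin\theta = \rho$ and is a generic construction, not the authors' code.

```python
import numpy as np

def intersect(line_a, line_b):
    """Intersect two Hough lines given as (rho, theta).

    Each line satisfies x*cos(theta) + y*sin(theta) = rho, so the
    intersection solves a 2x2 linear system; the weld joint is the
    intersection of the two laser-stripe lines.
    """
    (r1, t1), (r2, t2) = line_a, line_b
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(A, np.array([r1, r2]))  # (x, y) in pixels

# Two roughly V-shaped stripe lines meeting at the joint:
print(intersect((120.0, np.radians(60)), (100.0, np.radians(120))))
```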


Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan; Sun, Lina; Li, Xiaotao; Su, Jian; Hao, Zhaobing; Lu, Xue
    • Journal of the Optical Society of Korea / v.18 no.4 / pp.317-329 / 2014
  • In vision measurement systems based on structured light, the key to detection precision is accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers based on a decision function generated to distinguish the real centers from candidate points with a high recognition rate. First, the image is preprocessed with a difference-image method to segment the laser line. Second, feature points at the integer-pixel level are selected as initial line centers using the eigenvalues of the Hessian matrix. Third, since the light intensity of a laser line obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian-matrix eigenvalues for the candidate centers is presented to balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function captures the characteristic that one eigenvalue approaches zero, and enhances the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which is related to the transverse direction of the laser line. The decision function thus assigns higher values to the real centers by jointly considering the properties in the longitudinal and transverse directions of the laser line. Moreover, the method yields a decision value between 0 and 1 for an arbitrary candidate center, providing a normalized measure across different laser lines in different images. Pixels whose normalized result is close to 1 are determined to be real centers by progressively scanning the image columns. Finally, the zero point of a second-order Taylor expansion along the eigenvector direction is employed to refine the extracted central points to the sub-pixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser line centers and achieves a higher recognition rate in two groups of experiments.
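The paper's specific Gaussian and sinusoidal recognition functions are not given in the abstract, so the sketch below shows only the shared Hessian machinery, in the style of Steger's classic line detector: a Gaussian-derivative Hessian, eigen-analysis, and a second-order Taylor step along the transverse eigenvector. The decision rule here is a plain threshold stand-in for the paper's normalized model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def line_centers(img, sigma=2.0, response_thresh=1.0):
    """Candidate laser-line centers from Hessian eigen-analysis.

    At a line center the eigenvalue across the stripe is strongly
    negative (Gaussian profile) while the one along it is near zero
    (constant profile). A second-order Taylor step along the
    transverse eigenvector refines each center to sub-pixel accuracy.
    """
    img = img.astype(float)
    dx = gaussian_filter(img, sigma, order=(0, 1))
    dy = gaussian_filter(img, sigma, order=(1, 0))
    dxx = gaussian_filter(img, sigma, order=(0, 2))
    dyy = gaussian_filter(img, sigma, order=(2, 0))
    dxy = gaussian_filter(img, sigma, order=(1, 1))

    # Eigenvalues of [[dxx, dxy], [dxy, dyy]]; lam2 is the smaller one.
    tr, det = dxx + dyy, dxx * dyy - dxy ** 2
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0))
    lam2 = tr / 2 - disc  # transverse response (most negative)

    centers = []
    ys, xs = np.where(lam2 < -response_thresh)
    for y, x in zip(ys, xs):
        # Transverse eigenvector (nx, ny) for eigenvalue lam2.
        nx, ny = dxy[y, x], lam2[y, x] - dxx[y, x]
        n = np.hypot(nx, ny)
        if n == 0:
            continue
        nx, ny = nx / n, ny / n
        # Sub-pixel offset t where the directional derivative vanishes.
        num = nx * dx[y, x] + ny * dy[y, x]
        den = (nx * nx * dxx[y, x] + 2 * nx * ny * dxy[y, x]
               + ny * ny * dyy[y, x])
        if den != 0:
            t = -num / den
            if abs(t * nx) <= 0.5 and abs(t * ny) <= 0.5:
                centers.append((x + t * nx, y + t * ny))
    return centers
```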

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong; Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.519-524 / 2009
  • This paper describes a new sensor system for 3D environment perception that uses stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate, but too expensive. Because such sensors use rotating light beams, their range measurements are constrained to a plane; 3D measurements are far more useful for obstacle detection, map building, and localization. Stereo vision is a common way of obtaining depth information about a 3D environment, but it requires that correspondences be identified clearly, and it depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used to reduce the effects of ambient light while acquiring a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and then right infrared light projection, provide several benefits, including a wider depth measurement area, higher spatial resolution, and visibility perception.
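A minimal sketch of the capture scheme under assumed details (the paper does not spell out an explicit ambient frame; the frame differencing below is one common way to achieve the ambient-light rejection it describes):

```python
import numpy as np

def pattern_mask(lit, ambient, thresh=15):
    """Isolate the projected IR pattern by frame differencing.

    lit: frame with the IR source on; ambient: same view, source off.
    Subtraction removes slowly varying ambient illumination, so only
    pixels brightened by the projector survive the threshold.
    """
    diff = lit.astype(int) - ambient.astype(int)
    return diff > thresh

def fuse_masks(mask_left, mask_right):
    """Union of the left- and right-source captures.

    Regions shadowed from one source are often visible to the other,
    widening the measurable area, and pixels seen by both sources
    can be cross-checked.
    """
    return mask_left | mask_right

# Usage with three successive frames (ambient, left on, right on):
# mask = fuse_masks(pattern_mask(left_img, ambient_img),
#                   pattern_mask(right_img, ambient_img))
```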

System for Measuring the Welding Profile Using Vision and Structured Light (비전센서와 구조화빔을 이용한 용접 형상 측정 시스템)

  • Kim, Chang-Hyeon; Choe, Tae-Yong; Lee, Ju-Jang; Seo, Jeong; Park, Gyeong-Taek; Gang, Hui-Sin
    • Proceedings of the Korean Society of Laser Processing Conference / 2005.11a / pp.50-56 / 2005
  • Robot systems are widely used in many industrial fields, including welding manufacturing. The essential tasks in operating a welding robot are acquiring the position and/or shape of the parent metal. For seam tracking or robot tracking, many kinds of contact and non-contact sensors are used; recently, vision has become the most popular. This paper describes the development of a system that measures the shape of the welded part. The system uses a line-type structured laser diode and a vision sensor. It includes correction of the radial distortion that is often found in images taken by cameras with short focal lengths. The Direct Linear Transformation (DLT) method is used for camera calibration. The three-dimensional shape of the parent metal is obtained after a simple linear transformation. Some demonstrations are shown to illustrate the performance of the developed system.
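The DLT step named in the abstract has a standard closed form: stack two equations per 3D-2D correspondence and take the null vector of the system by SVD. The sketch below is that textbook construction, requiring at least six correspondences, not the authors' implementation.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P with the DLT method.

    world_pts: (N, 3) 3D points; image_pts: (N, 2) pixel positions;
    N >= 6. Each correspondence (X, u) with u ~ P @ [X, 1] yields
    two rows of the homogeneous system A p = 0, solved by SVD.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        p = [X, Y, Z, 1.0]
        rows.append([*p, 0, 0, 0, 0, *(-u * np.array(p))])
        rows.append([0, 0, 0, 0, *p, *(-v * np.array(p))])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    P = vt[-1].reshape(3, 4)   # null vector = smallest singular value
    return P / P[2, 3]         # fix the arbitrary projective scale

def project(P, X):
    """Apply P to a 3D point and dehomogenize to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```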
