• Title/Summary/Keyword: Structured light vision system


Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.1
    • /
    • pp.438-444
    • /
    • 2018
  • Structured light vision systems have been widely used in 3D surface profiling. Such a system is usually composed of a camera and a laser that projects a line onto the target. Calibration is necessary to acquire 3D information with a structured light stripe vision system. Conventional calibration algorithms estimate the pose of the camera and the equation of the laser's stripe plane in the camera coordinate system; therefore, 3D reconstruction is possible only in the camera frame. In most cases this is sufficient for the given task, but these methods require multiple images acquired under different poses for calibration. In this paper, we propose a calibration algorithm that works from just one shot. The proposed algorithm also enables 3D reconstruction in both the camera and the laser frame, by using a newly designed calibration structure that has multiple vertical planes on the ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility in applications, and the proposed algorithm also improves the accuracy of 3D reconstruction.
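Once such a calibration has recovered the camera intrinsics and the laser plane equation, reconstructing a 3D point from a single stripe pixel reduces to intersecting the pixel's viewing ray with the laser plane. A minimal sketch of that step, with hypothetical calibration values (the intrinsic matrix and plane below are illustrative, not from the paper):

```python
import numpy as np

def reconstruct_point(pixel, K, plane):
    """Triangulate the 3D point on the laser plane seen at one image pixel.

    pixel : (u, v) image coordinates of a point on the laser stripe
    K     : 3x3 camera intrinsic matrix
    plane : (a, b, c, d) laser plane in the camera frame, a*X + b*Y + c*Z + d = 0
    """
    u, v = pixel
    # Back-project the pixel to a viewing ray through the camera center.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    n, d = np.asarray(plane[:3], dtype=float), plane[3]
    # Intersect the ray t*ray with the plane: n . (t*ray) + d = 0
    t = -d / (n @ ray)
    return t * ray

# Hypothetical camera: principal point (320, 240), focal length 500 px;
# laser plane Z = 2 in the camera frame (0*X + 0*Y + 1*Z - 2 = 0).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = reconstruct_point((320, 240), K, (0, 0, 1, -2))
print(P)  # the principal point maps to a point on the optical axis at depth 2
```

The same intersection works in the laser frame once the paper's single-shot calibration supplies the camera-to-laser transform.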

Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering
    • /
    • v.23 no.4
    • /
    • pp.381-390
    • /
    • 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured lighting device. Multiple parallel lines were used as the structured light pattern. The proposed algorithm was composed of three stages. In the first stage, camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, was performed using six known pairs of points. In the second stage, the height of the object was computed from the shift of the projected laser beam on the object. Finally, using the height information at each 2-D image point, the corresponding 3-D information was computed from the camera calibration results. For arbitrary geometric objects, the maximum error of the 3-D features extracted with the proposed algorithm was less than 1-2 mm. The results showed that the proposed algorithm was accurate for 3-D geometric feature detection of an object.
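The second stage exploits a simple triangulation: a laser line tilted away from the camera axis shifts laterally when it lands on a raised surface, and the shift is proportional to the height. A minimal sketch under an assumed geometry (camera looking straight down, laser tilted from vertical; the scale factor and angle are illustrative):

```python
import math

def height_from_shift(shift_px, mm_per_px, laser_angle_deg):
    """Estimate object height from the lateral shift of a projected laser line.

    Assumed geometry: the camera looks straight down and the laser is tilted
    by laser_angle_deg from vertical, so a surface raised by h displaces the
    line in the image by h * tan(angle) (converted to pixels).
    """
    shift_mm = shift_px * mm_per_px
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# A 20-px shift at 0.5 mm/px with a 45-degree laser implies a 10 mm height.
h = height_from_shift(20, 0.5, 45.0)
print(h)
```

The actual shift-to-height mapping in the paper comes from its calibration stage; this only shows the geometric idea.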


Localization of Mobile Robot Using Active Omni-directional Ranging System (능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정)

  • Ryu, Ji-Hyung;Kim, Jin-Won;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.5
    • /
    • pp.483-488
    • /
    • 2008
  • An active omni-directional ranging system that combines omni-directional vision with structured light has many advantages over conventional ranging systems: robustness against external illumination noise, thanks to the laser structured light, and computational efficiency, since a single shot from the omni-directional vision sensor captures 360° of environment information. The omni-directional range data represent a local distance map at a given position in the workspace. In this paper, we propose a matching algorithm between the local distance map and a given global map database, thereby localizing a mobile robot in the global workspace. Since the global map database generally consists of line segments representing the edges of environmental objects, the matching algorithm is based on the relative position and orientation of line segments in the local and global maps. The effectiveness of the proposed omni-directional ranging system and the matching algorithm is verified through experiments.

A Novel Robot Sensor System Utilizing the Combination Of Stereo Image Intensity And Laser Structured Light Image Information

  • Lee, Hyun-Ki;Xingyong, Song;Kim, Min-Young;Cho, Hyung-Suck
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2005.06a
    • /
    • pp.729-734
    • /
    • 2005
  • One of the important research issues in mobile robotics is how to detect the 3D environment quickly and accurately, and how to recognize it. Sensing methods using laser structured light and/or stereo vision are representative among the many methodologies developed to date. However, these methods still need higher accuracy and reliability to be used in real-world environments. In this paper, a new robotic environmental sensing algorithm is presented that combines stereo intensity image information with laser structured light image information. To see how effectively the algorithm applies to real environments, we developed a sensor system that can be mounted on a mobile robot and tested its performance in a series of environments.


Implementation of vision system for a mobile robot using pulse phase difference & structured light (펄스 위상차와 스트럭춰드 라이트를 이용한 이동 로봇 시각 장치 구현)

  • 방석원;정명진;서일홍;오상록
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1991.10a
    • /
    • pp.652-657
    • /
    • 1991
  • The application areas of mobile robots have been expanding, and many types of LRF (Laser Range Finder) systems have been developed to acquire three-dimensional information about unknown environments. In the real world, however, various noise sources (sunlight, fluorescent light) make it difficult to separate the reflected laser light from the noise. To overcome this restriction, we have developed a new type of vision system that enables a mobile robot to measure the distance to an object located 1-5 m ahead with an error of less than 2%. The separation and detection algorithm used in this system combines the pulse phase difference method with multi-stripe structured light. The effectiveness and feasibility of the proposed vision system are demonstrated by 3-D maps of detected objects and a computation time analysis.
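The abstract does not detail its pulse phase difference scheme, but the generic phase-shift ranging relation it presumably builds on is simple: a modulated beam travels to the target and back, accumulating a round-trip phase delay of 4·pi·f·d/c. A minimal sketch with illustrative numbers (modulation frequency and phase are assumptions, not values from the paper):

```python
import math

def distance_from_phase(phase_diff_rad, mod_freq_hz, c=3.0e8):
    """Generic phase-shift ranging: the round trip adds a phase delay of
    4*pi*f*d/c, so d = c * phase / (4 * pi * f)."""
    return c * phase_diff_rad / (4.0 * math.pi * mod_freq_hz)

# A pi/2 phase shift at 10 MHz modulation corresponds to 3.75 m.
d = distance_from_phase(math.pi / 2, 10e6)
print(d)
```

Measuring the phase of the modulated return rather than raw intensity is what makes this kind of system robust to sunlight and fluorescent lighting.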


A study on vision seam tracking system at lap joints (겹치기이음에서 용접선 시각 추적 시스템에 관한 연구)

  • 신정식;김재웅;나석주;최칠룡
    • Journal of Welding and Joining
    • /
    • v.9 no.2
    • /
    • pp.20-28
    • /
    • 1991
  • The main subject of this study is the construction of an automatic welding system capable of tracing the weld seam in GMA welding of lap joints. The system was composed of a vision sensor, a moving torch, and a personal computer (IBM-PC). In the developed vision sensor, an image was captured by the frame grabber at the moment of short circuit during welding. The threshold method was adopted for extracting the structured light and the central difference method for detecting the weld joint. Seam tracking of the torch was performed using the data regeneration algorithm. Because the system uses the image captured at the moment of short circuit, weld seam tracking was performed unaffected by arc light and spatter.
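The abstract names two ingredients: thresholding to isolate the laser stripe, and a central difference to find the joint, where the stripe jumps at the lap-joint edge. One plausible combination of the two, on a synthetic image (the centroid weighting and kernel are assumptions; the paper's exact algorithm is not given):

```python
import numpy as np

def stripe_profile(image, threshold):
    """Per-column stripe position: threshold the image, then take the
    intensity-weighted row centroid of the bright pixels in each column."""
    rows = np.arange(image.shape[0])[:, None]
    w = np.where(image >= threshold, image, 0).astype(float)
    col_sum = w.sum(axis=0)
    return (rows * w).sum(axis=0) / np.where(col_sum > 0, col_sum, 1)

def joint_position(profile):
    """Locate the joint as the largest central-difference slope of the stripe."""
    slope = np.abs(np.convolve(profile, [0.5, 0.0, -0.5], mode="valid"))
    return int(np.argmax(slope)) + 1  # +1: 'valid' trims one sample per side

# Synthetic 10x8 image: stripe on row 2 for the first 4 columns, row 6 after,
# mimicking the lateral jump of the laser line at a lap-joint edge.
img = np.zeros((10, 8), dtype=np.uint8)
img[2, :4] = 200
img[6, 4:] = 200
prof = stripe_profile(img, 100)
print(joint_position(prof))  # column where the stripe jumps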


Real-time Omni-directional Distance Measurement with Active Panoramic Vision

  • Yi, Soo-Yeong;Choi, Byoung-Wook;Ahuja, Narendra
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.2
    • /
    • pp.184-191
    • /
    • 2007
  • Autonomous navigation of a mobile robot requires a ranging system to measure the distance to environmental objects. Obviously, wider and faster distance measurement gives a mobile robot more freedom in trajectory planning and control. The active omni-directional ranging system proposed in this paper is capable of obtaining the distance in all 360° directions in real time thanks to the omni-directional mirror and the structured light. Distance computation, including a sensitivity analysis, and experiments on omni-directional ranging are presented to verify the effectiveness of the proposed system.
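In systems of this kind, the laser spot's radial position in the omni-directional image encodes the ray's depression angle, and the azimuth around the image center gives the 360° direction. A minimal sketch of the radial-to-distance step, under a toy linear mirror calibration (the calibration mapping and laser offset below are assumptions, not the paper's model):

```python
import math

def omni_distance(radius_px, px_to_angle, laser_height):
    """Distance to an object hit by a horizontal structured-light plane,
    seen through an omni-directional mirror.

    radius_px    : radial distance of the laser spot from the image center
    px_to_angle  : hypothetical calibration mapping pixel radius to the
                   depression angle (radians) of the incoming ray
    laser_height : vertical offset between mirror focus and laser plane
    """
    phi = px_to_angle(radius_px)
    # Simple triangulation: the ray descends by laser_height over distance d.
    return laser_height / math.tan(phi)

# Toy linear calibration: 100 px of radius corresponds to 45 deg depression.
calib = lambda r: math.radians(45.0 * r / 100.0)
d = omni_distance(100, calib, 0.3)
print(d)  # about 0.3 m for a 45-degree ray with a 0.3 m offset
```

A real mirror profile makes `px_to_angle` nonlinear, which is where the paper's sensitivity analysis matters: the same pixel error maps to very different range errors at different radii.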

Depth Evaluation from Pattern Projection Optimized for Automated Electronics Assembling Robots

  • Park, Jong-Rul;Cho, Jun Dong
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.4
    • /
    • pp.195-204
    • /
    • 2014
  • This paper presents depth evaluation for object detection by automated assembling robots. Pattern distortion analysis from a structured light system identifies the object with the greatest depth relative to its background. An automated assembling robot should preferentially select and pick the object with the greatest depth to reduce physical harm during the picking action of the robot arm. Object detection is then combined with the depth evaluation to provide a contour showing the edges of the object with the greatest depth. The contour provides shape information to an automated assembling robot, equipped with a laser-based proximity sensor, for picking up an object and placing it in the intended place. The depth evaluation process using structured light is accelerated for per-frame computation with the simplest experimental setup, consisting of a single camera and projector. The depth evaluation experiments required 31 ms to 32 ms per frame, suitable for a robot vision system equipped with a 30-frames-per-second camera.

Creepage Distance Measurement Using Binocular Stereo Vision on Hot-line for High Voltage Insulator

  • He, Wenjun;Wang, Jiake;Fu, Yuegang
    • Current Optics and Photonics
    • /
    • v.2 no.4
    • /
    • pp.348-355
    • /
    • 2018
  • How to measure the creepage distance of an insulator quickly and accurately is a current problem for the power industry; a particular concern is that high voltage insulation equipment cannot be measured online in the charged state. In view of this, we develop an on-line creepage distance measurement system for high voltage insulators based on binocular stereo vision. We propose a method of generating linear structured light using a conical off-axis mirror. Two ways to solve the interference problem of strong sunlight are discussed: one is to use bandpass filters to enhance the contrast of the linear structured light in the images, and the other is to process the images with adaptive threshold segmentation and feature point extraction. After calibrating the system, we tested the measurement error of the on-line system with a composite insulator sample. Experimental results show a maximum relative error of 1.45% and an average relative error of 0.69%, which satisfies the task requirement of a maximum relative error of no more than 5%.
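At the core of any binocular stereo measurement like this sits the classic disparity-to-depth relation Z = f·B/d; the creepage distance is then accumulated along the reconstructed 3D points of the structured-light curve on the insulator surface. A minimal sketch of the core relation with illustrative numbers (not the paper's calibration values):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic rectified binocular stereo relation: Z = f * B / d.

    focal_px     : focal length in pixels (assumed equal for both cameras)
    baseline_mm  : distance between the two camera centers
    disparity_px : horizontal pixel disparity of a matched feature point
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# f = 1200 px, baseline 150 mm, disparity 30 px -> depth 6000 mm.
z = depth_from_disparity(1200, 150, 30)
print(z)
```

Summing the Euclidean distances between consecutive reconstructed points along the stripe then approximates the creepage path length over the insulator sheds.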

Development of a vision sensor for measuring the weld groove parameters in arc welding process (자동 아크 용접공정의 용접개선변수 측정을 위한 시각 시스템)

  • 김호학;부광석;조형석
    • Journal of Welding and Joining
    • /
    • v.8 no.2
    • /
    • pp.58-69
    • /
    • 1990
  • In conventional arc welding, position errors of the weld torch with respect to the weld seam and variations in groove dimensions are induced by inaccurate fit-up and fixturing. In this study, a vision system has been developed to recognize and compensate for these position errors and dimensional inaccuracies. The system uses a structured laser light illuminating the weld groove, perceived by a CCD camera. A new algorithm to detect the edge of the reflected laser light is introduced for real-time processing. The developed system was applied to arbitrary weld paths with various types of joint in the arc welding process. The experimental results show that the proposed system can detect the weld groove parameters with good accuracy and yields good tracking performance.
