• Title/Summary/Keyword: Laser structured light

Implementation of vision system for a mobile robot using pulse phase difference & structured light (펄스 위상차와 스트럭춰드 라이트를 이용한 이동 로봇 시각 장치 구현)

  • 방석원;정명진;서일홍;오상록
    • Institute of Control, Robotics and Systems Conference Proceedings / 1991.10a / pp.652-657 / 1991
  • To date, the application areas of mobile robots have expanded, and many types of LRF (Laser Range Finder) systems have been developed to acquire three-dimensional information about unknown environments. In the real world, however, various noise sources (sunlight, fluorescent light) make it difficult to separate the reflected laser light from the noise. To overcome this restriction, we have developed a new type of vision system which enables a mobile robot to measure the distance to an object located 1-5 m ahead with an error of less than 2%. The separation and detection algorithm used in this system combines a pulse phase difference method with multi-stripe structured light. The effectiveness and feasibility of the proposed vision system are demonstrated by 3-D maps of detected objects and a computation time analysis.
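
The system described above ultimately turns detected stripe pixels into distances by triangulation. Below is a minimal sketch of that stripe-to-depth step under an assumed camera/laser geometry; the baseline, focal length, tilt angle, and pixel values are illustrative only, and the pulse phase difference separation itself is not modeled.

```python
import numpy as np

def stripe_depth(u_px, focal_px, baseline_m, laser_tilt_rad):
    """Depth of a point lit by a laser stripe, by triangulation.

    Assumed geometry (illustration only): camera at the origin looking along +z,
    laser source offset by baseline_m along +x, laser plane tilted by
    laser_tilt_rad toward the camera axis.  A stripe pixel at horizontal offset
    u_px from the principal point satisfies  Z * u / f = baseline - Z * tan(tilt),
    hence  Z = baseline / (u / f + tan(tilt)).
    """
    return baseline_m / (u_px / focal_px + np.tan(laser_tilt_rad))

# Illustrative numbers, not values from the paper: 30 cm baseline, 800 px focal length
print(stripe_depth(u_px=120.0, focal_px=800.0, baseline_m=0.3, laser_tilt_rad=np.radians(5.0)))
```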

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure of map-based localization for mobile robots using a sensor fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability has advantages of complementarity and cooperation in obtaining better information on the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on the probabilistic reliability function of each sensor, predefined through experiments. For self-localization using the monocular vision, the robot utilizes image features consisting of vertical edge lines from input camera images, which are used as natural landmark points in the self-localization process. In the case of the laser structured light sensor, it utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are simultaneously used and fused in terms of information for reliable localization under various environmental conditions. To verify the advantage of using multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
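
A rough sketch of the reliability-weighted Bayesian update described above is given below: two per-pose likelihoods (vision and laser) are tempered by sensor reliability weights and fused over a discrete set of candidate poses. The weights and all numbers are assumptions for illustration, not the reliability functions identified in the paper's experiments.

```python
import numpy as np

def bayes_fuse(prior, lik_vision, lik_laser, rel_vision=0.7, rel_laser=0.9):
    """Fuse two sensor likelihoods over a discrete set of candidate poses.

    All array arguments have one entry per candidate pose.  The reliability
    weights stand in for the experimentally predefined reliability functions
    mentioned in the abstract; the values here are assumptions.
    """
    posterior = prior * (lik_vision ** rel_vision) * (lik_laser ** rel_laser)
    return posterior / posterior.sum()

# Toy example over five candidate poses
prior = np.full(5, 0.2)
lik_vision = np.array([0.10, 0.40, 0.30, 0.10, 0.10])  # vertical-edge landmark match
lik_laser = np.array([0.05, 0.50, 0.35, 0.05, 0.05])   # corner/plane feature match
print(bayes_fuse(prior, lik_vision, lik_laser))
```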

Fusion of Sonar and Laser Sensor for Mobile Robot Environment Recognition

  • Kim, Kyung-Hoon;Cho, Hyung-Suck
    • Institute of Control, Robotics and Systems Conference Proceedings / 2001.10a / pp.91.3-91 / 2001
  • A sensor fusion scheme for mobile robot environment recognition that incorporates range data and contour data is proposed. The ultrasonic sensor provides a coarse spatial description but guarantees, with relatively high belief, open space with no obstacle within its sonic cone. The laser structured light system provides a detailed contour description of the environment but is prone to light noise and is easily affected by surface reflectivity. The overall fusion process is composed of two stages: noise elimination and belief updates. Dempster-Shafer evidential reasoning is applied at each stage. Open-space estimation from sonar range measurements allows elimination of noisy lines from the laser sensor. Comparing actual sonar data to the simulated sonar data enables ...
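
The belief-update stage named above uses Dempster-Shafer evidential reasoning. The sketch below implements Dempster's rule of combination for two mass functions over a small frame of discernment, {occupied, empty}; the mass values are invented for illustration and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozensets."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to contradictory hypotheses
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

theta = frozenset({'occupied', 'empty'})                          # frame of discernment
m_sonar = {frozenset({'empty'}): 0.6, theta: 0.4}                 # open-space evidence
m_laser = {frozenset({'occupied'}): 0.5, frozenset({'empty'}): 0.2, theta: 0.3}
print(dempster_combine(m_sonar, m_laser))
```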

Multi-facet 3D Scanner Based on Stripe Laser Light Image (선형 레이저 광 영상기반 다면 3 차원 스캐너)

  • Ko, Young-Jun;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems / v.22 no.10 / pp.811-816 / 2016
  • In light of recently developed 3D printers for rapid prototyping, there is increasing attention on the 3D scanner as a data acquisition system for existing objects. This paper presents a prototype 3D scanner based on a striped laser light image. In order to solve the problem of shadowed areas, the proposed 3D scanner has two cameras with one laser light source. By using a horizontal rotation table and a rotational arm rotating about the latitudinal axis, the scanner is able to scan in all directions. To remove the need for an additional optical filter for extracting laser-light pixels from an image, a differential image method with laser light modulation is adopted. Experimental results show that the scanner's 3D data acquisition exhibited less than 0.2 mm of measurement error. The scanner thus demonstrates that an object's 3D surface can be reconstructed from the acquired point cloud data, enabling reproduction of the object using a commercially available 3D printer.
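
The differential image method mentioned in the abstract can be sketched as follows: capture one frame with the laser modulated on and one with it off, subtract them, and keep pixels whose intensity difference exceeds a threshold, removing the need for an optical filter. The threshold and the per-row centroid step below are assumptions for illustration.

```python
import numpy as np

def extract_stripe(img_laser_on, img_laser_off, threshold=30):
    """Differential-image extraction of laser-stripe pixels.

    Both inputs are uint8 grayscale frames of the same scene with the laser
    modulated on and off.  Returns a boolean stripe mask and the per-row
    centroid column of the stripe (NaN where no stripe pixel is found).
    """
    diff = img_laser_on.astype(np.int16) - img_laser_off.astype(np.int16)
    mask = diff > threshold
    cols = np.arange(diff.shape[1])
    weights = np.where(mask, diff, 0).astype(np.float64)
    row_sum = weights.sum(axis=1)
    centroid = np.divide(weights @ cols, row_sum,
                         out=np.full(diff.shape[0], np.nan), where=row_sum > 0)
    return mask, centroid

# Tiny synthetic example: a bright stripe at column 5 of an 8x10 frame
off = np.full((8, 10), 20, dtype=np.uint8)
on = off.copy()
on[:, 5] = 200
print(extract_stripe(on, off)[1])   # ~5.0 for every row
```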

Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태;고국원;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference / 1994.10a / pp.510-514 / 1994
  • This paper deals with an application of neural networks to camera calibration with a wide-angle lens and 2-D range finding. A wide-angle lens has the advantage of a wide view angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer neural network is used to calibrate the camera considering lens distortion, and is trained by the error back-propagation method. The MLP maps between the camera image plane and the plane made by the structured light. In experiments, calibration of the camera was executed with a calibration chart printed on a laser printer at 300 d.p.i. resolution. A high-distortion lens, COSMICAR 4.2 mm, was used to see whether the neural network could effectively calibrate the camera distortion. The 2-D range of several objects was measured with a laser range finding system composed of a camera, a frame grabber and laser structured light. The performance of the range finding system was evaluated through experiments and analysis of the results.
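
A minimal sketch of the core idea, an MLP trained by error back-propagation to map distorted image coordinates onto the structured-light plane, is given below. The network size, learning rate, and synthetic radial-distortion data are placeholder assumptions; real training pairs would come from the calibration chart described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))        # points on the structured-light plane
r2 = (xy ** 2).sum(axis=1, keepdims=True)
uv = xy * (1.0 + 0.1 * r2)                        # synthetic radially distorted image coords

# One-hidden-layer MLP mapping (u, v) -> (x, y), trained by back-propagation
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    h = np.tanh(uv @ W1 + b1)                     # forward pass
    pred = h @ W2 + b2
    err = pred - xy                               # gradient of 0.5 * MSE
    gW2 = h.T @ err / len(uv); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)            # back-propagate through tanh
    gW1 = uv.T @ dh / len(uv); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("training MSE:", float((err ** 2).mean()))
```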

Robot Target Tracking Method using a Structured Laser Beam (레이저 구조광을 이용한 로봇 목표 추적 방법)

  • Kim, Jong Hyeong;Koh, Kyung-Chul
    • Journal of Institute of Control, Robotics and Systems / v.19 no.12 / pp.1067-1071 / 2013
  • A 3D visual sensing method using a structured laser beam is presented for robotic tracking applications in a simple and reliable manner. A cylindrically shaped structured laser beam is proposed to measure the pose and position of the target surface. When the proposed laser beam intersects the surface along the target trajectory, an elliptic pattern is generated. Its ellipse parameters can be derived mathematically from the geometrical relationship between the sensor coordinate frame and the target coordinate frame. The depth and orientation of the target surface are directly determined by the ellipse parameters. In particular, two discontinuity points on the ellipse pattern, induced by the seam trajectory, mathematically indicate the 3D direction for robotic tracking. To investigate the performance of this method, experiments with a 6-axis robot system are conducted on two different types of seam trajectories. The results show that this method is very suitable for robot seam tracking applications due to its accuracy and efficiency.
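
One generic way to recover ellipse parameters from the observed laser pattern is an algebraic least-squares conic fit, sketched below. This is a standard fitting step, not necessarily the derivation used in the paper, and the synthetic points are for illustration only.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

    Returns the unit-norm coefficient vector (a, b, c, d, e, f), from which the
    ellipse centre, axes, and tilt, and hence depth and orientation, can be derived.
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]

# Synthetic pattern: a circular laser cross-section seen obliquely projects to an ellipse
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = 3.0 * np.cos(t) + 0.5
y = 1.2 * np.sin(t) - 0.3
print(fit_conic(x, y))
```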

Rapid Fabrication of Micro-nano Structured Thin Film for Water Droplet Separation using 355nm UV Laser Ablation (355 nm UV 레이저 어블레이션을 이용한 마이크로-나노 구조의 액적 분리용 박막 필터 쾌속 제작)

  • Shin, Bo-Sung
    • Journal of the Korean Society for Precision Engineering / v.29 no.7 / pp.799-804 / 2012
  • Micro-nano structures have recently been widely reported to improve the performance of waterproofing, heat isolation, and sound and light absorption in various electronic devices such as mobile phones, batteries, displays and solar panels. Many micro-sized holes on the surface of a thin film provide better sound, heat, or light transmission efficiency than a solid film, while nano-sized protrusions around each micro hole increase the hydrophobicity of the film surface because of the well-known lotus-leaf effect. In this paper, a new rapid fabrication process using 355 nm UV laser ablation is proposed to produce micro-nano structures on the surface of a thin film, which have previously been observed only at higher laser fluence. The hydrophobic property of the developed micro-nano structured thin film was also investigated by measuring the contact angle, and the possibility of applying it to water droplet separation was demonstrated.

Depth Measurement System Using Structured Light, Rotational Plane Mirror and Mono-Camera (선형 레이저와 회전 평면경 및 단일 카메라를 이용한 거리측정 시스템)

  • Yoon Chang-Bae;Kim Hyong-Suk;Lin Chun-Shin;Son Hong-Rak;Lee Hye-Jeong
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.406-410 / 2005
  • A depth measurement system that consists of a single camera, a laser light source and a rotating mirror is investigated. The camera and the light source are fixed, facing the rotating mirror. The laser light is reflected by the mirror and projected onto the scene objects whose locations are to be determined. The camera detects the laser light location on the object surfaces through the same mirror. The scan over the area to be measured is performed by mirror rotation. The advantages are: 1) the image of the light stripe remains sharp while the background becomes blurred because of the mirror rotation, and 2) the only rotating part of the system is the mirror, yet the mirror angle is not involved in the depth computation. This minimizes the imprecision caused by a possibly inaccurate angle measurement. The detailed arrangement and experimental results are reported.
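
Because the mirror angle does not enter the depth computation, depth can be recovered from the stripe's image position alone, for example through a calibration table built from targets at known distances. The sketch below assumes such a hypothetical table; the pixel rows and depths are invented for illustration and are not the paper's calibration.

```python
import numpy as np

def depth_from_pixel(v_px, calib_pixels, calib_depths):
    """Interpolate depth from the laser-stripe pixel position using a calibration table."""
    return np.interp(v_px, calib_pixels, calib_depths)

# Hypothetical calibration: stripe image row (px) vs. measured depth (m)
calib_pixels = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
calib_depths = np.array([0.5, 0.8, 1.3, 2.1, 3.5])
print(depth_from_pixel(np.array([150.0, 350.0]), calib_pixels, calib_depths))
```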

Inspection Algorithm for Screw Head Forming Punch Based on Machine Vision (머신비전을 이용한 나사 머리 성형 펀치의 검사 알고리즘)

  • Jeong, Ku Hyeon;Chung, Seong Youb
    • Journal of Institute of Convergence Technology / v.3 no.2 / pp.31-37 / 2013
  • This paper proposes a vision-based inspection algorithm for a punch used to form the heads of small screws. To maintain good punch quality, precise inspection of its dimensions and the depth of the punch head is important. A CCD camera and a dome illumination light are used to measure the dimensions, and a structured line laser is used to measure the depth of the punch head. The resolution and visible area depend on the laser-camera setup, which is determined using a CAD-based simulation. The proposed method is successfully evaluated through experiments on a #2 punch.
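
For a line-laser triangulation setup such as this one, depth sensitivity scales roughly as Z^2 / (f * b) per pixel of stripe-localization error, which is the kind of quantity a setup simulation would screen before fixing the laser-camera arrangement. The sketch below uses assumed numbers purely for illustration; it is not the paper's CAD-based procedure.

```python
def depth_resolution(z_m, focal_px, baseline_m, stripe_error_px=0.5):
    """Approximate depth uncertainty of a line-laser triangulation measurement.

    For triangulation, dZ/du is roughly Z^2 / (f * b), so a stripe-localization
    error of stripe_error_px pixels maps to about this depth error (in metres).
    """
    return (z_m ** 2) / (focal_px * baseline_m) * stripe_error_px

# Assumed setup: 15 cm working distance, 2000 px focal length, 8 cm baseline
print(depth_resolution(z_m=0.15, focal_px=2000.0, baseline_m=0.08))
```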
