• Title/Summary/Keyword: Laser-Vision Fusion Sensor

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a map-based localization procedure for mobile robots that uses a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capabilities is advantageous because they complement and cooperate with each other to obtain better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot extracts vertical edge lines from the input camera images and uses them as natural landmark points. With the laser structured light sensor, it uses geometrical features composed of corners and planes, extracted from range data at a constant height above the navigation floor, as natural landmark shapes. Although each feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
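
The fusion step described above combines two pose estimates weighted by predefined sensor reliabilities. Below is a minimal, hypothetical sketch of such a reliability-weighted (product-of-Gaussians) fusion of a vision-based and a laser-based pose estimate; the function name, state layout (x, y, theta), and numeric covariances are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_gaussian_estimates(mu_vision, cov_vision, mu_laser, cov_laser):
    """Fuse two independent Gaussian pose estimates (x, y, theta).

    Product-of-Gaussians update: the information (inverse covariance)
    of each sensor acts as its reliability weight.
    """
    info_v = np.linalg.inv(cov_vision)
    info_l = np.linalg.inv(cov_laser)
    cov_fused = np.linalg.inv(info_v + info_l)
    mu_fused = cov_fused @ (info_v @ mu_vision + info_l @ mu_laser)
    return mu_fused, cov_fused

# Illustrative numbers: vision is more reliable in heading, laser in position.
mu_v = np.array([1.02, 2.10, 0.30])
cov_v = np.diag([0.09, 0.09, 0.01])
mu_l = np.array([1.00, 2.00, 0.35])
cov_l = np.diag([0.01, 0.01, 0.04])
print(fuse_gaussian_estimates(mu_v, cov_v, mu_l, cov_l))
```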

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves a 3D reconstruction result by using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map. The disparity map resulting from this compensation process is the multi-sensor fusion disparity map, which is then used to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is described in four subsections covering virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
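
As a rough illustration of the projection and hole-filling steps described above, the sketch below projects LRF points into the camera with assumed extrinsics (standing in for the paper's Φ, Δ) and intrinsics K, converts depth to disparity, and patches invalid stereo disparities. The pinhole and rectified-stereo assumptions and all names are ours, not the paper's.

```python
import numpy as np

def lrf_disparity_map(points_lrf, R, t, K, baseline, focal, shape):
    """Project LRF 3D points into the camera and convert depth to disparity.

    points_lrf: (N, 3) points in the LRF frame; R, t: assumed LRF-to-camera
    extrinsics; K: 3x3 intrinsics. Disparity = focal * baseline / Z.
    """
    pts_cam = (R @ points_lrf.T + t.reshape(3, 1)).T   # transform into camera frame
    z = pts_cam[:, 2]
    valid = z > 0.1                                     # keep points in front of the camera
    uv = (K @ pts_cam[valid].T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    disp = np.zeros(shape, dtype=np.float32)
    inside = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    disp[v[inside], u[inside]] = focal * baseline / z[valid][inside]
    return disp

def fuse_disparity(stereo_disp, lrf_disp):
    """Replace invalid stereo disparities (<= 0) with LRF disparities where available."""
    fused = stereo_disp.copy()
    holes = (stereo_disp <= 0) & (lrf_disp > 0)
    fused[holes] = lrf_disp[holes]
    return fused
```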

Autonomous Robot Kinematic Calibration using a Laser-Vision Sensor (레이저-비전 센서를 이용한 Autonomous Robot Kinematic Calibration)

  • Jeong, Jeong-Woo;Kang, Hee-Jun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.2 s.95
    • /
    • pp.176-182
    • /
    • 1999
  • This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a laser line projected onto the surface of an object, a long, straight line of very fine string is set up inside the robot workspace, and the sensor mounted on the robot measures the intersection point of the string and the projected laser line. The point data, collected over varying robot configurations and sensor measurements, are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple, accurate, and suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
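
The core of the closed-loop formulation above is the straightness constraint: all measured string/laser intersection points must lie on one 3D line once the kinematic model is correct. Below is a minimal sketch of that residual, assuming the points are already expressed in the robot base frame through the current kinematic model; the function name and the SVD line fit are our choices, not the paper's.

```python
import numpy as np

def straight_line_residuals(points):
    """Distance of each measured point from the best-fit 3D line.

    points: (N, 3) intersection points in the robot base frame computed via
    the current kinematic model. With a perfect model, all residuals would
    be zero because the physical string is a single straight line.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal direction of the point cloud = best-fit line direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Component of each point orthogonal to the line direction.
    along = centered @ direction
    residual_vec = centered - np.outer(along, direction)
    return np.linalg.norm(residual_vec, axis=1)

# In a closed-loop calibration, these residuals would be minimized over the
# kinematic parameters (e.g. with scipy.optimize.least_squares).
```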

Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.8
    • /
    • pp.656-663
    • /
    • 2001
  • This paper presents a sensor fusion method to recognize a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors mounted on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light onto the object surface. The 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses a transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, i.e., a matched filter. The distance is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and the radius of cylindrical objects, a statistical sensor fusion method is used. Experimental results show that the fused data increase the reliability of the object recognition.
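
A small sketch of the matched-filter time-of-flight step described above, assuming a sampled echo signal and a single stored template; the names and the single-template simplification are ours.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def estimate_range(received, template, fs):
    """Matched-filter time-of-flight estimate for one ultrasonic transducer.

    received: sampled echo signal; template: stored transmit pulse;
    fs: sampling frequency in Hz. Returns (distance in m, peak amplitude).
    """
    corr = np.correlate(received, template, mode="valid")
    lag = int(np.argmax(np.abs(corr)))
    tof = lag / fs                            # time of flight in seconds
    distance = tof * SPEED_OF_SOUND           # per the abstract; halve for a
                                              # round-trip pulse-echo setup
    return distance, float(np.abs(corr[lag]))
```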

A Study of Inspection of Weld Bead Defects using Laser Vision Sensor (레이저 비전 센서를 이용한 용접비드의 외부결함 검출에 관한 연구)

  • 이정익;이세헌
    • Journal of Welding and Joining
    • /
    • v.17 no.2
    • /
    • pp.53-60
    • /
    • 1999
  • Conventionally, a CCD camera and a vision sensor using a projected pattern of light are used to inspect weld bead defects. With this method, however, considerable time is needed for image preprocessing, stripe extraction, thinning, and similar steps. In this study, a laser vision sensor using a scanning beam of light is used to shorten the time required for image preprocessing. Software for deciding in real time whether the weld bead has a proper shape is developed. The criteria are based on the classification of imperfections in metallic fusion welds (ISO 6520) and the limits for imperfections (ISO 5817).
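
Below is a hedged sketch of a real-time pass/fail check on a scanned bead cross-section. The acceptance limits here are placeholders; the actual values would come from the ISO 5817 quality levels referenced above, and the profile representation is our assumption.

```python
# Hypothetical acceptance limits; real values come from ISO 5817 quality levels.
LIMITS = {"max_undercut_mm": 0.5, "max_excess_weld_metal_mm": 2.0, "min_bead_width_mm": 4.0}

def inspect_bead_profile(heights_mm, widths_mm, limits=LIMITS):
    """Classify one laser-scanned bead cross-section as pass/fail.

    heights_mm: bead height samples along the scan (negative = below base metal);
    widths_mm: bead width samples along the scan.
    """
    defects = []
    if max(heights_mm) > limits["max_excess_weld_metal_mm"]:
        defects.append("excess weld metal")
    if min(heights_mm) < -limits["max_undercut_mm"]:
        defects.append("undercut")
    if min(widths_mm) < limits["min_bead_width_mm"]:
        defects.append("insufficient bead width")
    return ("fail", defects) if defects else ("pass", [])
```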

3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, or sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is large and slow in computing depth for omni-directional images. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. Fusion points are calculated from the plane coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image, which captures the surrounding view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
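
As an illustration of how the two data sources might be associated, the sketch below converts a 2D laser scan to floor-plane coordinates and projects 3D points into a downward-facing fisheye image. The equidistant fisheye model and all function names are assumptions on our part; the paper's actual camera model and fusion-point computation may differ.

```python
import numpy as np

def laser_to_plane(ranges, angles):
    """2D laser scan (polar) -> obstacle points on the scan plane (x, y)."""
    ranges = np.asarray(ranges)
    angles = np.asarray(angles)
    return np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)

def project_equidistant_fisheye(points_xyz, f, cx, cy):
    """Project 3D points into a downward-facing fisheye image.

    Assumes an ideal equidistant model r = f * theta, where theta is the angle
    from the optical axis; the paper's camera model may differ.
    """
    x, y, z = points_xyz.T
    theta = np.arctan2(np.hypot(x, y), z)
    phi = np.arctan2(y, x)
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)
```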

A Study on the Sensor Fusion Method to Improve Localization of a Mobile Robot (이동로봇의 위치추정 성능개선을 위한 센서융합기법에 관한 연구)

  • Jang, Chul-Woong;Jung, Ki-Ho;Kong, Jung-Shik;Jang, Mun-Suk;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference
    • /
    • 2007.10a
    • /
    • pp.317-318
    • /
    • 2007
  • One of the important capabilities of an autonomous mobile robot is to build a map of the surrounding environment and estimate its own location. This paper suggests a sensor fusion method that combines a laser range finder and a monocular vision sensor for simultaneous localization and map building. The robot observes corner points in the environment as features using the laser range finder, and extracts SIFT features with the monocular vision sensor. The improved localization performance of the mobile robot is verified by experiment.
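
A small sketch of the two feature extractors mentioned above: corner candidates from a laser scan by thresholding the bend angle between consecutive segments, and SIFT features via OpenCV (assuming opencv-python 4.4 or later). The corner heuristic and parameter values are illustrative, not the paper's.

```python
import numpy as np
import cv2  # opencv-python >= 4.4 includes SIFT in the main module

def laser_corner_indices(xy, angle_thresh_deg=30.0):
    """Flag scan points where consecutive segments bend sharply (corner candidates)."""
    v1 = xy[1:-1] - xy[:-2]
    v2 = xy[2:] - xy[1:-1]
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-9)
    angles = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.where(angles > angle_thresh_deg)[0] + 1

def image_sift_features(gray_image):
    """SIFT keypoints and descriptors from the monocular camera image."""
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(gray_image, None)
```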

Radar, Vision, Lidar Fusion-based Environment Sensor Fault Detection Algorithm for Automated Vehicles (레이더, 비전, 라이더 융합 기반 자율주행 환경 인지 센서 고장 진단)

  • Choi, Seungrhi;Jeong, Yonghwan;Lee, Myungsu;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.9 no.4
    • /
    • pp.32-37
    • /
    • 2017
  • For automated vehicles, the integrity and fault tolerance of environment perception sensors are important issues. This paper presents a radar, vision, and lidar (laser radar) fusion-based fault detection algorithm for autonomous vehicles. The characteristics of each sensor are described, and the error in the states of moving targets estimated by each sensor is analyzed to derive a method for detecting environment-sensor faults from the characteristics of this error. The moving-target states are estimated for each sensor by an EKF/IMM method. To guarantee the reliability of the fault detection algorithm, various driving data collected on several types of roads are analyzed.
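
The sketch below illustrates one way a residual-based check like the one described above could be organized: each sensor's per-target track (e.g. from its own EKF/IMM filter) is compared against a robust consensus track, and a persistently large normalized deviation flags a fault. The consensus-by-median choice and the threshold are our assumptions, not the paper's method.

```python
import numpy as np

def detect_faulty_sensor(estimates, threshold=3.0):
    """Flag a sensor whose target-state estimate deviates persistently from the others.

    estimates: dict of sensor name -> (T, 4) array of [x, y, vx, vy] track states
    for the same target over T time steps (e.g. from per-sensor EKF/IMM filters).
    """
    names = list(estimates.keys())
    stacked = np.stack([estimates[n] for n in names])   # (sensors, T, 4)
    consensus = np.median(stacked, axis=0)              # robust consensus track
    scale = np.std(stacked, axis=(0, 1)) + 1e-9         # per-state normalization
    rms = np.sqrt(np.mean(((stacked - consensus) / scale) ** 2, axis=(1, 2)))
    return {n: bool(r > threshold) for n, r in zip(names, rms)}
```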

Sensor Fusion-Based Semantic Map Building (센서융합을 통한 시맨틱 지도의 작성)

  • Park, Joong-Tae;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.3
    • /
    • pp.277-282
    • /
    • 2011
  • This paper describes sensor fusion-based semantic map building, which can improve the capabilities of a mobile robot in various domains including localization, path planning, and mapping. To build a semantic map, various kinds of environmental information, such as doors and cliff areas, should be extracted autonomously. Therefore, we propose a method to detect doors, cliff areas, and robust visual features using a laser scanner and a vision sensor. Doors are detected by GHT (Generalized Hough Transform)-based recognition of door handles together with the geometrical features of a door. To detect cliff areas and robust visual features, a tilting laser scanner and SIFT features are used, respectively. The proposed method was verified by various experiments and showed that the robot could build a semantic map autonomously in various indoor environments.
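
As a small example of the cliff-area detection mentioned above, the sketch below flags tilted-laser points that fall well below the expected floor plane; the frame convention and threshold are assumptions, not the paper's parameters.

```python
import numpy as np

def detect_cliff(scan_xyz, floor_z=0.0, drop_thresh=0.15):
    """Mark tilted-laser scan points that fall well below the floor plane.

    scan_xyz: (N, 3) points in the robot frame; points more than drop_thresh
    metres below the expected floor height are labelled as cliff candidates.
    """
    return scan_xyz[:, 2] < (floor_z - drop_thresh)
```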

Autonomous Navigation of KUVE (KIST Unmanned Vehicle Electric) (KUVE (KIST 무인 주행 전기 자동차)의 자율 주행)

  • Chun, Chang-Mook;Suh, Seung-Beum;Lee, Sang-Hoon;Roh, Chi-Won;Kang, Sung-Chul;Kang, Yeon-Sik
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.7
    • /
    • pp.617-624
    • /
    • 2010
  • This article describes the system architecture of KUVE (KIST Unmanned Vehicle Electric) and its unmanned autonomous navigation at KIST. KUVE, a light-duty electric vehicle, is equipped with two laser range finders, a vision camera, a differential GPS system, an inertial measurement unit, odometers, and control computers for autonomous navigation. KUVE estimates and tracks road boundaries such as curbs and lane lines using a laser range finder and a vision camera. When no road boundary is detectable, it follows a predetermined trajectory using the DGPS, IMU, and odometers. KUVE achieves a success rate of over 80% in autonomous navigation at KIST.
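
Below is a minimal sketch of the boundary-versus-trajectory decision described above, with a toy curb detector based on a height discontinuity in the laser scan; thresholds and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def detect_curb(scan_xyz, height_jump=0.08):
    """Return True if the laser scan shows a curb-like step along the road edge.

    scan_xyz: (N, 3) points ordered across the road; a height discontinuity
    larger than height_jump metres between neighbouring points is treated as a curb.
    """
    dz = np.abs(np.diff(scan_xyz[:, 2]))
    return bool(np.any(dz > height_jump))

def choose_guidance(curb_found, lane_found):
    """Track the detected road boundary; otherwise fall back to the DGPS/IMU/odometry trajectory."""
    return "boundary_tracking" if (curb_found or lane_found) else "predetermined_trajectory"
```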