• Title/Summary/Keyword: Laser-Vision Fusion Sensor

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires a more advanced and robust perception system than conventional intelligent vehicles. Single-sensor-based perception systems have been widely studied using cameras and laser radar, the most representative perception sensors, which provide object information such as distance and object features. The distance information of the laser radar sensor is used to perceive road structures, vehicles, and pedestrians, while the image information of the camera is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor-based perception systems suffer from false positives and false negatives caused by sensor limitations and road environments. Accordingly, information fusion is essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments; in particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated with an autonomous vehicle on various roads and under various environmental conditions.

Road Recognition based Extended Kalman Filter with Multi-Camera and LRF (다중카메라와 레이저스캐너를 이용한 확장칼만필터 기반의 노면인식방법)

  • Byun, Jae-Min;Cho, Yong-Suk;Kim, Sung-Hoon
    • The Journal of Korea Robotics Society / v.6 no.2 / pp.182-188 / 2011
  • This paper describes a method of road tracking that extracts road boundaries (road lane and curb) from vision and laser data for navigation of an intelligent transport robot in structured road environments. Road boundary information plays a major role in developing such intelligent robots. For global navigation, we use a global positioning system together with a global planner; local navigation is accomplished by recognizing the road lane and curb, and by estimating their locations relative to the current robot pose with an EKF (Extended Kalman Filter) algorithm, assuming prior information about the road. The complete system has been tested on an electric vehicle equipped with cameras, laser scanners, and GPS. Experimental results are presented to demonstrate the effectiveness of the combined laser and vision system for detecting road curbs and lane boundaries.
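The EKF-based boundary tracking described above can be sketched as a standard predict/update cycle. The two-element state (lateral offset and relative heading of the lane or curb), the motion model, and all noise values below are illustrative assumptions for a sketch, not the paper's exact formulation:

```python
import numpy as np

def ekf_step(x, P, z, u, dt, Q, R):
    """One EKF predict/update for state x = [lateral offset, heading].

    u is the forward speed; z is a direct offset measurement, e.g. from
    lane/curb detection in the camera or laser data (assumed model).
    """
    # Predict: offset drifts with heading and forward speed (assumed model)
    x_pred = np.array([x[0] + u * np.sin(x[1]) * dt, x[1]])
    F = np.array([[1.0, u * np.cos(x[1]) * dt],
                  [0.0, 1.0]])                    # Jacobian of the motion model
    P_pred = F @ P @ F.T + Q

    # Update: correct the predicted offset with the measurement
    H = np.array([[1.0, 0.0]])                    # we observe the offset only
    y = z - H @ x_pred                            # innovation
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

Each scan, the filter pulls the estimate toward the detected boundary while the covariance P tracks how much the prediction can be trusted.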

The Weld Defects Expression Method by the Concept of Segment Splitting Method and Mean Distance (분할법과 평균거리 개념에 의한 용접 결함 표현 방법)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers / v.16 no.2 / pp.37-43 / 2007
  • In this paper, a laser vision sensor is used in hardware to detect defects in $CO_2$-welded specimens. To best express the defects of a welded specimen, the concepts of a segment splitting method and mean distance are introduced in software. The developed GUI software is used to decide in real time whether a welded specimen has a proper shape or contains defects. The criteria are based on ISO 5817, limits of imperfections in metallic fusion welds.

Human Legs Stride Recognition and Tracking based on the Laser Scanner Sensor Data (레이저센서 데이터융합기반의 복수 휴먼보폭 인식과 추적)

  • Jin, Taeseok
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.3 / pp.247-253 / 2019
  • In this paper, we present a new method for real-time tracking of humans walking around a laser sensor system. The method converts range data in $r$-$\theta$ coordinates to a 2D image in $x$-$y$ coordinates. Human tracking is then performed using human features, i.e., the appearance of human walking patterns, together with the input range data. The laser-sensor-based human tracking method has the advantage of simplicity over conventional methods that extract human faces from vision data. In our method, the problem of estimating the 2D positions and orientations of two walking humans at ankle level is formulated based on a moving-trajectory algorithm. In addition, the proposed tracking system employs an HMM to robustly track humans in the case of occlusions. Experimental results using a real system demonstrate the usefulness of the proposed method.
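The first step of the method above, mapping $r$-$\theta$ range readings into $x$-$y$ coordinates before rendering the scan as a 2D image, is a plain polar-to-Cartesian conversion. The sketch below (with illustrative input values, not data from the paper) shows the idea:

```python
import numpy as np

def polar_to_xy(ranges, angles):
    """Map each laser range reading (r, theta) to Cartesian (x, y).

    ranges: array of distances r; angles: array of beam angles theta
    in radians. Returns an (N, 2) array of (x, y) points.
    """
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    return np.stack([xs, ys], axis=1)
```

The resulting point set can then be rasterized into an occupancy-style image on which walking-pattern features are extracted.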

Design of range measurement systems using a sonar and a camera (초음파 센서와 카메라를 이용한 거리측정 시스템 설계)

  • Moon, Chang-Soo;Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.14 no.2 / pp.116-124 / 2005
  • In this paper, range measurement systems are designed using an ultrasonic sensor and a camera. An ultrasonic sensor provides the range to a target quickly and simply, but its low resolution is a disadvantage. We tackle this problem by employing a camera. Instead of using a stereoscopic sensor, which is widely used for 3D sensing but requires computationally intensive stereo matching, the range is measured by focusing and by structured lighting. For focusing, a straightforward focus measure named MMDH (min-max difference in histogram) is proposed and compared with existing techniques. In the structured-lighting method, light stripes projected by a beam projector are used; compared to systems using a laser beam projector, the designed system can be constructed easily on a low budget. The system equation is derived by analysing the sensor geometry. A sensing scenario using the designed systems proceeds in two steps. First, when better accuracy is required, measurements from ultrasonic sensing and camera focusing are fused by MLE (maximum likelihood estimation). Second, when the target is in a range of particular interest, a range map of the target scene is obtained using the structured-lighting technique. In experiments, the designed systems showed measurement accuracy of approximately 0.3 mm.
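Under the usual assumption of independent Gaussian sensor noise, MLE fusion of two range estimates, such as the ultrasonic and focusing measurements above, reduces to inverse-variance weighting. The function below is a generic sketch of that rule, with placeholder variances rather than the paper's calibrated sensor models:

```python
def mle_fuse(z1, var1, z2, var2):
    """Fuse two independent Gaussian range estimates by maximum likelihood.

    z1, z2: range measurements; var1, var2: their noise variances.
    Returns the fused estimate and its (smaller) variance.
    """
    w1 = 1.0 / var1                     # weight = inverse variance
    w2 = 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)  # precision-weighted average
    var = 1.0 / (w1 + w2)               # fused variance < min(var1, var2)
    return z, var
```

The more precise sensor dominates the fused estimate, and the fused variance is always smaller than either input variance, which is why combining the coarse sonar with the camera improves accuracy.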

Audio-Visual Fusion for Sound Source Localization and Improved Attention (음성-영상 융합 음원 방향 추정 및 사람 찾기 기술)

  • Lee, Byoung-Gi;Choi, Jong-Suk;Yoon, Sang-Suk;Choi, Mun-Taek;Kim, Mun-Sang;Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.7 / pp.737-743 / 2011
  • Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa; human beings also mainly depend on visual and auditory information in their daily lives. In this paper, we conduct two studies using audio-vision fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.

A Short-term Dynamic Displacement Estimation Method for Civil Infrastructures (사회기반 건설구조물의 단기 동적변위 산정기법)

  • Choi, Jaemook;Chung, Junyeon;Koo, Gunhee;Kim, Kiyoung;Sohn, Hoon
    • Journal of the Computational Structural Engineering Institute of Korea / v.30 no.3 / pp.249-254 / 2017
  • This paper presents a new short-term dynamic displacement estimation method based on an accelerometer and a geophone. The proposed method combines acceleration and velocity measurements through a real-time data fusion algorithm based on a Kalman filter. It can estimate the displacement of a structure without displacement sensors, which are typically difficult to apply to earthquake or fire sites because they require a fixed rigid support. The proposed method double-integrates the acceleration measurement recursively and corrects the accumulated integration error using the velocity measurement. The performance of the proposed method was verified by a lab-scale test, in which the displacement estimated by the proposed method was compared to a reference displacement measured by a laser Doppler vibrometer (LDV).
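The fusion idea above, double-integrating acceleration in the predict step and correcting the accumulated drift with the geophone velocity in the update step, can be sketched as a standard Kalman filter. The state, models, and noise levels below are illustrative assumptions, not the paper's tuned formulation:

```python
import numpy as np

def kf_disp_step(x, P, accel, vel_meas, dt, q_a=0.01, r_v=0.001):
    """One Kalman step for state x = [displacement, velocity].

    accel: accelerometer sample (control input); vel_meas: geophone
    velocity sample; q_a, r_v: illustrative noise intensities.
    """
    # Predict: integrate acceleration into velocity and displacement
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x_pred = F @ x + B * accel
    Q = q_a * np.outer(B, B)               # process noise from accel noise
    P_pred = F @ P @ F.T + Q

    # Update: geophone velocity corrects the integration drift
    H = np.array([[0.0, 1.0]])             # we observe velocity only
    y = vel_meas - H @ x_pred              # innovation
    S = H @ P_pred @ H.T + r_v
    K = P_pred @ H.T / S                   # Kalman gain
    x_new = x_pred + (K * y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

Because displacement is reached only through integration, the velocity correction is what keeps the double-integrated acceleration from drifting over the short estimation window.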