• Title/Summary/Keyword: Sensor fusion

A Study of Sensor Fusion using Radar Sensor and Vision Sensor in Moving Object Detection (레이더 센서와 비전 센서를 활용한 다중 센서 융합 기반 움직임 검지에 관한 연구)

  • Kim, Se Jin;Byun, Ki Hun;Won, In Su;Kwon, Jang Woo
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.16 no.2, pp.140-152, 2017
  • This paper presents a study of sensor fusion using a radar sensor and a vision sensor for moving object detection. A radar sensor has problems detecting objects: when the sensor is moved by wind or similar disturbances, it can falsely detect stationary objects such as buildings or trees. A vision sensor is useful over a wide area and is widely used, but it also has weaknesses: it is easily affected by lighting conditions, shaking of the sensor device, weather, and so on. This paper therefore proposes fusing the two sensors for object detection. Each sensor compensates for the other's weaknesses, so this kind of sensor fusion makes object detection considerably more robust.
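
The fusion idea in this abstract, letting each sensor cover the other's failure modes, can be sketched as a confidence-weighted combination. The function, weights, and threshold below are illustrative assumptions, not the paper's method:

```python
# A minimal sketch (not the authors' implementation) of complementary
# radar/vision fusion: each detection carries a per-sensor confidence,
# and a moving object is declared when the fused confidence is high,
# even if one sensor is degraded (e.g. radar shaken by wind, camera
# blinded by lighting). Weights and threshold are illustrative.

def fuse_detections(radar_conf, vision_conf,
                    radar_reliability=0.7, vision_reliability=0.7,
                    threshold=0.5):
    """Combine per-sensor confidences in [0, 1] into one fused score.

    The reliability weights model how much each sensor is currently
    trusted (e.g. lower vision_reliability at night or in rain).
    """
    w_r, w_v = radar_reliability, vision_reliability
    fused = (w_r * radar_conf + w_v * vision_conf) / (w_r + w_v)
    return fused, fused >= threshold

# Camera degraded by glare: radar alone still confirms the object.
score, is_object = fuse_detections(radar_conf=0.9, vision_conf=0.3,
                                   vision_reliability=0.2)
print(f"fused confidence = {score:.2f}, moving object: {is_object}")
```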

A Study of Observability Analysis and Data Fusion for Bias Estimation in a Multi-Radar System (다중 레이더 환경에서의 바이어스 오차 추정의 가관측성에 대한 연구와 정보 융합)

  • Won, Gun-Hee;Song, Taek-Lyul;Kim, Da-Sol;Seo, Il-Hwan;Hwang, Gyu-Hwan
    • Journal of Institute of Control, Robotics and Systems, v.17 no.8, pp.783-789, 2011
  • Improving target tracking performance using multi-sensor data fusion is a challenging task. However, biases in the measurements should be removed before the various data fusion techniques are applied. In this paper, a bias removal algorithm using measurement data from multi-radar tracking systems is proposed and evaluated by computer simulation. To predict bias estimation performance under various geometric relations between the radar systems and the target, a system observability index is proposed and tested via computer simulation. It is also shown that target tracking which utilizes multi-sensor data fusion with bias-removed measurements yields better performance.
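
A much-simplified illustration (an assumed setup, not the paper's estimator): with two radars viewing the same track, differencing their measurements makes the relative bias observable by simple averaging, while observability of the absolute biases depends on geometry, which is what an observability index is meant to quantify:

```python
import numpy as np

# Two radars observe the same target path; each has an unknown constant
# measurement bias. The measurement difference cancels the true position,
# so the relative bias (bias_a - bias_b) is directly observable.

rng = np.random.default_rng(0)
true_track = np.cumsum(rng.normal(size=(50, 2)), axis=0)   # target path
bias_a, bias_b = np.array([3.0, -1.0]), np.array([-2.0, 2.0])

z_a = true_track + bias_a + rng.normal(scale=0.5, size=true_track.shape)
z_b = true_track + bias_b + rng.normal(scale=0.5, size=true_track.shape)

# z_a - z_b = (bias_a - bias_b) + noise  ->  average over the track.
rel_bias_hat = (z_a - z_b).mean(axis=0)
print("relative bias estimate:", rel_bias_hat)   # approx. [5, -3]
```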

Development of the Driving Path Estimation Algorithm for Adaptive Cruise Control System and Advanced Emergency Braking System Using Multi-sensor Fusion (ACC/AEBS 시스템용 센서퓨전을 통한 주행경로 추정 알고리즘)

  • Lee, Dongwoo;Yi, Kyongsu;Lee, Jaewan
    • Journal of Auto-vehicle Safety Association, v.3 no.2, pp.28-33, 2011
  • This paper presents a driving path estimation algorithm for adaptive cruise control and advanced emergency braking systems using multi-sensor fusion. Through data collection, the characteristics of yaw-rate-based road curvature and vision-sensor road curvature are analyzed. The two curvature estimates are fused into a single curvature by a weighting factor that takes the characteristics of each data source into account. The proposed driving path estimation algorithm has been investigated via simulations performed with the vehicle simulation package CarSim and Matlab/Simulink. The simulations show that the proposed algorithm improves the primary target detection rate.
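
The weighting-factor fusion the abstract describes can be sketched as a blend of the two curvature estimates. The speed-dependent weighting heuristic below is an assumption for illustration, not the paper's tuned factor:

```python
# Fuse yaw-rate-based curvature with vision-based curvature, trusting the
# vision estimate more at low speed and the yaw-rate estimate more at
# higher speed, where the camera's lookahead tends to be less reliable.

def fuse_curvature(kappa_yaw, kappa_vision, speed_mps):
    """Blend two road-curvature estimates [1/m] with a speed-based weight."""
    w_vision = max(0.2, min(0.8, 1.0 - speed_mps / 40.0))  # heuristic weight
    return w_vision * kappa_vision + (1.0 - w_vision) * kappa_yaw

print(fuse_curvature(kappa_yaw=0.010, kappa_vision=0.012, speed_mps=25.0))
```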

Estimation of Train Position Using Sensor Fusion Technique (센서융합에 의한 열차위치 추정방법)

  • Yoon Hee-Sang;Park Tae-Hyoung;Yoon Yong-Gi;Hwang Jong-Gyu;Lee Jae-Ho
    • Journal of the Korean Society for Railway, v.8 no.2, pp.155-160, 2005
  • We propose a train position estimation method for automatic train control systems. The accurate train position must be continuously fed back to the control system for safe and efficient operation of trains on a railway. In this paper, we propose a sensor fusion method integrating a tachometer, a transponder, and a Doppler sensor for estimation of train position. The external sensors (transponder, Doppler sensor) are used to compensate for the error of the internal sensor (tachometer). A Kalman filter is also applied to reduce the measurement error of the sensors. Simulation results are then presented to verify the usefulness of the proposed method.
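
A scalar Kalman-filter sketch of the tachometer/transponder part of the scheme (all noise values are illustrative assumptions, and the real system also fuses a Doppler sensor):

```python
# The tachometer (internal sensor) reports incremental distance whose error
# accumulates; transponder passes give absolute position fixes that reset
# the drift. A 1-D Kalman filter fuses the two.

q_tacho = 0.04    # process-noise variance from tachometer drift (per step)
r_fix   = 1.0     # variance of a transponder position fix [m^2]

x, p = 0.0, 0.0   # estimated position along the track and its variance
for step in range(1, 101):
    x += 1.0              # predict: ~1 m advance reported by the tachometer
    p += q_tacho          # uncertainty grows between absolute fixes
    if step % 25 == 0:    # a transponder (balise) provides an absolute fix
        z = step * 1.0    # simulated fix at the true position
        k = p / (p + r_fix)        # Kalman gain
        x += k * (z - x)           # correct the drifted estimate
        p *= (1.0 - k)             # uncertainty shrinks after the fix
print(f"position = {x:.1f} m, variance = {p:.3f}")
```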

Implementation of a Wireless Distributed Sensor Network Using a Data Fusion Kalman-Consensus Filter (정보 융합 칼만-Consensus 필터를 이용한 분산 센서 네트워크 구현)

  • Song, Jae-Min;Ha, Chan-Sung;Whang, Ji-Hong;Kim, Tae-Hyo
    • Journal of the Institute of Convergence Signal Processing, v.14 no.4, pp.243-248, 2013
  • In wireless sensor networks, consensus algorithms for dynamic systems can be used flexibly for data fusion across the network. In this paper, a distributed data fusion filter is implemented using average consensus over distributed sensor data; the network is composed of several sensor nodes and a sink node that tracks the mean value of the n sensors' data. The consensus filter resolves the data fusion problem with a distributed Kalman filtering scheme. We show that the consensus filter converges optimally, reducing noise propagation, and tracks input signals quickly. To verify the consensus filtering results, we present the output signals of the sensor nodes and their filtered values, and then the combined signal and the consensus filtering result obtained over ZigBee communication.
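
The average-consensus step at the core of such filters can be sketched as follows. The ring topology and step size are assumptions, and the full Kalman-Consensus filter adds a local Kalman update that is omitted here:

```python
import numpy as np

# Each node repeatedly moves toward the mean of its neighbors, so all
# nodes converge to the network-wide average of the initial readings.

readings = np.array([20.3, 19.8, 21.1, 20.6, 19.5])  # one reading per node
x = readings.copy()
eps = 0.2   # small step size; stable for eps below 1 / (max node degree)

n = len(x)
for _ in range(200):
    x_new = x.copy()
    for i in range(n):        # ring topology: neighbors are i-1 and i+1
        left, right = x[(i - 1) % n], x[(i + 1) % n]
        x_new[i] = x[i] + eps * ((left - x[i]) + (right - x[i]))
    x = x_new

print("consensus values:", x)            # ~ readings.mean() at every node
print("true average    :", readings.mean())
```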

Effect of Correcting Radiometric Inconsistency between Input Images on Spatio-temporal Fusion of Multi-sensor High-resolution Satellite Images (입력 영상의 방사학적 불일치 보정이 다중 센서 고해상도 위성영상의 시공간 융합에 미치는 영향)

  • Park, Soyeon;Na, Sang-il;Park, No-Wook
    • Korean Journal of Remote Sensing, v.37 no.5_1, pp.999-1011, 2021
  • In spatio-temporal fusion, which aims at predicting images with both high spatial and high temporal resolution from multi-sensor images, radiometric inconsistency between the input multi-sensor images may affect prediction performance. This study investigates the effect of radiometric correction, which compensates for the different spectral responses of multi-sensor satellite images, on spatio-temporal fusion results. The effect of relative radiometric correction of the input images was quantitatively analyzed through case studies using Sentinel-2, PlanetScope, and RapidEye images obtained over two croplands. Prediction performance improved when radiometrically corrected multi-sensor images were used as input. In particular, the improvement was substantial when the correlation between input images was relatively low. Prediction performance could be improved by transforming multi-sensor images with different spectral responses into images with similar spectral responses and high correlation. These results indicate that radiometric correction is required to improve prediction performance in spatio-temporal fusion of weakly correlated multi-sensor satellite images.
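
One common form of relative radiometric correction is a per-band linear fit between overlapping images; a minimal sketch follows (the paper's exact normalization procedure may differ, and the sensor names in the comments are just the ones the abstract mentions):

```python
import numpy as np

def relative_normalize(src_band, ref_band):
    """Fit ref ~ a*src + b on overlapping pixels, return corrected src."""
    a, b = np.polyfit(src_band.ravel(), ref_band.ravel(), deg=1)
    return a * src_band + b

rng = np.random.default_rng(1)
ref = rng.uniform(0.1, 0.4, size=(100, 100))              # e.g. a Sentinel-2 band
src = 1.3 * ref - 0.02 + rng.normal(0, 0.01, ref.shape)   # e.g. PlanetScope band
corrected = relative_normalize(src, ref)
print("RMSE to reference after correction:",
      np.sqrt(np.mean((corrected - ref) ** 2)))
```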

Sensor Fusion System for Improving the Recognition Performance of 3D Object (3차원 물체의 인식 성능 향상을 위한 감각 융합 시스템)

  • Kim, Ji-Kyoung;Oh, Yeong-Jae;Chong, Kab-Sung;Wee, Jae-Woo;Lee, Chong-Ho
    • Proceedings of the KIEE Conference, 2004.11c, pp.107-109, 2004
  • In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information. The proposed system focuses on improving the recognition performance for 3D objects. Unlike a conventional object recognition system that uses an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor. A neural network is used to fuse the two kinds of information. Tactile signals are obtained from the reaction forces measured by pressure sensors at the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiments evaluate the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its strong learning ability but also its reliability: the tactile information allows various objects to be recognized even when the visual information is defective. The experimental results show that the proposed system can improve the recognition rate and reduce learning time. These results verify the effectiveness of the proposed sensor fusion system as a recognition scheme for 3D objects.
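
An early-fusion sketch of the idea: visual and tactile feature vectors are concatenated and fed to one classifier. The synthetic features and single softmax layer below are illustrative stand-ins for the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_vis, d_tac, n_cls = 300, 16, 8, 4

# Synthetic features: each class has its own visual and tactile signature.
y = rng.integers(0, n_cls, size=n)
vis = rng.normal(size=(n, d_vis)) + y[:, None]   # visual features
tac = rng.normal(size=(n, d_tac)) + y[:, None]   # tactile features
x = np.hstack([vis, tac])                        # early fusion: concatenate

W = np.zeros((d_vis + d_tac, n_cls))
for _ in range(200):                             # softmax regression
    logits = x @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n), y] -= 1.0                    # grad of cross-entropy
    W -= 0.1 * x.T @ p / n

pred = (x @ W).argmax(axis=1)
print("training accuracy with fused features:", (pred == y).mean())
```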

Design and Performance Analysis of Distributed Detection Systems with Two Passive Sonar Sensors (수동 소나 쌍을 이용한 분산탐지 체계의 설계 및 성능 분석)

  • Kim, Song-Geun;Do, Joo-Hwan;Song, Seung-Min;Hong, Sun-Mog;Kim, In-Ik;Oh, Won-Tchon
    • Journal of the Korea Institute of Military Science and Technology, v.12 no.2, pp.159-169, 2009
  • In this paper, the optimum design of distributed detection is considered for a parallel sensor network consisting of a fusion center and two passive sonar nodes. The AND rule and the OR rule are employed as the fusion rules of the sensor network. For these fusion rules, it is shown that a threshold rule at each sensor node has uniformly most powerful properties. The optimum threshold for each sensor is sought that maximizes the probability of detection under a constraint on the probability of false alarm. It is also investigated through numerical experiments how signal strength, false alarm probability, and the distance between the two sensor nodes affect the detection performance of the system.
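
For two independent local detectors, the AND and OR fusion rules combine the per-node detection and false-alarm probabilities in closed form, which is the basis of the design trade-off studied here (the numeric values below are illustrative):

```python
def fused_performance(pd1, pfa1, pd2, pfa2, rule="OR"):
    """System Pd and Pfa for two independent detectors under AND/OR fusion."""
    if rule == "AND":   # declare a target only if both nodes detect
        return pd1 * pd2, pfa1 * pfa2
    else:               # OR: declare a target if either node detects
        return (pd1 + pd2 - pd1 * pd2,
                pfa1 + pfa2 - pfa1 * pfa2)

# AND lowers the false-alarm rate; OR raises the detection probability.
print("AND:", fused_performance(0.8, 0.05, 0.7, 0.05, rule="AND"))
print("OR :", fused_performance(0.8, 0.05, 0.7, 0.05, rule="OR"))
```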

Neural Network Approach to Sensor Fusion System for Improving the Recognition Performance of 3D Objects (3차원 물체의 인식 성능 향상을 위한 감각 융합 신경망 시스템)

  • Dong Sung Soo;Lee Chong Ho;Kim Ji Kyoung
    • The Transactions of the Korean Institute of Electrical Engineers D, v.54 no.3, pp.156-165, 2005
  • Human beings recognize the physical world by integrating a great variety of sensory inputs, the information acquired by their own actions, and their knowledge of the world, using a hierarchically parallel-distributed mechanism. In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information. The proposed system focuses on improving the recognition performance for 3D objects. Unlike a conventional object recognition system that uses an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor. A neural network is used to fuse the two sensory signals. Tactile signals are obtained from the reaction forces measured by pressure sensors at the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiments evaluate the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its strong learning ability but also its reliability: the tactile information allows various objects to be recognized even when the visual sensory signals are defective. The experimental results show that the proposed system can improve the recognition rate and reduce learning time. These results verify the effectiveness of the proposed sensor fusion system as a recognition scheme for 3D objects.
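
Complementing the early-fusion sketch shown for the conference version above, a late-fusion variant combines per-modality class scores, which makes it easy to down-weight a defective visual channel, matching the reliability claim in this abstract. The scores and weights below are purely illustrative:

```python
import numpy as np

def late_fuse(vis_scores, tac_scores, w_vis=0.5):
    """Weighted sum of per-class scores from the two modalities."""
    return w_vis * vis_scores + (1.0 - w_vis) * tac_scores

vis = np.array([0.10, 0.20, 0.60, 0.10])   # visual classifier, 4 classes
tac = np.array([0.05, 0.10, 0.80, 0.05])   # tactile classifier
print("normal   :", late_fuse(vis, tac).argmax())                 # class 2

# Visual sensor defective (e.g. occlusion): rely mostly on touch.
vis_bad = np.array([0.30, 0.30, 0.20, 0.20])
print("defective:", late_fuse(vis_bad, tac, w_vis=0.1).argmax())  # still 2
```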

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin Tae-Seok;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems, v.11 no.4, pp.304-313, 2005
  • The robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and orientation to perform intelligently in an unknown environment. Mobile robots may navigate by means of a number of sensing systems, such as sonar or vision. Note that in conventional fusion schemes the measurement depends on the current data sets only; therefore, more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this research, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the unscented transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
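
The unscented transformation at the heart of the UTSF scheme propagates deterministically chosen sigma points through a nonlinear model to recover the transformed mean and covariance. A minimal sketch with standard UT parameters (not the paper's values) follows:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through nonlinear f via 2n+1 sigma points."""
    n = mean.size
    s = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + s.T, mean - s.T])   # 2n+1 points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))           # sigma-point weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in sigma])                 # transformed points
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example: range-bearing observation of a landmark from the origin.
f = lambda p: np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
m, P = unscented_transform(np.array([10.0, 5.0]), np.eye(2) * 0.5, f)
print("measurement mean:", m)
print("measurement cov :\n", P)
```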