• Title/Summary/Keyword: Multi-sensor data fusion


Data Association and Its Applications to Intelligent Systems: A Review (데이터 연관 문제와 지능시스템에서의 응용: 리뷰)

  • Oh, Song-Hwai
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.3 / pp.1-11 / 2012
  • Data association plays an important role in intelligent systems. This paper presents the Bayesian formulation of data association and its applications to intelligent systems. We first describe the Bayesian formulation of data association developed for solving multi-target tracking problems in a cluttered environment. Then we review applications of data association in intelligent systems, including surveillance using wireless sensor networks, identity management for air traffic control, camera network localization, and multi-sensor fusion.
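To make the Bayesian association idea concrete, here is a minimal 1-D sketch, not the paper's multi-target formulation: given a target's Gaussian prediction, it computes posterior probabilities that each return (or a missed detection) originated from the target. All parameter values are illustrative.

```python
import math

def association_posteriors(pred, sigma, measurements, clutter_density=0.1, p_detect=0.9):
    """Posterior probability that each measurement (or a missed detection)
    originates from a target predicted at `pred` with std `sigma` (1-D)."""
    # Likelihood of each return under the target's Gaussian prediction,
    # weighted by the probability of detection.
    weights = [p_detect * math.exp(-0.5 * ((z - pred) / sigma) ** 2)
               / (sigma * math.sqrt(2.0 * math.pi)) for z in measurements]
    # Hypothesis that the target was missed and all returns are clutter.
    weights.append((1.0 - p_detect) * clutter_density)
    total = sum(weights)
    return [w / total for w in weights]

# Two returns in clutter: one near the prediction, one far from it.
probs = association_posteriors(pred=0.0, sigma=1.0, measurements=[0.2, 3.5])
```

The return nearest the prediction receives the highest posterior; the last entry is the missed-detection hypothesis.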

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. A combination of sensors with different characteristics and limited sensing capabilities offers advantages in terms of complementariness and cooperation, yielding better information on the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from input camera images, which serve as natural landmark points in the self-localization process. When using the laser structured light sensor, by contrast, it utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although either feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
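The core of Bayesian fusion of two sensors with known reliabilities can be sketched as inverse-variance weighting of two Gaussian estimates. This is an illustrative simplification, not the paper's actual reliability functions:

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian estimates of the same
    quantity: the fused variance is the harmonic combination and the
    fused mean is the inverse-variance weighted average."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Hypothetical vision (less certain) and laser (more certain) estimates
# of the robot's x-position in metres.
mu, var = fuse_gaussian(1.0, 0.04, 1.2, 0.01)
```

The fused estimate lands closer to the more reliable sensor, and its variance is smaller than either input variance.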

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.807-821 / 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to constructing a set of time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF), were applied in this comparative experiment. The three spatio-temporal fusion models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of the model type, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date had a greater effect on prediction performance than the time difference between the pair dates and the prediction date. In addition, using the vegetation index as the input to spatio-temporal fusion showed better prediction performance, alleviating error propagation, compared with computing the vegetation index from fused reflectance values. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
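The temporal-change assumption shared by such fusion models can be sketched per pixel: the fine image on the prediction date is approximated by the fine image on the pair date plus the change observed in the co-registered coarse images. This is a bare-bones illustration that omits the spectral and spatial weighting the actual models use:

```python
def predict_fine(fine_t0, coarse_t0, coarse_tp):
    """Per-pixel temporal prediction in the STARFM spirit:
    fine(tp) ~ fine(t0) + [coarse(tp) - coarse(t0)],
    i.e. the fine image at the pair date plus the reflectance change
    observed between the two coarse acquisitions."""
    return [f + (cp - c0) for f, c0, cp in zip(fine_t0, coarse_t0, coarse_tp)]

# Two hypothetical pixels: reflectance at the pair date and the coarse
# observations at the pair date and the prediction date.
predicted = predict_fine([0.20, 0.30], [0.25, 0.25], [0.30, 0.30])
```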

Performance enhancement of launch vehicle tracking using GPS-based multiple radar bias estimation and sensor fusion (GPS 기반 추적레이더 실시간 바이어스 추정 및 비동기 정보융합을 통한 발사체 추적 성능 개선)

  • Song, Ha-Ryong
    • Journal of Korea Society of Industrial Information Systems / v.20 no.6 / pp.47-56 / 2015
  • In a multi-sensor system, sensor registration errors such as sensor biases must be corrected so that the individual sensor data are expressed in a common reference frame. If the registration process is not executed properly, large tracking errors or the formation of multiple tracks on the same target can occur. In a launch vehicle tracking system in particular, every observation must lie on the same reference frame so that the fused trajectory can serve as the best track for slaving data. Hence, this paper describes on-line bias estimation/correction and asynchronous sensor fusion for launch vehicle tracking. The bias estimation architecture is designed around a pseudo bias measurement derived from the error observed between GPS and radar measurements. Asynchronous sensor fusion is then applied to enhance tracking performance.
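The pseudo-bias idea can be sketched with a scalar Kalman filter: the difference between the radar range and the range implied by the GPS reference trajectory serves as a noisy measurement of the (near-constant) radar bias. The noise variances below are illustrative, not the paper's values:

```python
class BiasEstimator:
    """Scalar Kalman filter on a near-constant radar range bias.
    The pseudo measurement is radar_range - gps_range."""
    def __init__(self, q=1e-6, r=1e-2):
        self.bias, self.p = 0.0, 1.0   # bias estimate and its variance
        self.q, self.r = q, r          # process / measurement noise variances

    def update(self, radar_range, gps_range):
        pseudo = radar_range - gps_range       # pseudo bias measurement
        self.p += self.q                       # predict: bias assumed constant
        k = self.p / (self.p + self.r)         # Kalman gain
        self.bias += k * (pseudo - self.bias)  # correct with the pseudo measurement
        self.p *= (1.0 - k)
        return self.bias
```

Feeding it a radar track offset from GPS by a fixed amount drives the estimate to that offset, which can then be subtracted before fusion.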

Locality Aware Multi-Sensor Data Fusion Model for Smart Environments (스마트 환경을 위한 장소 인식 멀티센서 데이터 퓨전 모델)

  • Nawaz, Waqas;Fahim, Muhammad;Lee, Sung-Young;Lee, Young-Koo
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.78-80 / 2011
  • In the area of data fusion, dealing with heterogeneous data sources, numerous models have been proposed over the last three decades to facilitate different application domains, e.g. the Department of Defense (DoD), monitoring of complex machinery, medical diagnosis, and smart buildings. All of these models share the theme of multi-level processing to obtain more reliable and accurate information. In this paper, we consider the five most widely accepted fusion models (Intelligence Cycle, Joint Directors of Laboratories, Boyd control loop, Waterfall, Omnibus) applied to different areas of data fusion. When they are exposed to a real scenario in which a large dataset from heterogeneous sources is utilized for object monitoring, they may lead to inefficient and unreliable information for decision making. The proposed variation performs better in terms of time and accuracy due to prior data diminution.

UTV Localization from Fusion of Dead-Reckoning and LBL System

  • Jeon, Sang-Woon;Jung, Sul;Won, Moon-Cheol;Hong, Sup
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.64.4-64 / 2001
  • Localization plays the key role in controlling a mobile robot. In this paper, the development of a sensor fusion algorithm for controlling a UTV (Unmanned Tracked Vehicle) is presented. The multi-sensorial dead-reckoning subsystem is established based on optimal filtering, first fusing the heading angle readings from a magnetic compass, a rate gyro, and two encoders mounted on the robot wheels, thereby computing the dead-reckoned location. These data and the position data provided by the LBL system are then fused together by means of an extended Kalman filter. The algorithm is validated through simulation studies.
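The dead-reckoning/LBL fusion loop can be sketched in 1-D with a linear Kalman filter (the paper uses an extended Kalman filter in full vehicle coordinates; all numbers here are illustrative): odometry displacements propagate the position and grow its uncertainty, and an absolute LBL fix pulls it back.

```python
class DRFusion:
    """1-D sketch: propagate position with dead-reckoned displacements,
    correct with absolute LBL position fixes."""
    def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.5):
        self.x, self.p = x0, p0   # position estimate and its variance
        self.q, self.r = q, r     # process / LBL measurement noise variances

    def predict(self, dx):
        self.x += dx              # dead-reckoned displacement
        self.p += self.q          # uncertainty grows while dead reckoning

    def correct(self, z):
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # pull toward the LBL fix
        self.p *= (1.0 - k)

f = DRFusion()
for _ in range(10):               # ten dead-reckoned steps of 1 m
    f.predict(1.0)
f.correct(10.5)                   # one LBL fix corrects accumulated drift
```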


Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in the parallel type and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real-time, it has several problems to overcome. Therefore, after we capture low-resolution depth images by ToF depth sensors, we perform a post-processing to solve the problems. Then, the depth information of the depth sensor is warped to color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying the stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we have obtained more accurate and stable multi-view disparity maps in reduced time.
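The warping of ToF depth to initial disparity values rests on the standard stereo relation, sketched here as a minimal helper (parameter values are illustrative):

```python
def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a ToF depth (metres) to a stereo disparity (pixels):
    disparity = focal_length * baseline / depth."""
    return focal_px * baseline_m / depth_m

# A point 2 m away, seen by a rig with a 1000 px focal length
# and a 10 cm baseline, yields a 50 px initial disparity.
d = depth_to_disparity(2.0, 1000.0, 0.1)
```

Disparities computed this way can seed a stereo matcher, which then refines them, as the paper does with belief propagation.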

Multi-sensor data fusion based assessment on shield tunnel safety

  • Huang, Hongwei;Xie, Xin;Zhang, Dongming;Liu, Zhongqiang;Lacasse, Suzanne
    • Smart Structures and Systems / v.24 no.6 / pp.693-707 / 2019
  • This paper proposes an integrated safety assessment method that can take data from multiple sources into consideration based on a data fusion approach. Data cleaning using the Kalman filter (KF) was conducted first on the monitoring data from each sensor. The inclination data from the four tilt sensors of the same monitoring section were associated and synchronized in time. Secondly, a finite element method (FEM) model was established to physically correlate the external forces with various structural responses of the shield tunnel, including the measured inclination. The response surface method (RSM) was adopted to express the relationship between the external forces and the structural responses. Then, the external forces were updated based on the in situ monitoring data from the tilt sensors using the extended Kalman filter (EKF). Finally, the mechanics parameters of the tunnel lining were estimated from the updated data to make an integrated safety assessment. An application example of the proposed method is presented for an urban tunnel subject to a nearby deep excavation with multiple-source monitoring plans. The changes in tunnel convergence, bolt stress, and segment internal forces can also be calculated from the real-time deformation monitoring of the shield tunnel. The proposed method was verified by predicting the data from the other three sensors in the same section. The correlation among the different monitoring data is discussed before the conclusions are drawn.
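The response-surface step can be sketched as fitting a low-order polynomial surrogate to FEM samples, so that force-to-response evaluations become cheap during updating. The sample values below are synthetic stand-ins for real FEM runs:

```python
import numpy as np

# Hypothetical FEM samples: external force (kN) vs. computed inclination (mrad),
# generated synthetically here in place of actual FEM analyses.
forces = np.linspace(0.0, 100.0, 11)
inclinations = 0.002 * forces + 1e-5 * forces ** 2

# Response surface: a quadratic polynomial fitted to the FEM samples,
# a cheap surrogate mapping external force to expected inclination.
coeffs = np.polyfit(forces, inclinations, 2)

def predicted_inclination(force):
    """Evaluate the fitted response surface at a given force."""
    return float(np.polyval(coeffs, force))
```

A filter such as the EKF can then invert this surrogate, updating the force estimate whenever a new tilt measurement arrives.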

Obstacle Avoidance and Planning using Optimization of Cost Function based Distributed Control Command (분산제어명령 기반의 비용함수 최소화를 이용한 장애물회피와 주행기법)

  • Bae, Dongseog;Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.21 no.3 / pp.125-131 / 2018
  • In this paper, we propose a homogeneous multi-sensor-based navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments with moving obstacles, using multiple ultrasonic sensors. Instead of a "sensor fusion" method, which generates the robot trajectory from an environment model and sensory data, a "command fusion" method based on fuzzy inference is used to govern the robot motions. The major factors for robot navigation are represented as a cost function. Using the robot state and environment data, the weight of each factor is determined by fuzzy inference to obtain an optimal trajectory in dynamic environments. To evaluate the proposed algorithm, we performed simulations on a PC as well as real experiments with the mobile robot AmigoBot. The results show that the proposed algorithm identifies obstacles in unknown environments and guides the robot safely to the goal location.
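The cost-function idea can be sketched with fixed weights (the paper tunes the weights by fuzzy inference; the cost terms and values here are illustrative): each candidate heading is scored by its deviation from the goal direction plus a penalty for pointing near an obstacle, and the cheapest candidate wins.

```python
def best_heading(candidates, goal_heading, obstacle_heading, w_goal=1.0, w_obs=2.0):
    """Pick the candidate heading (radians) minimising a weighted cost:
    deviation from the goal direction plus a penalty that ramps up
    within ~1 rad of the obstacle direction."""
    def cost(h):
        goal_term = w_goal * abs(h - goal_heading)
        obstacle_term = w_obs * max(0.0, 1.0 - abs(h - obstacle_heading))
        return goal_term + obstacle_term
    return min(candidates, key=cost)

# Goal straight ahead (0 rad), obstacle slightly to the right (0.5 rad):
# the robot swerves left instead of driving at the obstacle.
h = best_heading([-0.5, 0.0, 0.5], goal_heading=0.0, obstacle_heading=0.5)
```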

Attitude Control of Quad-rotor by Improving the Reliability of Multi-Sensor System (다종 센서 융합의 신뢰성 향상을 통한 쿼드로터 자세 제어)

  • Yu, Dong Hyeon;Park, Jong Ho;Ryu, Ji Hyoung;Chong, Kil To
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.5 / pp.517-526 / 2015
  • This paper presents the results of a study on improving the reliability of quadrotor attitude control by applying multiple sensors along with a data fusion algorithm. First, a mathematical model of the quadrotor dynamics was developed. Then, using this model, simulations were performed with the reliability-improved multi-sensor data as the inputs. Based on the simulation results, we designed a gimbal-equipped quadrotor system. With the quadrotor in a hover state, we performed experiments according to user-specified angle changes. We then calculated the attitude control data from the actual experimental data. Furthermore, with additional simulations, we verified the performance of the designed quadrotor attitude control system with multiple sensors.
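A common minimal form of gyro/accelerometer fusion for attitude, offered here as an illustrative sketch rather than the paper's algorithm, is the complementary filter: the integrated gyro rate (smooth but drifting) is blended each step with the accelerometer-derived angle (noisy but drift-free).

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step for a single attitude angle (radians):
    trust the integrated gyro in the short term (weight alpha) and the
    accelerometer angle in the long term (weight 1 - alpha)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Hovering with a stationary gyro but a tilted accelerometer reading:
# the estimate converges to the accelerometer angle instead of drifting.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=0.1, dt=0.01)
```

The blend weight alpha trades gyro smoothness against accelerometer correction speed; 0.98 is a typical illustrative value.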