• Title/Summary/Keyword: Data fusion

Search Results: 1,564

Motion Estimation of 3D Planar Objects using Multi-Sensor Data Fusion (센서 융합을 이용한 움직이는 물체의 동작예측에 관한 연구)

  • Yang, Woo-Suk
    • Journal of Sensor Science and Technology / v.5 no.4 / pp.57-70 / 1996
  • Motion can be estimated continuously from each sensor through analysis of the instantaneous states of an object. This paper introduces a method for estimating the general 3D motion of a planar object from its instantaneous states using multi-sensor data fusion. The instantaneous states of the object are estimated using a linear feedback estimation algorithm. The motion estimated from each sensor is fused to provide more accurate and reliable information about the motion of an unknown planar object. We present a fusion algorithm that combines averaging and deciding. Under the assumption that the motion is smooth, the approach can handle data sequences from multiple sensors with different sampling times. Simulation results show that the proposed algorithm is advantageous in terms of accuracy, speed, and versatility.

  • PDF
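
The combined averaging-and-deciding fusion rule described in the abstract can be sketched roughly as below. The median-based rejection test, the inverse-variance weights, and the function name are illustrative assumptions, not the paper's actual algorithm.

```python
from statistics import median

def fuse_estimates(estimates, variances, reject_sigma=3.0):
    """Fuse per-sensor motion estimates: first *decide* which estimates
    to keep (reject outliers far from the median), then *average* the
    survivors with inverse-variance weights."""
    med = median(estimates)
    kept = [(x, v) for x, v in zip(estimates, variances)
            if abs(x - med) <= reject_sigma * v ** 0.5]
    weights = [1.0 / v for _, v in kept]
    return sum(w * x for (x, _), w in zip(kept, weights)) / sum(weights)
```

For example, three sensors reporting 1.0, 1.2, and 10.0 with equal variance 0.1 fuse to 1.1 once the outlying 10.0 is rejected.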

A Study on the Effect of Data Fusion on the Retrieval Effectiveness of Web Documents (데이터 결합이 웹 문서 검색성능에 미치는 영향 연구)

  • Park, Ok-Hwa;Chung, Young-Mee
    • Journal of Information Management / v.38 no.1 / pp.1-19 / 2007
  • This study investigates the effect of data fusion on retrieval effectiveness through an experiment combining multiple representations of Web documents. The document representations combined in the study include content terms, links, anchor text, and URLs. The experimental results showed that the data fusion technique combining document representation methods in the Web environment did not bring any significant improvement in retrieval effectiveness.
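
A common score-level rule for fusing retrieval runs built from different document representations is CombSUM; the abstract does not say which fusion rule the study used, so the sketch below is only illustrative.

```python
def comb_sum(runs):
    """CombSUM data fusion: min-max normalize each run's scores, then
    sum each document's normalized scores across all runs (e.g. runs
    built from content terms, anchor text, and URLs).  Returns document
    ids ranked by fused score, best first."""
    fused = {}
    for run in runs:
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0   # guard against a constant-score run
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + (score - lo) / span
    return sorted(fused, key=fused.get, reverse=True)
```

A document scored highly by several representations rises in the fused ranking even if no single run ranks it first.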

Predictive Spatial Data Fusion Using Fuzzy Object Representation and Integration: Application to Landslide Hazard Assessment

  • Park, No-Wook;Chi, Kwang-Hoon;Chung, Chang-Jo;Kwon, Byung-Doo
    • Korean Journal of Remote Sensing / v.19 no.3 / pp.233-246 / 2003
  • This paper presents a methodology that accounts for the partial or gradual changes of environmental phenomena in categorical map information for the fusion/integration of multiple spatial data. A fuzzy set based spatial data fusion scheme is applied to account for the fuzziness of boundaries in categorical information representing partial or gradual environmental impacts. The fuzziness or uncertainty of a boundary is represented by two kinds of fuzzy membership functions based on the fuzzy object concept, and their effects are quantitatively evaluated with the help of a cross-validation procedure. A case study of landslide hazard assessment demonstrates the better performance of this scheme compared to traditional crisp boundary representation.
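
The paper evaluates two kinds of fuzzy membership functions; one simple illustrative shape (not necessarily either of the paper's) is a linear taper across a fuzzy boundary zone, as sketched here.

```python
def boundary_membership(d, core, support):
    """Fuzzy membership of a location at signed distance `d` inward
    from a categorical map boundary: 1 inside the crisp core, 0 beyond
    the outer edge of the support zone, and a linear taper across the
    fuzzy boundary zone in between.  `core` and `support` are
    illustrative widths, not values from the paper."""
    if d >= core:
        return 1.0
    if d <= -support:
        return 0.0
    return (d + support) / (core + support)
```

A point exactly on the mapped boundary (d = 0, equal core and support widths) gets membership 0.5 instead of an all-or-nothing class label.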

Street Fashion Information Analysis System Design Using Data Fusion

  • Park, Hye-Won;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.879-888 / 2005
  • Fashion is hard to predict owing to rapid changes in consumer taste and environment, and it tends toward variety and individuality. In particular, street fashion of the 21st century is no longer regarded as merely a subculture but plays an important role as a fountainhead of fashion trends. Therefore, searching and analyzing street fashion helps us understand the popular fashions of the next season, and it is also important for understanding consumer fashion sense and commercial areas. We thus need to understand fashion styles quantitatively and qualitatively by providing visual data and classifying images. There are many kinds of data in street fashion information. The purpose of this study is to design and implement a street fashion information analysis system using data fusion. The system can present visual information from the customer's viewpoint because it can analyze fused image data and survey data.

  • PDF

A study on the alignment of different sensor data with aerial images and lidar data (항공영상과 라이다 자료를 이용한 이종센서 자료간의 alignment에 관한 연구)

  • 곽태석;이재빈;조현기;김용일
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.11a / pp.257-262 / 2004
  • The purpose of data fusion is to collect maximized information by combining data obtained from two or more sensor systems of the same or different kinds. Data fusion of same-kind sensor systems such as optical imagery has been the main focus, but recently LIDAR has emerged as a new technology for rapidly capturing data on physical surfaces, and highly accurate results have been derived from LIDAR data. Considering the nature of aerial imagery and LIDAR data, it is clear that the two systems provide complementary information. Data fusion consists of two steps, alignment and matching; the complementary information can only be fully utilized after successful alignment of the aerial imagery and LIDAR data. In this research, the centroids of buildings extracted from LIDAR data are used as control information for estimating the exterior orientation parameters of aerial imagery relative to the LIDAR reference frame.

  • PDF
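
The control information step — per-building centroids from LIDAR points — can be sketched as below; the labeled-point input format is an assumption, and the subsequent exterior-orientation estimation is not shown.

```python
def building_centroids(points):
    """Compute the 3D centroid of each building from labeled LIDAR
    points given as (building_id, x, y, z) tuples.  The centroids can
    then serve as control information for estimating an aerial image's
    exterior orientation relative to the LIDAR reference frame."""
    sums, counts = {}, {}
    for bid, x, y, z in points:
        sx, sy, sz = sums.get(bid, (0.0, 0.0, 0.0))
        sums[bid] = (sx + x, sy + y, sz + z)
        counts[bid] = counts.get(bid, 0) + 1
    return {bid: tuple(s / counts[bid] for s in sums[bid]) for bid in sums}
```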

A data fusion method for bridge displacement reconstruction based on LSTM networks

  • Duan, Da-You;Wang, Zuo-Cai;Sun, Xiao-Tong;Xin, Yu
    • Smart Structures and Systems / v.29 no.4 / pp.599-616 / 2022
  • Bridge displacement contains vital information on bridge condition and performance. Due to the limits of direct displacement measurement methods, indirect displacement reconstruction methods based on strain or acceleration data have also been developed for engineering applications, but methods based on strain or acceleration alone still have deficiencies in practice. This paper proposes a novel method based on long short-term memory (LSTM) networks to reconstruct bridge dynamic displacements from strain and acceleration data sources. LSTM networks with three hidden layers are utilized to map the relationships between the measured responses and the bridge displacement. To achieve the data fusion, the input strain and acceleration data are first preprocessed by normalization, and the corresponding dynamic displacement responses are then reconstructed by the LSTM networks. In the numerical simulation, the errors of the displacement reconstruction are below 9% for different load cases, and the proposed method is robust when the input strain and acceleration data contain additive noise. The hyper-parameter effect is analyzed, and the displacement reconstruction accuracies of different machine learning methods are compared. In experimental verification, the errors are below 6% for the simply supported beam and continuous beam cases. Both the numerical and experimental results indicate that the proposed data fusion method can accurately reconstruct the displacement.
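
The fusion-side preprocessing — normalizing each measured channel and stacking strain and acceleration into per-time-step feature vectors for the LSTM — might look like the sketch below. The [0, 1] min-max range is an assumption; the abstract only states that inputs are normalized.

```python
def normalize_channels(strain, accel):
    """Min-max normalize the strain and acceleration time series
    independently, then zip them into (strain, accel) feature vectors,
    one per time step, ready to feed an LSTM sequence model."""
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        span = (hi - lo) or 1.0   # guard against a constant channel
        return [(x - lo) / span for x in xs]
    return list(zip(minmax(strain), minmax(accel)))
```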

A Novel Clustering Method with Time Interval for Context Inference based on the Multi-sensor Data Fusion (다중센서 데이터융합 기반 상황추론에서 시간경과를 고려한 클러스터링 기법)

  • Ryu, Chang-Keun;Park, Chan-Bong
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.3 / pp.397-402 / 2013
  • Time variation is an essential component of context awareness. For context inference using information from sensor motes, it is beneficial not only to account for time lapse but also to cluster by time interval. In this study, we propose a novel clustering-based multi-sensor data fusion method for context inference. Within each time interval, we fuse the sensed signals of each time slot, and then fuse the results of this first fusion again. We achieved enhanced context inference by assessing the segmented signals according to time interval with Dempster-Shafer evidence theory based multi-sensor data fusion.
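
The Dempster-Shafer combination step named in the abstract follows Dempster's rule: multiply the masses of every pair of focal elements, keep the mass on non-empty intersections, and renormalize away the conflict. A minimal sketch (the hypothesis names are invented for illustration):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets of hypotheses.  Mass assigned to
    empty intersections is treated as conflict and normalized out."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict   # normalization constant (assumes k > 0)
    return {s: v / k for s, v in combined.items()}
```

Fusing two sensors' evidence this way sharpens the belief toward the hypotheses they agree on; in the paper, slot-level fusions are themselves fused again over the time interval.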

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin Tae-Seok;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.11 no.4 / pp.304-313 / 2005
  • The robots that will be needed in the near future are human-friendly robots that can coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and direction for intelligent performance in an unknown environment. Mobile robots may navigate by means of a number of monitoring systems, such as a sonar-sensing system or a visual-sensing system. Note that in conventional fusion schemes the measurement depends on the current data sets only; therefore, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this research, instead of adding more sensors to the system, the temporal sequence of the data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based Sensor Fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
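
The Unscented Transformation at the core of UTSF propagates a small set of deterministically chosen sigma points through the nonlinearity and recovers the transformed mean and variance from weighted sums. A scalar sketch (the kappa value and function name are illustrative, not the paper's):

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Scalar unscented transform: build 2n+1 sigma points (n = 1),
    push them through the nonlinear function f, and reconstruct the
    mean and variance of f(x) from the weighted transformed points."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2 * (n + kappa))
    weights = [w0, wi, wi]
    ys = [f(x) for x in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var
```

For a linear map the transform is exact: f(x) = 2x + 1 applied to mean 1, variance 0.25 yields mean 3 and variance 1.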

Traffic Flow Prediction with Spatio-Temporal Information Fusion using Graph Neural Networks

  • Huijuan Ding;Giseop Noh
    • International journal of advanced smart convergence / v.12 no.4 / pp.88-97 / 2023
  • Traffic flow prediction is of great significance in urban planning and traffic management. As the complexity of urban traffic increases, existing prediction methods still face challenges, especially in fusing spatio-temporal information and capturing long-term dependencies. This study uses a graph neural network fusion model to solve the spatio-temporal information fusion problem in traffic flow prediction. We propose a new deep learning model, Spatio-Temporal Information Fusion using Graph Neural Networks (STFGNN). We use GCN, TCN, and LSTM modules alternately to carry out spatio-temporal information fusion: the GCN and multi-core TCN capture the spatial and temporal dependencies of traffic flow, respectively, and the LSTM connects multiple fusion modules. In experimental evaluation on real traffic flow data, STFGNN showed better performance than other models.
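
The GCN module's spatial aggregation is usually the Kipf-Welling propagation H' = D^{-1/2}(A+I)D^{-1/2} X W; whether STFGNN uses exactly this normalization is an assumption. A plain-list sketch of one such layer (without the TCN and LSTM parts):

```python
def gcn_layer(adj, x, w):
    """One GCN propagation step over a road-sensor graph: add
    self-loops to the adjacency matrix, symmetrically normalize by
    node degree, aggregate neighbor features, then apply the linear
    weight matrix w.  adj, x, w are plain nested lists."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / (deg[i] * deg[j]) ** 0.5 for j in range(n)]
            for i in range(n)]
    agg = [[sum(norm[i][k] * x[k][j] for k in range(n))
            for j in range(len(x[0]))] for i in range(n)]
    return [[sum(agg[i][k] * w[k][j] for k in range(len(w)))
             for j in range(len(w[0]))] for i in range(n)]
```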

Network Modeling and Analysis of Multi Radar Data Fusion for Efficient Detection of Aircraft Position (효율적인 항공기 위치 파악을 위한 다중 레이더 자료 융합의 네트워크 모델링 및 분석)

  • Kim, Jin-Wook;Cho, Tae-Hwan;Choi, Sang-Bang;Park, Hyo-Dal
    • Journal of Advanced Navigation Technology / v.18 no.1 / pp.29-34 / 2014
  • Data fusion techniques combine data from multiple radars and related information to achieve more accurate estimations than could be achieved by a single, independent radar. In this paper, we analyze the delay and loss of packets processed by multiple radars and minimize the data processing interval of the centralized data processing operation when fusing multiple radar data. We model the radar network for central data fusion and, assuming each queue is M/M/1/K, analyze the delay and loss of packets in the queues using NS-2. Through the analysis we confirmed the average delay time for processing fused multiple radar data; this delay time can then be used as a reference for radar data latency in the fusion center.
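
The closed-form quantities behind the M/M/1/K assumption — blocking (loss) probability, mean queue occupancy, and mean delay of accepted packets via Little's law — can be computed directly; the sketch below gives the standard formulas for rho ≠ 1, not the paper's simulation results.

```python
def mm1k_metrics(lam, mu, K):
    """M/M/1/K queue with arrival rate lam, service rate mu, and
    capacity K packets.  Returns (blocking probability, mean number in
    system, mean delay of accepted packets via Little's law)."""
    rho = lam / mu
    probs = [rho ** n for n in range(K + 1)]   # unnormalized p_n
    total = sum(probs)
    p = [q / total for q in probs]
    p_block = p[K]                 # packet-loss probability
    L = sum(n * pn for n, pn in enumerate(p))  # mean packets in system
    lam_eff = lam * (1 - p_block)  # effective (accepted) arrival rate
    W = L / lam_eff                # mean delay, Little's law
    return p_block, L, W
```

For instance, with lam = 1, mu = 2, and K = 1 (no waiting room), a third of the packets are blocked and accepted packets see exactly one mean service time of delay.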