• Title/Summary/Keyword: information fusion

Search results: 1,880

Image Registration and Fusion between Passive Millimeter Wave Images and Visual Images (수동형 밀리미터파 영상과 가시 영상과의 정합 및 융합에 관한 연구)

  • Lee, Hyoung;Lee, Dong-Su;Yeom, Seok-Won;Son, Jung-Young;Guschin, Vladmir P.;Kim, Shin-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.6C / pp.349-354 / 2011
  • Passive millimeter wave imaging has the capability of detecting concealed objects under clothing. It can also obtain interpretable images under low-visibility conditions such as rain, fog, smoke, and dust. However, the image quality is often degraded by low spatial resolution, low signal level, and low temperature resolution. This paper addresses image registration and fusion between passive millimeter wave images and visual images. The goal of this study is to combine and visualize two different types of information together: a human subject's identity and concealed objects. The image registration process is composed of body-boundary detection and an affine transform that maximizes the cross-correlation coefficient of the two edge images. The image fusion process comprises three stages: a discrete wavelet transform for image decomposition, a fusion rule for merging the coefficients, and the inverse transform for image synthesis. In the experiments, various metallic and non-metallic objects such as a knife, gel- or liquid-type beauty aids, and a phone are detected by passive millimeter wave imaging. The registration and fusion process visualizes the meaningful information from the two different types of sensors.
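
The abstract above outlines a three-stage wavelet fusion pipeline (decomposition, coefficient merging, synthesis). A minimal sketch of such a pipeline in Python, assuming two registered grayscale arrays, the PyWavelets library, and a simple average/absolute-maximum fusion rule (the paper's actual fusion rule is not specified in the abstract):

    import numpy as np
    import pywt

    def dwt_fuse(img_a, img_b, wavelet="db2"):
        """Fuse two registered grayscale images of equal size via a
        single-level DWT and an absolute-maximum coefficient rule."""
        ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
        ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)

        # Low-frequency sub-band: average the approximation coefficients.
        ca_f = 0.5 * (ca_a + ca_b)

        # High-frequency sub-bands: keep the coefficient with larger magnitude.
        def pick_max(a, b):
            return np.where(np.abs(a) >= np.abs(b), a, b)

        details = tuple(pick_max(a, b) for a, b in
                        ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)))

        # Inverse transform synthesizes the fused image.
        return pywt.idwt2((ca_f, details), wavelet)

    # Toy usage with random stand-ins for the millimeter-wave and visual images.
    fused = dwt_fuse(np.random.rand(64, 64), np.random.rand(64, 64))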

Image Captioning with Synergy-Gated Attention and Recurrent Fusion LSTM

  • Yang, You;Chen, Lizhi;Pan, Longyue;Hu, Juntao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.10 / pp.3390-3405 / 2022
  • Long Short-Term Memory (LSTM) combined with an attention mechanism is extensively used to generate semantic sentences for images in image captioning models. However, most related works do not sufficiently utilize salient-region features and spatial information, and the LSTM also suffers from underutilized information within a single time step. In this paper, two approaches are proposed to solve these problems. First, the Synergy-Gated Attention (SGA) method is proposed, which processes the spatial features and the salient-region features of given images simultaneously; SGA establishes a gating mechanism through the global features to guide the interaction between these two feature types. Then, the Recurrent Fusion LSTM (RF-LSTM) mechanism is proposed, which predicts the next hidden vectors within one time step and improves linguistic coherence by fusing future information. Experimental results on the MSCOCO benchmark dataset show that, compared with state-of-the-art methods, the proposed method improves the performance of the image captioning model and achieves competitive results on multiple evaluation metrics.
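
As a rough illustration of the gating idea described above (a global feature steering the interaction between spatial and salient-region features), here is a minimal PyTorch sketch; the layer shapes, names, and the convex-combination form of the gate are assumptions for illustration, not the paper's SGA architecture:

    import torch
    import torch.nn as nn

    class GatedFeatureFusion(nn.Module):
        """Toy gate: a global feature decides how much spatial vs.
        salient-region information to pass on (illustrative only)."""
        def __init__(self, dim):
            super().__init__()
            self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

        def forward(self, spatial_feat, region_feat, global_feat):
            # g in (0, 1) is produced from the global feature and weights
            # the contribution of each feature stream.
            g = self.gate(global_feat)
            return g * spatial_feat + (1.0 - g) * region_feat

    # Usage with dummy tensors (batch of 2, feature dimension 512).
    fuse = GatedFeatureFusion(512)
    out = fuse(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 512))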

Camera and LiDAR Sensor Fusion for Improving Object Detection (카메라와 라이다의 객체 검출 성능 향상을 위한 Sensor Fusion)

  • Lee, Jongseo;Kim, Mangyu;Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.580-591 / 2019
  • This paper focuses on improving object detection performance on autonomous vehicle platforms by fusing the objects detected by a camera and a LiDAR through a late fusion approach. For camera-based object detection, the one-stage YOLOv3 model was employed, and distances to the detected objects were estimated from the perspective projection matrix. LiDAR-based object detection was performed with K-means clustering. Camera-LiDAR calibration was carried out with PnP-RANSAC to calculate the rotation and translation between the two sensors. For sensor fusion, the intersection over union (IoU) on the image plane and the distance and angle in world coordinates were estimated, and these three attributes (IoU, distance, and angle) were fused using logistic regression. The performance evaluation in the sensor fusion scenario showed a 5% improvement in object detection performance compared to using a single sensor.
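
The late-fusion step described above can be illustrated with a small sketch: compute the image-plane IoU between a camera box and a projected LiDAR box, then feed IoU, distance difference, and angle difference into a logistic-regression classifier. The toy training rows below are invented for illustration; only the three features follow the abstract:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def iou(box_a, box_b):
        """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    # Each row pairs a camera detection with a LiDAR cluster:
    # [IoU on the image plane, |distance difference| (m), |angle difference| (rad)]
    X_train = np.array([[0.80, 0.3, 0.02],
                        [0.10, 4.0, 0.50],
                        [0.60, 0.8, 0.05],
                        [0.05, 6.0, 0.90]])
    y_train = np.array([1, 0, 1, 0])   # 1 = same object, 0 = different objects

    clf = LogisticRegression().fit(X_train, y_train)
    print(clf.predict_proba([[0.7, 0.5, 0.03]])[:, 1])  # probability of a match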

Robust Image Fusion Using Stationary Wavelet Transform (정상 웨이블렛 변환을 이용한 로버스트 영상 융합)

  • Kim, Hee-Hoon;Kang, Seung-Hyo;Park, Jea-Hyun;Ha, Hyun-Ho;Lim, Jin-Soo;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics / v.24 no.6 / pp.1181-1196 / 2011
  • Image fusion is the process of combining information from two or more source images of a scene into a single composite image, with applications in many fields such as remote sensing, computer vision, robotics, medical imaging, and defense. The most common wavelet-based fusion is discrete wavelet transform fusion, in which the high-frequency and low-frequency sub-bands are combined using local-window activity measures such as the standard deviation and mean, respectively. However, the discrete wavelet transform is not translation-invariant and often yields block artifacts in the fused image. In this paper, we propose a robust image fusion method based on the stationary wavelet transform to overcome this drawback of the discrete wavelet transform. We use the interquartile range as a robust activity measure (a robust estimator of variance) in the high-frequency sub-bands, and combine the low-frequency sub-band based on the interquartile-range information present in the high-frequency sub-bands. We evaluate the proposed method quantitatively and qualitatively and compare it with existing fusion methods. Experimental results indicate that the proposed method is more effective and provides satisfactory fusion results.
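
A minimal sketch of stationary-wavelet fusion with an interquartile-range activity measure, assuming PyWavelets and SciPy; for simplicity the low-frequency sub-band is averaged here, whereas the paper combines it using IQR information from the high-frequency sub-bands:

    import numpy as np
    import pywt
    from scipy.ndimage import percentile_filter

    def local_iqr(x, size=7):
        """Interquartile range over a sliding window (robust activity measure)."""
        return (percentile_filter(x, 75, size=size)
                - percentile_filter(x, 25, size=size))

    def swt_iqr_fuse(img_a, img_b, wavelet="db2"):
        """Single-level stationary wavelet fusion; image sides must be even."""
        (ca_a, (ch_a, cv_a, cd_a)), = pywt.swt2(img_a, wavelet, level=1)
        (ca_b, (ch_b, cv_b, cd_b)), = pywt.swt2(img_b, wavelet, level=1)

        def pick(a, b):
            # Keep the coefficient whose neighbourhood is more active (larger IQR).
            return np.where(local_iqr(a) >= local_iqr(b), a, b)

        fused = (0.5 * (ca_a + ca_b),          # low-frequency: simple average here
                 (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
        return pywt.iswt2([fused], wavelet)

    # Toy usage with random even-sized images.
    fused_img = swt_iqr_fuse(np.random.rand(64, 64), np.random.rand(64, 64))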

GPS/INS Fusion Using Multiple Compensation Method Based on Kalman Filter (칼만 필터를 이용한 GPS/INS융합의 다중 보정 방법)

  • Kwon, Youngmin
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.5 / pp.190-196 / 2015
  • In this paper, we propose a multiple location-error compensation algorithm for GPS/INS fusion using a Kalman filter, and introduce a way to reduce location error in 9-axis navigation devices implementing inertial navigation. As the position estimate evolves, location error accumulates, so navigation systems need robust algorithms to compensate for location error in GPS/INS fusion. To improve the robustness of the 9-axis inertial sensor (MPU-9150) against disturbances, we applied a tilt compensation method based on the accelerometer and a yaw-angle compensation to obtain accurate azimuth information for the object. Combined with the Kalman filter, these methods yield improved location results.
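
As a rough illustration of the ingredients mentioned above, the sketch below combines the standard accelerometer tilt formulas with a 1-D constant-velocity Kalman filter in which INS acceleration drives the prediction and a GPS position fix provides the correction; the 1-D state and all noise values are assumptions, not the paper's configuration:

    import numpy as np

    def tilt_from_accel(ax, ay, az):
        """Roll and pitch from a static accelerometer reading (standard formulas)."""
        roll = np.arctan2(ay, az)
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        return roll, pitch

    # 1-D constant-velocity Kalman filter: state = (position, velocity).
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    B = np.array([[0.5 * dt**2], [dt]])        # control input (INS acceleration)
    H = np.array([[1.0, 0.0]])                 # GPS observes position only
    Q = np.eye(2) * 1e-3                       # process noise
    R = np.array([[4.0]])                      # GPS measurement noise (m^2)

    x = np.zeros((2, 1))                       # state estimate
    P = np.eye(2)                              # estimate covariance

    def step(x, P, accel, gps_pos):
        # Predict with the INS acceleration.
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        # Correct with the GPS position fix.
        y = np.array([[gps_pos]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = step(x, P, accel=0.2, gps_pos=0.05)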

Data Alignment for Data Fusion in Wireless Multimedia Sensor Networks Based on M2M

  • Cruz, Jose Roberto Perez;Hernandez, Saul E. Pomares;Cote, Enrique Munoz De
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.1 / pp.229-240 / 2012
  • Advances in MEMS and CMOS technologies have motivated the development of low-cost, low-power sensors and wireless multimedia sensor networks (WMSN), which were created to ubiquitously harvest multimedia content. Such networks have allowed researchers and engineers to glimpse new Machine-to-Machine (M2M) systems, such as remote monitoring of biosignals for telemedicine networks. These systems require the acquisition of a large number of data streams that are simultaneously generated by multiple distributed devices, a paradigm of data generation and transmission known as event-streaming. To be useful to the application, the collected data requires a preprocessing step called data fusion, which entails temporally aligning the multimedia data. A practical way to perform this task is in a centralized manner, assuming that the network nodes only function as collector entities. However, under this scheme a considerable amount of redundant information is transmitted to the central entity. To decrease such redundancy, data fusion must be performed collaboratively. In this paper, we propose a collaborative data alignment approach for event-streaming. Our approach identifies temporal relationships by translating timeline-based temporal dependencies into causal dependencies among the media involved.
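
A toy sketch of translating timeline relations between captured media intervals into causal labels (precedes / concurrent); the interval representation and the tolerance delta are assumptions for illustration, not the paper's protocol:

    from dataclasses import dataclass
    from itertools import combinations

    @dataclass
    class Event:
        node: str        # identifier of the sensing node
        start: float     # capture start time (s)
        end: float       # capture end time (s)

    def causal_order(a, b, delta=0.05):
        """Translate a timeline relation into a causal label:
        'a->b' if a clearly precedes b, 'b->a' if b precedes a,
        'concurrent' if the intervals (almost) overlap."""
        if a.end + delta < b.start:
            return "a->b"
        if b.end + delta < a.start:
            return "b->a"
        return "concurrent"

    events = [Event("cam-1", 0.00, 0.40),
              Event("mic-2", 0.10, 0.35),
              Event("cam-1", 0.90, 1.20)]

    for a, b in combinations(events, 2):
        print(a.node, "vs", b.node, "->", causal_order(a, b))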

A Study on the Improvement of Image Fusion Accuracy Using Smoothing Filter-based Replacement Method (SFR 기법을 이용한 영상 융합의 정확도 향상에 관한 연구)

  • Yun Kong-Hyun;Sohn Hong-Gyoo
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.187-192 / 2006
  • Image fusion techniques are widely used to integrate a lower-spatial-resolution multispectral image with a higher-spatial-resolution panchromatic image. However, existing techniques either cannot avoid distorting the image's spectral properties or, in the case of wavelet-transform-based fusion, involve complicated and time-consuming decomposition and reconstruction processing. In this study, a simple spectrum-preserving fusion technique, Smoothing Filter-based Replacement (SFR), is proposed based on a simplified model of solar radiation and land-surface reflection. By using the ratio between a higher-resolution image and its low-pass-filtered (smoothed) version, spatial details can be injected into a co-registered lower-resolution multispectral image while preserving its spectral properties and contrast. The technique can be applied to improve spatial resolution for either colour composites or individual bands. The spectral fidelity and spatial quality of SFR are demonstrated by an image fusion experiment using IKONOS panchromatic and multispectral images. Visual evaluation and statistical analysis against other image fusion techniques confirm that SFR better preserves spectral information.
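
The ratio-based injection described above can be sketched directly: divide the panchromatic band by its smoothed version and multiply the result into the co-registered (upsampled) multispectral band. The mean-filter window size below is an assumption:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sfr_sharpen(ms_band, pan, win=7, eps=1e-6):
        """Inject spatial detail from a co-registered panchromatic band into a
        (previously upsampled) multispectral band using the ratio between the
        pan image and its smoothed version."""
        pan = pan.astype(np.float64)
        detail_ratio = pan / (uniform_filter(pan, size=win) + eps)
        return ms_band.astype(np.float64) * detail_ratio

    # Toy example: the MS band keeps its local mean while gaining pan detail.
    pan = np.random.rand(64, 64)
    ms = np.random.rand(64, 64)
    sharpened = sfr_sharpen(ms, pan)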

Performance Evaluation of Track-to-track Association and Fusion in Distributed Multiple Radar Tracking (다중레이다 분산형 추적의 항적연관 및 융합 성능평가)

  • Choi, Won-Yong;Hong, Sun-Mog;Lee, Dong-Gwan;Jung, Jae-Kyung;Cho, Kil-Seok
    • Journal of the Korea Institute of Military Science and Technology / v.11 no.6 / pp.38-46 / 2008
  • A distributed system for tracking multiple targets with a pair of multifunction radars is proposed and implemented. The system performs track-to-track association and track-to-track fusion at the fusion center to form fused tracks. The association and fusion use target state information linked via communication nodes from a radar at a remote location. Many factors can affect track-to-track association and fusion performance, including delays in the data transmission buffer of the remote radar, errors in estimating the time stamp of the remote radar, and the gating used in track-to-track association. The effects of these factors on association and fusion performance are investigated through extensive numerical simulations.
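
The abstract does not spell out the association and fusion equations, so the sketch below uses standard ingredients for this setting: a chi-square gate on the difference of the two track states and a convex-combination (information-weighted) fusion, assuming independent estimation errors (no cross-covariance):

    import numpy as np
    from scipy.stats import chi2

    def gate(x1, P1, x2, P2, prob=0.99):
        """Chi-square gating test for track-to-track association."""
        d = x1 - x2
        S = P1 + P2
        d2 = float(d.T @ np.linalg.inv(S) @ d)
        return d2 <= chi2.ppf(prob, df=len(x1))

    def fuse(x1, P1, x2, P2):
        """Convex-combination track fusion (information-weighted average)."""
        Pf = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
        xf = Pf @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
        return xf, Pf

    # Two local tracks (position, velocity) reported by the two radars.
    x1 = np.array([[100.0], [10.0]]);  P1 = np.diag([25.0, 4.0])
    x2 = np.array([[103.0], [11.0]]);  P2 = np.diag([16.0, 9.0])
    if gate(x1, P1, x2, P2):
        xf, Pf = fuse(x1, P1, x2, P2)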

A Study on the Data Fusion for Data Enrichment (데이터 보강을 위한 데이터 통합기법에 관한 연구)

  • 정성석;김순영;김현진
    • The Korean Journal of Applied Statistics / v.17 no.3 / pp.605-617 / 2004
  • One of the most important factors in the data mining process is the quality of the data used: mining high-quality data improves the potential value of the results. In this paper, we propose a data fusion technique for data enrichment, a phase that can improve data quality in the KDD process. We combine the k-NN technique with the regression technique to improve the performance of the fusion by reducing the loss of information. Simulations were performed to compare the proposed data fusion technique with the regression technique. As a result, the proposed data fusion technique is characterized by lower MSE for continuous fusion variables.
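
A minimal sketch of the data fusion (statistical matching) idea: a donor file carries common variables plus the fusion variable, and each recipient record borrows a value via k-NN on the common variables. The exact way the paper combines k-NN with regression is not detailed in the abstract, so this shows only the k-NN imputation step, with synthetic data:

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)

    # Donor file: common variables X plus the fusion variable z to be transferred.
    X_donor = rng.normal(size=(200, 3))
    z_donor = X_donor @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.2, size=200)

    # Recipient file: only the common variables are observed.
    X_recipient = rng.normal(size=(50, 3))

    # k-NN data fusion: each recipient record borrows z from its nearest donors.
    imputer = KNeighborsRegressor(n_neighbors=5).fit(X_donor, z_donor)
    z_imputed = imputer.predict(X_recipient)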

Geohazard Monitoring with Space and Geophysical Technology - An Introduction to the KJRS 21(1) Special Issue-

  • Kim Jeong Woo;Jeon Jeong-Soo;Lee Youn Soo
    • Korean Journal of Remote Sensing / v.21 no.1 / pp.3-13 / 2005
  • The National Research Lab (NRL) project 'Optimal Data Fusion of Geophysical and Geodetic Measurements for Geological Hazards Monitoring and Prediction', supported by the Korea Ministry of Science and Technology, is briefly described. The research focused on geohazard analysis with geophysical and geodetic instruments such as the superconducting gravimeter, seismometer, magnetometer, GPS, and Synthetic Aperture Radar. The aim of the NRL research is to verify the causes of geological hazards through optimal fusion of various observational data in three phases: surface data fusion using geodetic measurements; subsurface data fusion using geophysical measurements; and, finally, fusion of both geodetic and geophysical data. The NRL hosted a special session, 'Geohazard Monitoring with Space and Geophysical Technology', during the International Symposium on Remote Sensing in 2004 to discuss current topics, challenges, and possible directions in geohazard research. Here, we briefly describe the special-session papers and their relationship to the theme of the session. The fusion of satellite and ground-based geophysical and geodetic data gives new insight into the monitoring and prediction of geological hazards.