• Title/Summary/Keyword: information fusion

Search results: 1,890

A Study on the Data Fusion for Data Enrichment (데이터 보강을 위한 데이터 통합기법에 관한 연구)

  • 정성석;김순영;김현진
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.3
    • /
    • pp.605-617
    • /
    • 2004
  • One of the most important factors in the data mining process is the quality of the data used. When mining is performed on data of excellent quality, the potential value of data mining improves. In this paper, we propose a data fusion technique for data enrichment, a phase that can improve data quality in the KDD process. We add a k-NN technique to the regression technique to improve the performance of the fusion technique by reducing the loss of information. Simulations were performed to compare the proposed data fusion technique with the regression technique. As a result, the newly proposed data fusion technique is characterized by a lower MSE for continuous fusion variables.
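
The two-step scheme described above lends itself to a short illustration. The sketch below fits a regression on a donor file and then adds a k-NN residual correction; the data, variable names, and the residual-donation step are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of regression-based data fusion augmented with k-NN:
# one plausible reading of the abstract, with assumed data and names.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Donor file: common variables X and the fusion variable z.
X_donor = rng.normal(size=(200, 3))
z_donor = X_donor @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.5, size=200)

# Recipient file: only the common variables X are observed.
X_recip = rng.normal(size=(50, 3))

# 1) Regression step: fit z ~ X on the donor file.
reg = LinearRegression().fit(X_donor, z_donor)
z_hat = reg.predict(X_recip)

# 2) k-NN step: donate the mean residual of the k nearest donors,
#    restoring variability that pure regression imputation flattens.
residuals = z_donor - reg.predict(X_donor)
knn = NearestNeighbors(n_neighbors=5).fit(X_donor)
_, idx = knn.kneighbors(X_recip)
z_fused = z_hat + residuals[idx].mean(axis=1)

print(z_fused[:5])
```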

Geohazard Monitoring with Space and Geophysical Technology - An Introduction to the KJRS 21(1) Special Issue-

  • Kim Jeong Woo;Jeon Jeong-Soo;Lee Youn Soo
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.1
    • /
    • pp.3-13
    • /
    • 2005
  • The National Research Lab (NRL) project 'Optimal Data Fusion of Geophysical and Geodetic Measurements for Geological Hazards Monitoring and Prediction', supported by the Korea Ministry of Science and Technology, is briefly described. The research focused on geohazard analysis with geophysical and geodetic instruments such as a superconducting gravimeter, seismometer, magnetometer, GPS, and Synthetic Aperture Radar. The aim of the NRL research is to verify the causes of geological hazards through optimal fusion of various observational data in three phases: surface data fusion using geodetic measurements; subsurface data fusion using geophysical measurements; and, finally, fusion of both geodetic and geophysical data. The NRL hosted a special session, 'Geohazard Monitoring with Space and Geophysical Technology', during the International Symposium on Remote Sensing in 2004 to discuss current topics, challenges, and possible directions in geohazard research. Here, we briefly describe the special session papers and their relationships to the theme of the session. The fusion of satellite and ground geophysical and geodetic data gives us new insight into the monitoring and prediction of geological hazards.

Visual Attention Model Based on Particle Filter

  • Liu, Long;Wei, Wei;Li, Xianli;Pan, Yafeng;Song, Houbing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.8
    • /
    • pp.3791-3805
    • /
    • 2016
  • The visual attention mechanism includes two attention models, bottom-up (B-U) and top-down (T-D), whose physiology has not yet been accurately described. In this paper, the visual attention mechanism is regarded as a Bayesian fusion process, and a visual attention model based on the particle filter is proposed. Under certain assumed conditions, a calculation formula for the Bayesian posterior probability is derived. The visual attention fusion process based on the particle filter is realized through importance sampling, particle weight updating, and resampling, and visual attention is finally determined by the particle distribution. Test results on multiple groups of images show that the model yields better subjective and objective results than other models.
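
The filtering loop the abstract names (importance sampling, weight update, resampling) can be shown generically. In the sketch below, a synthetic saliency map stands in for the Bayesian likelihood; this is a minimal particle-filter skeleton under those assumptions, not the paper's model.

```python
# Attention as Bayesian filtering with a particle filter: particles are
# candidate gaze positions, the saliency map plays the likelihood, and
# resampling concentrates particles on salient regions.
import numpy as np

rng = np.random.default_rng(1)
H, W, N = 120, 160, 500

# Synthetic saliency map with one bright blob (stand-in likelihood).
yy, xx = np.mgrid[0:H, 0:W]
saliency = np.exp(-(((yy - 40) ** 2 + (xx - 100) ** 2) / (2 * 15.0 ** 2)))

# Initialise particles uniformly over the image.
particles = np.column_stack([rng.uniform(0, H, N), rng.uniform(0, W, N)])

for _ in range(10):                       # filtering iterations (frames)
    # Prediction: random-walk dynamics for the gaze position.
    particles += rng.normal(scale=3.0, size=particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, H - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, W - 1)

    # Importance weighting by the saliency (likelihood) at each particle.
    w = saliency[particles[:, 0].astype(int), particles[:, 1].astype(int)]
    w = w / w.sum()

    # Resampling proportional to weight.
    idx = rng.choice(N, size=N, p=w)
    particles = particles[idx]

# The posterior particle cloud marks the attended location.
print("attended point ~", particles.mean(axis=0))   # near (40, 100)
```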

Map Building Based on Sensor Fusion for Autonomous Vehicle (자율주행을 위한 센서 데이터 융합 기반의 맵 생성)

  • Kang, Minsung;Hur, Soojung;Park, Ikhyun;Park, Yongwan
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.6
    • /
    • pp.14-22
    • /
    • 2014
  • An autonomous vehicle requires technology for generating maps by recognizing its surrounding environment. The vehicle's environment can be recognized using distance information from a 2D laser scanner and color information from a camera. Such sensor information is used to generate 2D or 3D maps. A 2D map is used mostly for generating routes, because it contains information about only one cross-section; a 3D map also contains height values, and can therefore be used not only for generating routes but also for determining the space accessible to the vehicle. Nevertheless, an autonomous vehicle using 3D maps has difficulty recognizing its environment in real time. Accordingly, this paper proposes a technique for generating 2D maps that guarantees real-time recognition: it uses only the color information obtained by removing the height values from 3D maps generated by fusing 2D laser scanner and camera data.
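
The projection-and-rasterization step implied by the abstract might look roughly like the following; the pinhole intrinsics, sensor alignment, and grid resolution are made-up placeholders rather than the paper's calibration.

```python
# Rough sketch of the fusion step: 2D laser-scanner points are projected
# into the camera image to pick up color, then rasterized into a 2D map.
import numpy as np

# Fake scan: ranges (m) over a 180-degree sweep, and a fake RGB image.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full_like(angles, 5.0)
image = np.random.default_rng(2).integers(0, 255, (480, 640, 3), np.uint8)

# Scan points in the sensor frame (x forward, y left), z = 0 plane.
pts = np.column_stack([ranges * np.cos(angles),
                       ranges * np.sin(angles),
                       np.zeros_like(angles)])

# Placeholder pinhole projection (camera assumed aligned with scanner).
fx = fy = 500.0; cx, cy = 320.0, 240.0
in_front = pts[:, 0] > 0.1
u = (fx * -pts[in_front, 1] / pts[in_front, 0] + cx).astype(int)
v = (fy * -pts[in_front, 2] / pts[in_front, 0] + cy).astype(int)
ok = (u >= 0) & (u < 640) & (v >= 0) & (v < 480)
colors = image[v[ok], u[ok]]                  # color per scan point

# Rasterize colored points into a 2D map (10 cm cells, 20 m x 20 m).
grid = np.zeros((200, 200, 3), np.uint8)
gx = (pts[in_front][ok, 0] / 0.1).astype(int) + 100
gy = (pts[in_front][ok, 1] / 0.1).astype(int) + 100
inside = (gx >= 0) & (gx < 200) & (gy >= 0) & (gy < 200)
grid[gx[inside], gy[inside]] = colors[inside]
print("occupied cells:", (grid.any(axis=2)).sum())
```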

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration of the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and nearby objects in the scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
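
One plausible reading of the depthmap-generation step is sketched below: sparse projected LIDAR depths are densified by interpolation so that every RGB pixel receives a depth value. The simulated samples and interpolation choices are assumptions, not the authors' pipeline.

```python
# Simplified depthmap generation: sparse LIDAR returns projected into
# the RGB image plane, then densified by interpolation. The RPLIDAR-A3
# scans in a plane, so a real pipeline would sweep or tilt the sensor;
# here the sparse samples are simulated.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
H, W = 120, 160

# Simulated sparse projections: (u, v) pixels with measured depth (m).
n = 300
uv = np.column_stack([rng.uniform(0, W - 1, n), rng.uniform(0, H - 1, n)])
depth_sparse = 2.0 + 0.02 * uv[:, 0] + rng.normal(scale=0.05, size=n)

# Densify: linear interpolation inside the convex hull of the samples,
# nearest-neighbor fill for the border pixels it leaves undefined.
grid_u, grid_v = np.meshgrid(np.arange(W), np.arange(H))
depthmap = griddata(uv, depth_sparse, (grid_u, grid_v), method="linear")
holes = np.isnan(depthmap)
depthmap[holes] = griddata(uv, depth_sparse, (grid_u, grid_v),
                           method="nearest")[holes]

print(depthmap.shape, float(depthmap.min()), float(depthmap.max()))
```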

Potential for Image Fusion Quality Improvement through Shadow Effects Correction (그림자효과 보정을 통한 영상융합 품질 향상 가능성)

  • 손홍규;윤공현
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2003.10a
    • /
    • pp.397-402
    • /
    • 2003
  • This study aims to improve the quality of image fusion results through shadow effects correction. For this, a shadow effects correction algorithm is proposed, and visual comparisons were made to assess the quality of the image fusion results. The following four steps were performed to improve the image fusion quality. First, the shadow regions of the satellite image are precisely located. Subsequently, segmentation of context regions is implemented manually for accurate correction. Next, to calculate the correction factor, each shadowed context region is compared with the equivalent non-shadow context region. Finally, image fusion is implemented using the corrected images. The results presented here help to accurately extract and interpret geo-spatial information from satellite imagery.
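
The correction-factor step can be illustrated with a toy example: the gain is the ratio of mean intensities between a sunlit context region and its shadowed counterpart. The masks and values below are assumed for illustration only.

```python
# Minimal sketch of the correction-factor idea: compare a shadowed
# context region with an equivalent sunlit region of the same surface
# type and rescale the shadow pixels by the ratio of mean intensities.
# The region masks are assumed to come from the manual segmentation
# step described in the abstract.
import numpy as np

def correct_shadow(band, shadow_mask, lit_mask):
    """Rescale shadowed pixels of one image band.

    band        : 2-D float array (one spectral band)
    shadow_mask : boolean mask of the shadowed context region
    lit_mask    : boolean mask of the equivalent non-shadow region
    """
    factor = band[lit_mask].mean() / band[shadow_mask].mean()
    out = band.copy()
    out[shadow_mask] *= factor          # simple gain correction
    return out

# Toy example: a flat surface where shadow halves the radiance.
band = np.full((100, 100), 120.0)
band[:, :50] *= 0.5                      # left half in shadow
shadow = np.zeros_like(band, bool); shadow[:, :50] = True
print(correct_shadow(band, shadow, ~shadow)[0, 0])   # ~120 again
```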


An Approach to Improve the Contrast of Multi Scale Fusion Methods

  • Hwang, Tae Hun;Kim, Jin Heon
    • Journal of Multimedia Information System
    • /
    • v.5 no.2
    • /
    • pp.87-90
    • /
    • 2018
  • Various approaches have been proposed to convert low dynamic range (LDR) images to high dynamic range (HDR). Among them, the Multi Scale Fusion (MSF) algorithm based on Laplacian pyramid decomposition is used in many applications and has demonstrated its usefulness. However, the pyramid fusion technique has no means of controlling the luminance component, because the total number of pixels decreases as the pyramid rises to its upper layers. In this paper, we extract the reflected light of the image based on Retinex theory and generate a weight map by adjusting the reflection component. This weight map is applied to achieve an MSF-like effect during image fusion and provides a means to control the brightness components. Experimental results show that the proposed method maintains the total number of pixels and achieves effects similar to the conventional method.
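
A minimal sketch of the weight-map idea follows, assuming single-scale Retinex with a Gaussian illumination estimate; the sigma and gamma values are illustrative, not the paper's parameters.

```python
# Estimate illumination with a Gaussian blur (single-scale Retinex),
# take the reflectance as the log-domain residual, and turn an adjusted
# reflectance into a per-pixel weight map that steers the fusion.
import numpy as np
from scipy.ndimage import gaussian_filter

def weight_map(img, sigma=15.0, gamma=0.8):
    """Reflectance-based weight map for one grayscale exposure."""
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    illumination = gaussian_filter(img, sigma)
    reflectance = np.log(img) - np.log(illumination)
    return np.exp(gamma * reflectance)           # adjusted reflectance

def fuse(exposures):
    """Weighted per-pixel fusion; the pixel count never changes."""
    ws = np.stack([weight_map(e) for e in exposures])
    ws /= ws.sum(axis=0, keepdims=True)
    return (ws * np.stack(exposures)).sum(axis=0)

rng = np.random.default_rng(4)
under = rng.uniform(0, 80, (64, 64))
over = rng.uniform(120, 255, (64, 64))
print(fuse([under, over]).shape)                 # (64, 64) -- no pyramid
```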

An Analysis on the Range of Singular Fusion of Augmented Reality Devices

  • Lee, Hanul;Park, Minyoung;Lee, Hyeontaek;Choi, Hee-Jin
    • Current Optics and Photonics
    • /
    • v.4 no.6
    • /
    • pp.540-544
    • /
    • 2020
  • Current two-dimensional (2D) augmented reality (AR) devices present virtual images and information on a fixed focal plane, regardless of the various locations of ambient objects of interest around the observer. This limitation can lead to visual discomfort caused by misalignment between the view of the ambient object of interest and the visual representation on the AR device due to a failure of singular fusion. Since the misalignment becomes more severe as the depth difference grows, it can hamper visual understanding of the scene and interfere with the viewer's task performance. We therefore analyzed the range of singular fusion (RSF) of AR images, within which viewers can perceive the shape of an object presented on two different depth planes without difficulty caused by the failure of singular fusion. We expect our analysis to inspire the development of advanced AR systems with low visual discomfort.
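
As a back-of-the-envelope illustration (not taken from the paper), the vergence-angle mismatch between a fixed focal plane and a real object grows quickly as their depths diverge; the interpupillary distance and depths below are assumed example values.

```python
# Why depth difference strains singular fusion: the vergence angle for
# the AR focal plane and for a real object disagree more and more as
# the two depths separate.
import math

IPD = 0.064                      # interpupillary distance, m (assumed)

def vergence_deg(d):
    """Vergence angle for an object at distance d (meters)."""
    return math.degrees(2 * math.atan(IPD / (2 * d)))

focal_plane = 2.0                # fixed AR image depth, m (assumed)
for obj in (2.0, 1.0, 0.5):
    mismatch = abs(vergence_deg(obj) - vergence_deg(focal_plane))
    print(f"object at {obj} m -> vergence mismatch {mismatch:.2f} deg")
```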

MosaicFusion: Merging Modalities with Partial Differential Equation and Discrete Cosine Transformation

  • Gargi Trivedi;Rajesh Sanghavi
    • Journal of Applied and Pure Mathematics
    • /
    • v.5 no.5_6
    • /
    • pp.389-406
    • /
    • 2023
  • In pursuit of enhanced image fusion techniques, this research presents a novel approach to fusing multimodal images, specifically infrared (IR) and visible (VIS) images, using a combination of partial differential equations (PDE) and the discrete cosine transformation (DCT). The proposed method seeks to leverage the thermal and structural information provided by IR imaging and the fine-grained details offered by VIS imaging to create composite images that are superior in quality and informativeness. Through a meticulous fusion process involving PDE-guided fusion, DCT component selection, and weighted combination, the methodology aims to strike a balance that optimally preserves essential features and minimizes artifacts. Rigorous evaluations, both objective and subjective, are conducted to validate the effectiveness of the approach. This research contributes to the ongoing advancement of multimodal image fusion, addressing applications in fields such as medical imaging, surveillance, and remote sensing, where the marriage of IR and VIS data is of paramount importance.
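
The three-stage pipeline named in the abstract (PDE guidance, DCT component selection, weighted combination) can be sketched as follows; the diffusion step count, the selection rule, and the blend weights are illustrative assumptions, not the authors' settings.

```python
# A few explicit heat-equation (PDE) diffusion steps smooth each
# modality, DCT coefficients of IR and VIS are selected by magnitude,
# and the result is inverted and blended with a weighted combination.
import numpy as np
from scipy.fftpack import dct, idct

def diffuse(img, steps=10, dt=0.15):
    """Explicit heat-equation smoothing (a simple PDE guide)."""
    u = img.astype(np.float64).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * lap
    return u

def dct2(a):  return dct(dct(a, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(a): return idct(idct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(5)
ir = rng.uniform(0, 255, (64, 64))     # stand-ins for registered IR / VIS
vis = rng.uniform(0, 255, (64, 64))

# DCT component selection: keep the larger-magnitude coefficient.
C_ir, C_vis = dct2(diffuse(ir)), dct2(diffuse(vis))
C_fused = np.where(np.abs(C_ir) >= np.abs(C_vis), C_ir, C_vis)

# Weighted combination of the DCT fusion with the PDE-smoothed average.
fused = 0.7 * idct2(C_fused) + 0.3 * 0.5 * (diffuse(ir) + diffuse(vis))
print(fused.shape)
```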

Gradient Fusion Method for Night Video Enhancement

  • Rao, Yunbo;Zhang, Yuhong;Gou, Jianping
    • ETRI Journal
    • /
    • v.35 no.5
    • /
    • pp.923-926
    • /
    • 2013
  • To address video enhancement problems, a novel gradient-domain fusion method is proposed, in which gradient-domain frames of the daytime background are fused with nighttime video frames. To verify the superiority of the proposed method, it is compared with conventional techniques. The output of our method is shown to offer enhanced visual quality.
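
A toy version of gradient-domain fusion is sketched below: the stronger gradient of the two sources is kept per pixel, and an image is reconstructed by Jacobi iterations on the Poisson equation. The fusion rule and solver are stand-ins under stated assumptions, not necessarily the paper's method.

```python
# Fuse daytime-background and nighttime-frame gradients, then recover
# an image from the fused gradient field via a Poisson solve.
import numpy as np

def grad(img):
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]    # forward differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

rng = np.random.default_rng(6)
day = rng.uniform(100, 200, (64, 64))    # stand-in daytime background
night = rng.uniform(0, 60, (64, 64))     # stand-in nighttime frame

dx_d, dy_d = grad(day)
dx_n, dy_n = grad(night)

# Fusion rule: keep the gradient with the larger magnitude per pixel.
take_day = np.hypot(dx_d, dy_d) > np.hypot(dx_n, dy_n)
gx = np.where(take_day, dx_d, dx_n)
gy = np.where(take_day, dy_d, dy_n)

# Divergence of the fused field (backward differences), then Jacobi
# iterations on the Poisson equation with the night frame as boundary.
div = np.zeros_like(gx)
div[:, 1:] += gx[:, 1:] - gx[:, :-1]; div[:, 0] += gx[:, 0]
div[1:, :] += gy[1:, :] - gy[:-1, :]; div[0, :] += gy[0, :]

u = night.copy()
for _ in range(300):
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:] - div[1:-1, 1:-1])
print(u.shape)
```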