• Title/Summary/Keyword: Data fusion


Forward Collision Warning System based on Radar driven Fusion with Camera (레이더/카메라 센서융합을 이용한 전방차량 충돌경보 시스템)

  • Moon, Seungwuk;Moon, Il Ki;Shin, Kwangkeun
    • Journal of Auto-vehicle Safety Association / v.5 no.1 / pp.5-10 / 2013
  • This paper describes a Forward Collision Warning (FCW) system based on radar-driven fusion with a camera. The objective of the FCW system is to provide an appropriate alert that satisfies the US-NCAP evaluation scenarios and achieves driver acceptance. For this purpose, this paper proposes a data fusion algorithm and a collision warning algorithm. The data fusion algorithm generates fusion-target information depending on the confidence of the camera sensor. The collision warning algorithm calculates warning indexes and determines an appropriate alert timing using the analysis of manual driving data. The FCW system with the proposed data fusion and collision warning algorithms was investigated in US-NCAP scenarios and in real-road driving. It is shown that the proposed FCW system can improve alarm-timing accuracy and reduce false alarms on real roads.
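The abstract does not give the fusion rule itself, so the following is only a minimal sketch of confidence-dependent radar/camera fusion and time-to-collision alerting; the `Track` type, the weighting scheme, and the 2.5 s threshold are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Track:
    range_m: float      # longitudinal distance to target (m)
    range_rate: float   # closing speed (m/s, negative when approaching)

def fuse_target(radar: Track, camera: Track, cam_conf: float) -> Track:
    """Blend radar and camera tracks; weight the camera by its confidence.

    Radar dominates; the camera contribution grows with cam_conf in [0, 1].
    Below a cutoff, fall back to radar only.
    """
    if cam_conf < 0.3:                      # camera unreliable: radar only
        return radar
    w = 0.5 * cam_conf                      # camera weight, capped at 0.5
    return Track(
        range_m=(1 - w) * radar.range_m + w * camera.range_m,
        range_rate=(1 - w) * radar.range_rate + w * camera.range_rate,
    )

def ttc_alert(track: Track, ttc_threshold_s: float = 2.5) -> bool:
    """Warn when time-to-collision drops below the alert threshold."""
    if track.range_rate >= 0:               # not closing: no alert
        return False
    ttc = track.range_m / -track.range_rate
    return ttc < ttc_threshold_s
```

In a real FCW system the alert threshold would itself be derived from driving data, as the paper describes; the fixed value here only marks where that tuning would plug in.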

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.635-644 / 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) are being used in various applications, and optical images are the main training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets, in addition to optical images, to train the Detectron2 model, one of the improved R-CNN (Region-based Convolutional Neural Network) architectures. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features, which represent statistical texture information derived from the LiDAR data, were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a mixed method of early fusion and late fusion, resulted in a 32.65% improvement in the building detection rate compared to training on optical images alone. The experiments demonstrated the complementary effect of training on multimodal data with unique characteristics, together with an appropriate fusion strategy.
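Early and late fusion, the two ingredients the hybrid method mixes, can be sketched generically; this is not the paper's Detectron2 pipeline, and the array shapes and weights are illustrative only.

```python
import numpy as np

def early_fusion(optical: np.ndarray, infrared: np.ndarray,
                 lidar: np.ndarray) -> np.ndarray:
    """Early fusion: stack co-registered modalities as extra channels
    so a single network sees one multi-channel input."""
    return np.concatenate([optical, infrared, lidar], axis=-1)

def late_fusion(score_maps: list[np.ndarray],
                weights: list[float]) -> np.ndarray:
    """Late fusion: combine per-modality detection scores after
    separate models have run (here, a weighted average)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * s for wi, s in zip(w, score_maps))
```

A hybrid scheme would apply early fusion to some modalities (e.g. stacking edges onto the optical channels) and late fusion across the remaining per-modality models.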

A Case Study of Land-cover Classification Based on Multi-resolution Data Fusion of MODIS and Landsat Satellite Images (MODIS 및 Landsat 위성영상의 다중 해상도 자료 융합 기반 토지 피복 분류의 사례 연구)

  • Kim, Yeseul
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1035-1046 / 2022
  • This study evaluated the applicability of multi-resolution data fusion to land-cover classification. In the evaluation, a spatial time-series geostatistical deconvolution/fusion model (STGDFM) was applied as the multi-resolution data fusion model. Agricultural land in the state of Iowa, United States, was selected as the study area. Considering the landscape of the study area, Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat satellite images were used as input data for the fusion. On this basis, synthetic Landsat images were generated by applying STGDFM for the dates on which Landsat images were missing. Land-cover classification was then performed using both the acquired Landsat images and the STGDFM fusion results as input data. In particular, to evaluate the applicability of multi-resolution data fusion, two classification results were compared: one using only Landsat images and one using both Landsat images and the fusion results. In the classification using only Landsat images, mixed patterns were prominent in the corn and soybean cultivation areas, which are the main land-cover types in the study area, and the mixed patterns among vegetation land-cover types, such as hay, grain, and grass areas, were also large. In contrast, in the classification using both Landsat images and the fusion results, these mixed patterns among the vegetation types as well as corn and soybean were greatly alleviated, and classification accuracy improved by about 20 percentage points. The missing Landsat images could be compensated for by reflecting the time-series spectral information of the MODIS images in the fusion results through STGDFM. This study confirmed that multi-resolution data fusion can be effectively applied to land-cover classification.

Geohazard Monitoring with Space and Geophysical Technology - An Introduction to the KJRS 21(1) Special Issue-

  • Kim Jeong Woo;Jeon Jeong-Soo;Lee Youn Soo
    • Korean Journal of Remote Sensing / v.21 no.1 / pp.3-13 / 2005
  • The National Research Lab (NRL) project 'Optimal Data Fusion of Geophysical and Geodetic Measurements for Geological Hazards Monitoring and Prediction', supported by the Korea Ministry of Science and Technology, is briefly described. The research focused on geohazard analysis with geophysical and geodetic instruments such as the superconducting gravimeter, seismometer, magnetometer, GPS, and Synthetic Aperture Radar. The aim of the NRL research is to verify the causes of geological hazards through optimal fusion of various observational data in three phases: surface data fusion using geodetic measurements; subsurface data fusion using geophysical measurements; and, finally, fusion of both geodetic and geophysical data. The NRL hosted a special session, 'Geohazard Monitoring with Space and Geophysical Technology', during the International Symposium on Remote Sensing in 2004 to discuss current topics, challenges, and possible directions in geohazard research. Here, we briefly describe the special session papers and their relationships to the theme of the session. The fusion of satellite and ground geophysical and geodetic data gives new insight into the monitoring and prediction of geological hazards.

The Fusion of LiDAR Data and High-resolution Images for Precise Monitoring in Urban Areas (도심의 정밀 모니터링을 위한 LiDAR 자료와 고해상영상의 융합)

  • 강준묵;강영미;이형석
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.04a / pp.383-388 / 2004
  • The fusion of different kinds of sensors combines data obtained independently by the respective technologies, and it is an important technique for constructing 3D spatial information. In particular, diverse information can be realized through the fusion of LiDAR with mobile scanning systems and digital maps, or of LiDAR data with high-resolution imagery. This study generates a combined DEM and a digital orthoimage by fusing LiDAR data with a high-resolution image, and uses them to precisely monitor topography, buildings, trees, and other features in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.


Short Range Target Tracking Based on Data Fusion Method Using Asynchronous Dissimilar Sensors (비동기 이종 센서를 이용한 데이터 융합기반 근거리 표적 추적기법)

  • Lee, Eui-Hyuk
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.335-343 / 2012
  • This paper presents a target tracking algorithm for the fusion of radar and infrared (IR) sensor measurement data. Generally, fusion methods based on the Kalman filter assume that the data obtained by the radar and IR sensors are synchronized, which greatly limits their application to real systems. A key point of the proposed algorithm is that the two asynchronous dissimilar measurements are fused by compensating for the time difference between them using the radar's ranges and track state vectors. The proposed fusion algorithm is evaluated via computer simulation against existing track fusion and measurement fusion methods.
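The core idea, propagating a track to the other sensor's measurement time before fusing, can be sketched with a constant-velocity Kalman model. This is a generic sketch of time alignment plus a measurement update, not the paper's algorithm; the 1-D state `[position, velocity]` and the noise values are assumptions.

```python
import numpy as np

def predict_to_time(x: np.ndarray, P: np.ndarray, dt: float):
    """Propagate a constant-velocity track state [pos, vel] (and its
    covariance) forward by dt, so an asynchronous measurement taken dt
    later can be fused at a common time."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    return F @ x, F @ P @ F.T

def fuse_measurement(x, P, z, R, H):
    """Standard Kalman measurement update at the aligned time."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

For radar/IR fusion, the IR angle-only measurement would be combined with the radar's range to form `z`, which is the role the radar ranges play in the compensation described above.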

FusionScan: accurate prediction of fusion genes from RNA-Seq data

  • Kim, Pora;Jang, Ye Eun;Lee, Sanghyuk
    • Genomics & Informatics / v.17 no.3 / pp.26.1-26.12 / 2019
  • Identification of fusion genes is of prominent importance in cancer research because of their potential as carcinogenic drivers. RNA sequencing (RNA-Seq) data have been the most useful source for identifying fusion transcripts. Although a number of algorithms have been developed thus far, most programs produce too many false positives, making experimental confirmation almost impossible. We still lack a reliable program that achieves high precision with a reasonable recall rate. Here, we present FusionScan, a highly optimized tool for predicting fusion transcripts from RNA-Seq data. We specifically search for split reads composed of intact exons at the fusion boundaries. Using 269 known fusion cases as the reference, we implemented various mapping and filtering strategies to remove false positives without discarding genuine fusions. In a performance test using three cell line datasets with validated fusion cases (NCI-H660, K562, and MCF-7), FusionScan outperformed other existing programs by a considerable margin, achieving precision and recall rates of 60% and 79%, respectively. A simulation test also demonstrated that FusionScan recovered most true positives without producing an overwhelming number of false positives, regardless of sequencing depth and read length. The computation time was comparable to that of other leading tools. We also provide several curation aids to help users investigate the details of fusion candidates easily. We believe that FusionScan will be a reliable, efficient, and convenient program for detecting fusion transcripts that meets the requirements of the clinical and experimental community. FusionScan is freely available at http://fusionscan.ewha.ac.kr/.
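The split-read criterion can be illustrated with a toy check: a read supports a fusion if a prefix ends one exon and the remaining suffix starts another. This is not FusionScan's implementation (which involves mapping and extensive filtering); the `min_overlap` parameter and exact-match logic are simplifying assumptions.

```python
def is_split_read(read: str, exon_a: str, exon_b: str,
                  min_overlap: int = 8) -> bool:
    """Check whether a read spans a putative fusion boundary: a prefix
    of the read must end exon_a and the remaining suffix must start
    exon_b, each side matching at least min_overlap bases."""
    for k in range(min_overlap, len(read) - min_overlap + 1):
        if exon_a.endswith(read[:k]) and exon_b.startswith(read[k:]):
            return True
    return False
```

A real caller must additionally tolerate sequencing errors, rule out homologous sequences, and require multiple independent supporting reads, which is where most false positives are removed.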

Robust Hierarchical Data Fusion Scheme for Large-Scale Sensor Network

  • Song, Il Young
    • Journal of Sensor Science and Technology / v.26 no.1 / pp.1-6 / 2017
  • The advanced driver assistance system (ADAS) requires the collection of a large amount of information, including road conditions, the environment, vehicle status, the condition of the driver, and other useful data. In this regard, large-scale sensor networks can be an appropriate solution, since they have been designed for this purpose. Recent advances in sensor network technology have enabled the management and monitoring of large-scale tasks such as monitoring road surface temperature on a highway. In this paper, we consider the estimation and fusion problems of large-scale sensor networks used in ADAS. A hierarchical fusion architecture is proposed for an arbitrary topology of the large-scale sensor network. A robust cluster estimator is proposed to achieve robustness of the network against outliers or sensor failures. Lastly, a robust hierarchical data fusion scheme is proposed for the communication channel between the clusters and the fusion center, considering the non-Gaussian channel noise that is typical in communication systems.

A Study of Observability Analysis and Data Fusion for Bias Estimation in a Multi-Radar System (다중 레이더 환경에서의 바이어스 오차 추정의 가관측성에 대한 연구와 정보 융합)

  • Won, Gun-Hee;Song, Taek-Lyul;Kim, Da-Sol;Seo, Il-Hwan;Hwang, Gyu-Hwan
    • Journal of Institute of Control, Robotics and Systems / v.17 no.8 / pp.783-789 / 2011
  • Improving target tracking performance using multi-sensor data fusion is a challenging task, and biases in the measurements should be removed before data fusion techniques are applied. In this paper, a bias removal algorithm using measurement data from multi-radar tracking systems is proposed and evaluated by computer simulation. To predict bias estimation performance under various geometric relations between the radar systems and the target, a system observability index is proposed and tested via computer simulation. It is also shown that target tracking which utilizes multi-sensor data fusion with bias-removed measurements achieves better performance.
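A toy illustration of why bias removal must precede fusion: with two radars observing the same ranges, only the relative bias between them is observable from the difference of their measurements, which is the kind of limitation an observability analysis characterizes. This is not the paper's estimator; the constant-bias model and the averaging fusion are assumptions.

```python
import numpy as np

def estimate_relative_bias(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Least-squares estimate of the constant relative bias between two
    radars observing the same target: with z_a = r + b_a + noise and
    z_b = r + b_b + noise, the mean difference estimates b_a - b_b."""
    return float(np.mean(z_a - z_b))

def debias_and_fuse(z_a: np.ndarray, z_b: np.ndarray) -> np.ndarray:
    """Remove the estimated relative bias from sensor A, then fuse the
    two measurement streams by averaging."""
    bias = estimate_relative_bias(z_a, z_b)
    return 0.5 * ((z_a - bias) + z_b)
```

Note that the absolute biases remain unobservable from the difference alone; resolving them requires additional geometry or reference information, which is why the geometric relation between radars and target matters.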

A Study on Fusion Images Expressed in Hair Collections - Focusing on Juno Hair's 2013-2022 Collections

  • Jin Hyun Park;Hye Rroon Jang
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.202-209 / 2023
  • In the 21st century, the dualistic worldview of the Cold War era collapsed and we entered an era of new creation and fusion. The fusion of different designs between East and West, design activity drawing on traditional clothing of the past, the use of continuously developed new materials, and the mixing of unique items are being pursued in various fields. However, little research combines these fusion characteristics with hair: the research period is short and the body of work is small. Therefore, this study analyzed the hairstyles of fusion images shown in hair collections, using the Juno Hair collections from 2013 to 2022 as analysis data, and examined the types of fusion images shown in the works by dividing them into three categories: folk images, mixed images, and future-oriented images. The results of this research can be used not only as data for predicting future fashion trends but also as basic data for exploring new design developments. In future research, it is expected that convergent studies will be conducted, such as analyzing fusion images from an integrated perspective.