• Title/Abstract/Keyword: Data Fusion


Forward Collision Warning System Based on Radar-driven Fusion with Camera (레이더/카메라 센서융합을 이용한 전방차량 충돌경보 시스템)

  • 문승욱;문일기;신광근
    • 자동차안전학회지, Vol. 5, No. 1, pp. 5-10, 2013
  • This paper describes a Forward Collision Warning (FCW) system based on radar-driven fusion with a camera. The objective of the FCW system is to provide an appropriate alert that satisfies the US-NCAP evaluation scenarios and driver acceptance. For this purpose, the paper proposes a data fusion algorithm and a collision warning algorithm. The data fusion algorithm generates fusion-target information depending on the confidence of the camera sensor. The collision warning algorithm calculates warning indexes and determines an appropriate alert timing using analysis results from manual driving data. The FCW system with the proposed data fusion and collision warning algorithms was evaluated under US-NCAP scenarios and in real-road driving. It is shown that the proposed FCW system can improve the accuracy of the alert timing and reduce false alarms on real roads.
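The abstract does not spell out the fusion rule or the warning indexes, so the following Python sketch only illustrates the general idea under stated assumptions: a hypothetical confidence threshold gates how much the camera contributes to the fusion target, and a time-to-collision (TTC) check stands in for the paper's alert-timing indexes derived from manual driving data.

```python
def fuse_target(radar_range, radar_range_rate, cam_lateral, cam_confidence,
                conf_threshold=0.7):
    """Combine radar and camera measurements into one fusion target.

    Radar provides range/range-rate; the camera's lateral position is used
    only when its confidence is high (hypothetical gating rule).
    """
    fused = {"range": radar_range, "range_rate": radar_range_rate}
    fused["lateral"] = cam_lateral if cam_confidence >= conf_threshold else None
    return fused

def warning_level(fused, ttc_warn=2.7):
    """Issue a warning when the time-to-collision falls below a threshold.

    TTC = range / closing speed; the 2.7 s threshold is an illustrative
    value, not the alert timing derived in the paper.
    """
    closing_speed = -fused["range_rate"]          # positive when closing in
    if closing_speed <= 0:                        # opening range: no threat
        return "none"
    ttc = fused["range"] / closing_speed
    return "warn" if ttc < ttc_warn else "none"

# Example: target 30 m ahead, closing at 12 m/s -> TTC = 2.5 s -> warn
target = fuse_target(radar_range=30.0, radar_range_rate=-12.0,
                     cam_lateral=0.3, cam_confidence=0.9)
print(warning_level(target))
```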

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • 조은지;이동천
    • 한국측량학회지, Vol. 38, No. 6, pp. 635-644, 2020
  • Research on object recognition, detection, and segmentation using deep learning (DL) is applied in many fields, and images are typically used as the training data for DL models. The objective of this paper, however, is to segment objects and detect buildings by training Detectron2, an improved region-based convolutional neural network (R-CNN) model, on multimodal training data that includes not only images but also spatial-information characteristics. To this end, several features were extracted, such as the object outlines inherent in infrared aerial imagery and LiDAR data and the Haralick features, which carry statistical texture information. The training performance of a DL model depends not only on the quantity and characteristics of the data but also on the fusion method. Applying hybrid fusion, which combines early fusion and late fusion, detected 33% more buildings. These experimental results demonstrate the complementary effect of jointly training on and fusing data with different characteristics.
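As a rough illustration of how early and late fusion can be mixed, the sketch below stacks co-registered infrared, LiDAR-derived, and texture rasters into one multi-channel input (early fusion) and merges the box outputs of two detectors by IoU (late fusion). It is not the paper's Detectron2 pipeline; the array names and the IoU threshold are assumptions for illustration.

```python
import numpy as np

def early_fusion(ir_band, lidar_ndsm, haralick_texture):
    """Early fusion: stack co-registered rasters into one multi-channel input.

    All inputs are assumed to be HxW arrays on the same grid; in the paper
    the channels come from infrared imagery, LiDAR and Haralick features.
    """
    return np.dstack([ir_band, lidar_ndsm, haralick_texture]).astype(np.float32)

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def late_fusion(boxes_model_a, boxes_model_b, iou_thresh=0.5):
    """Late fusion: union of two detectors' boxes, de-duplicated by IoU."""
    merged = list(boxes_model_a)
    for b in boxes_model_b:
        if all(iou(b, a) < iou_thresh for a in boxes_model_a):
            merged.append(b)          # building seen only by the second model
    return merged

# Toy example: the second detector contributes one extra building.
print(late_fusion([(0, 0, 10, 10)], [(1, 1, 9, 9), (20, 20, 30, 30)]))
```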

A Case Study of Land-cover Classification Based on Multi-resolution Data Fusion of MODIS and Landsat Satellite Images (MODIS 및 Landsat 위성영상의 다중 해상도 자료 융합 기반 토지 피복 분류의 사례 연구)

  • 김예슬
    • 대한원격탐사학회지, Vol. 38, No. 6-1, pp. 1035-1046, 2022
  • This study evaluated the applicability of multi-resolution data fusion for land-cover classification, using the spatial time-series geostatistical deconvolution/fusion model (STGDFM) as the fusion model. The study area was a portion of the agricultural region of Iowa, USA, and considering its extent, Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat images were used as the fusion inputs. STGDFM was applied to generate synthetic Landsat images for the dates on which Landsat images were missing, and land-cover classification was then performed using the acquired Landsat images together with the STGDFM fusion results as input data. In particular, to assess the applicability of multi-resolution data fusion, the classification result obtained using only the acquired Landsat images was compared with the result obtained using both the Landsat images and the fusion results. With Landsat images alone, substantial confusion appeared between corn and soybean fields, the dominant land covers of the study area, as well as among vegetation covers such as hay and grain areas and grassland. In contrast, when both the Landsat images and the fusion results were used, the confusion between corn and soybean fields and among vegetation covers was greatly reduced, and the classification accuracy improved by about 20 percentage points. This is attributed to STGDFM compensating for the missing Landsat images by carrying the time-series spectral information of the MODIS images into the fusion results; once this time-series spectral information was incorporated into the classification, misclassification was greatly reduced. These results confirm that multi-resolution data fusion can be applied effectively to land-cover classification.
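The sketch below only illustrates the evaluation setup described in the abstract, comparing a classifier trained on Landsat-only features with one trained on Landsat plus fusion-filled gap dates. It uses random stand-in data and a scikit-learn random forest in place of the paper's classifier; STGDFM itself is not reimplemented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-pixel NDVI time series: observed Landsat dates plus
# fused (STGDFM-style synthetic) dates that fill the Landsat gaps.
n_pixels = 500
landsat_ndvi = rng.random((n_pixels, 4))   # 4 acquired Landsat dates
fused_ndvi = rng.random((n_pixels, 8))     # 8 gap dates filled by fusion
labels = rng.integers(0, 3, n_pixels)      # e.g. corn / soybean / grass

# Classification with Landsat-only features vs. Landsat + fused features.
landsat_only = landsat_ndvi
with_fusion = np.hstack([landsat_ndvi, fused_ndvi])

for name, features in [("Landsat only", landsat_only),
                       ("Landsat + fusion", with_fusion)]:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[:400], labels[:400])
    acc = clf.score(features[400:], labels[400:])
    print(f"{name}: accuracy {acc:.2f}")   # random data, so no real gain here
```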

Geohazard Monitoring with Space and Geophysical Technology - An Introduction to the KJRS 21(1) Special Issue-

  • Kim Jeong Woo;Jeon Jeong-Soo;Lee Youn Soo
    • 대한원격탐사학회지, Vol. 21, No. 1, pp. 3-13, 2005
  • The National Research Lab (NRL) project 'Optimal Data Fusion of Geophysical and Geodetic Measurements for Geological Hazards Monitoring and Prediction', supported by the Korea Ministry of Science and Technology, is briefly described. The research focused on geohazard analysis with geophysical and geodetic instruments such as the superconducting gravimeter, seismometer, magnetometer, GPS, and Synthetic Aperture Radar. The aim of the NRL research is to verify the causes of geological hazards through optimal fusion of various observational data in three phases: surface data fusion using geodetic measurements; subsurface data fusion using geophysical measurements; and, finally, fusion of both geodetic and geophysical data. The NRL hosted a special session, 'Geohazard Monitoring with Space and Geophysical Technology', during the International Symposium on Remote Sensing in 2004 to discuss current topics, challenges, and possible directions in geohazard research. Here, we briefly describe the special-session papers and their relationships to the theme of the session. The fusion of satellite and ground-based geophysical and geodetic data provides new insight into the monitoring and prediction of geological hazards.

The Fusion of LiDAR Data and High-resolution Imagery for Precise Monitoring in Urban Areas (도심의 정밀 모니터링을 위한 LiDAR 자료와 고해상영상의 융합)

  • 강준묵;강영미;이형석
    • 한국측량학회:학술대회논문집, 한국측량학회 2004년도 춘계학술발표회논문집, pp. 383-388, 2004
  • Fusion of different types of sensors combines data obtained independently by each technology and is an important technique for building 3D spatial information. In particular, diverse information can be produced by fusing LiDAR with mobile scanning systems and digital maps, or by fusing LiDAR data with high-resolution imagery. This study generates a combined DEM and a digital orthoimage by fusing LiDAR data with high-resolution imagery, and uses them to precisely monitor topography, buildings, trees, and other features in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.
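A minimal sketch of the DEM-generation half of this workflow, assuming a hypothetical LiDAR point cloud and using scipy's griddata to interpolate the irregular returns onto a regular grid; the orthoimage production and the actual monitoring analysis are not reproduced.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Hypothetical LiDAR point cloud: x, y in metres, z elevation in metres.
points = rng.random((2000, 2)) * 100.0
elevation = 50.0 + 5.0 * np.sin(points[:, 0] / 10.0) + rng.normal(0, 0.2, 2000)

# Grid the irregular LiDAR returns into a regular DEM (1 m cells).
xi = np.arange(0.0, 100.0, 1.0)
yi = np.arange(0.0, 100.0, 1.0)
gx, gy = np.meshgrid(xi, yi)
dem = griddata(points, elevation, (gx, gy), method="linear")

# A co-registered high-resolution image band on the same 1 m grid can then
# be draped over the DEM (or orthorectified with it) for urban monitoring.
image_band = rng.random(dem.shape)
print(dem.shape, np.nanmin(dem), np.nanmax(dem), image_band.shape)
```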


Short Range Target Tracking Based on Data Fusion Method Using Asynchronous Dissimilar Sensors (비동기 이종 센서를 이용한 데이터 융합기반 근거리 표적 추적기법)

  • 이의혁
    • 전자공학회논문지, Vol. 49, No. 9, pp. 335-343, 2012
  • This paper describes an algorithm that tracks a target approaching at short range by fusing radar and thermal-image measurements. Tracking-fusion methods based on the Kalman filter generally assume synchronized radar and thermal-image data and therefore have many limitations when applied to real systems that operate asynchronously. The key point of the proposed algorithm is that, when measurements arrive from the two unsynchronized dissimilar sensors, the time difference between the measurements is compensated using the radar range information and the track state vector, the measurements are fused, and tracking is then performed. For performance evaluation, the proposed algorithm is compared with conventional track-based fusion and measurement fusion methods.
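The abstract gives the idea but not the equations, so the sketch below shows one plausible form of the time compensation in a simplified 2-D constant-velocity setting: the track is predicted to the thermal-image time stamp, the radar range is extrapolated with the track's radial speed, and the range/bearing pair is fused into a position pseudo-measurement for a Kalman update. All numbers and the measurement model are assumptions, not the paper's.

```python
import numpy as np

def predict(x, P, dt, q=1.0):
    """Constant-velocity prediction of state [px, py, vx, vy] over dt seconds."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])
    return F @ x, F @ P @ F.T + Q

def update_position(x, P, z, R):
    """Kalman update with a fused position pseudo-measurement z = [px, py]."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track state at the radar time stamp (position in m, velocity in m/s).
x = np.array([100.0, 50.0, -20.0, -5.0])
P = np.eye(4) * 10.0

radar_range, t_radar = 112.0, 0.00    # radar reports range at t = 0.00 s
ir_azimuth, t_ir = 0.46, 0.04         # thermal image reports bearing 40 ms later

# Time compensation: propagate the track to the thermal-image time stamp and
# extrapolate the radar range with the track's radial speed before fusing.
x_ir, P_ir = predict(x, P, t_ir - t_radar)
radial_speed = (x_ir[:2] @ x_ir[2:]) / np.linalg.norm(x_ir[:2])
range_at_ir = radar_range + radial_speed * (t_ir - t_radar)

# Fused pseudo-measurement: radar range + IR bearing -> Cartesian position.
z = np.array([range_at_ir * np.cos(ir_azimuth), range_at_ir * np.sin(ir_azimuth)])
x_new, P_new = update_position(x_ir, P_ir, z, R=np.diag([4.0, 4.0]))
print(np.round(x_new, 2))
```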

FusionScan: accurate prediction of fusion genes from RNA-Seq data

  • Kim, Pora;Jang, Ye Eun;Lee, Sanghyuk
    • Genomics & Informatics
    • /
    • 제17권3호
    • /
    • pp.26.1-26.12
    • /
    • 2019
  • Identification of fusion genes is of prominent importance in the cancer research field because of their potential as carcinogenic drivers. RNA sequencing (RNA-Seq) data have been the most useful source for identifying fusion transcripts. Although a number of algorithms have been developed thus far, most programs produce too many false positives, making experimental confirmation almost impossible, and a reliable program that achieves high precision with a reasonable recall rate is still lacking. Here, we present FusionScan, a highly optimized tool for predicting fusion transcripts from RNA-Seq data. We specifically search for split reads composed of intact exons at the fusion boundaries. Using 269 known fusion cases as the reference, we have implemented various mapping and filtering strategies to remove false positives without discarding genuine fusions. In a performance test using three cell-line datasets with validated fusion cases (NCI-H660, K562, and MCF-7), FusionScan outperformed other existing programs by a considerable margin, achieving precision and recall rates of 60% and 79%, respectively. Simulation tests also demonstrated that FusionScan recovered most of the true positives without producing an overwhelming number of false positives, regardless of sequencing depth and read length. The computation time was comparable to that of other leading tools. We also provide several curation aids to help users investigate the details of fusion candidates easily. We believe that FusionScan is a reliable, efficient, and convenient program for detecting fusion transcripts that meets the requirements of the clinical and experimental community. FusionScan is freely available at http://fusionscan.ewha.ac.kr/.
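FusionScan's full mapping and filtering pipeline is not reproduced here; the toy sketch below only illustrates the split-read criterion highlighted in the abstract: both segments of a read must land exactly on annotated exon boundaries of two different genes. The gene names and coordinates are hypothetical.

```python
# Hypothetical exon-boundary annotation: gene -> sets of exon start/end
# coordinates. Real tools derive these from a transcript database.
exon_boundaries = {
    "GENE_A": {"exon_ends": {1000, 2500}, "exon_starts": {800, 2000}},
    "GENE_B": {"exon_ends": {7200}, "exon_starts": {7000, 9100}},
}

def is_fusion_split_read(left_gene, left_end, right_gene, right_start):
    """Flag a split read whose two segments map to different genes and whose
    breakpoints coincide exactly with annotated exon boundaries (the kind of
    evidence FusionScan searches for, per the abstract)."""
    if left_gene == right_gene:
        return False
    return (left_end in exon_boundaries[left_gene]["exon_ends"]
            and right_start in exon_boundaries[right_gene]["exon_starts"])

# A read whose first half ends at GENE_A exon end 2500 and whose second half
# starts at GENE_B exon start 7000 supports a GENE_A-GENE_B fusion junction.
print(is_fusion_split_read("GENE_A", 2500, "GENE_B", 7000))   # True
print(is_fusion_split_read("GENE_A", 2499, "GENE_B", 7000))   # False
```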

Robust Hierarchical Data Fusion Scheme for Large-Scale Sensor Network

  • Song, Il Young
    • 센서학회지, Vol. 26, No. 1, pp. 1-6, 2017
  • An advanced driver assistance system (ADAS) requires the collection of a large amount of information, including road conditions, the environment, vehicle status, the condition of the driver, and other useful data. Large-scale sensor networks, which have been designed for exactly this purpose, are an appropriate solution, and recent advances in sensor network technology have enabled the management and monitoring of large-scale tasks such as monitoring road-surface temperature along a highway. In this paper, we consider the estimation and fusion problems of large-scale sensor networks used in an ADAS. A hierarchical fusion architecture is proposed for an arbitrary topology of the large-scale sensor network. A robust cluster estimator is proposed to make the network robust against outliers or sensor failures. Lastly, a robust hierarchical data fusion scheme is proposed for the communication channel between the clusters and the fusion center, considering the non-Gaussian channel noise that is typical in communication systems.
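The abstract does not state the estimator equations, so the following is a minimal sketch of the two-level idea under assumptions: each cluster forms a local estimate (here simply a median), and the fusion center combines the cluster reports with Huber-style reweighting so that a report corrupted by heavy-tailed channel noise is down-weighted rather than trusted.

```python
import numpy as np

def cluster_estimate(measurements):
    """Local (cluster-level) estimate: the median of the cluster's sensor
    readings, which already tolerates a few failed sensors."""
    return np.median(measurements)

def huber_weight(residual, k=1.345):
    """Huber weight: 1 inside the threshold, decaying outside it."""
    r = abs(residual)
    return 1.0 if r <= k else k / r

def fusion_center(cluster_values, iterations=5):
    """Robust fusion of cluster estimates received over noisy channels.

    Iteratively re-weighted mean with Huber weights, so a cluster whose
    report is corrupted by heavy-tailed channel noise is down-weighted.
    """
    x = np.median(cluster_values)
    scale = np.median(np.abs(cluster_values - x)) + 1e-9
    for _ in range(iterations):
        w = np.array([huber_weight((v - x) / scale) for v in cluster_values])
        x = np.sum(w * cluster_values) / np.sum(w)
    return x

# Example: road-surface temperature reported by 4 clusters; one report is
# corrupted in transit. The robust fusion stays near the true value (~2.0).
clusters = [np.array([2.1, 1.9, 2.0]), np.array([2.2, 2.1]),
            np.array([1.8, 2.0, 1.9]), np.array([2.0, 45.0, 2.1])]
reports = np.array([cluster_estimate(c) for c in clusters])
reports[1] += 30.0                      # channel outlier
print(round(fusion_center(reports), 2))
```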

A Study of Observability Analysis and Data Fusion for Bias Estimation in a Multi-Radar System (다중 레이더 환경에서의 바이어스 오차 추정의 가관측성에 대한 연구와 정보 융합)

  • 원건희;송택렬;김다솔;서일환;황규환
    • 제어로봇시스템학회논문지, Vol. 17, No. 8, pp. 783-789, 2011
  • Improving target tracking performance using multi-sensor data fusion is a challenging task, and biases in the measurements should be removed before the various data fusion techniques are applied. In this paper, a bias-removal algorithm using measurement data from multi-radar tracking systems is proposed and evaluated by computer simulation. To predict bias estimation performance for various geometric relations between the radar systems and the target, a system observability index is proposed and tested via computer simulation. It is also shown that target tracking that uses multi-sensor data fusion with bias-removed measurements achieves better performance.
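As a simplified illustration of bias removal before fusion (not the paper's algorithm or its observability index), the sketch below estimates the relative bias between two radars from the mean of their measurement differences, which is the least-squares solution for a constant bias, and shows that fusing bias-corrected measurements reduces the tracking error. The trajectory and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-D target trajectory observed by two radars (already converted
# to a common Cartesian frame). Radar 2 carries an unknown constant bias.
true_pos = np.cumsum(rng.normal(0, 1, (200, 2)), axis=0) + np.array([500.0, 300.0])
bias_2 = np.array([12.0, -7.0])
z1 = true_pos + rng.normal(0, 2.0, true_pos.shape)            # radar 1
z2 = true_pos + bias_2 + rng.normal(0, 2.0, true_pos.shape)   # radar 2

# Least-squares estimate of the relative bias from measurement differences:
# z2 - z1 = bias_2 - bias_1 + noise, so the sample mean is the LS solution.
relative_bias = (z2 - z1).mean(axis=0)
print(np.round(relative_bias, 2))       # close to [12, -7]

# Bias-removed fusion: correct radar 2, then average the two measurements.
fused = 0.5 * (z1 + (z2 - relative_bias))
rmse_fused = np.sqrt(((fused - true_pos) ** 2).sum(axis=1).mean())
rmse_biased = np.sqrt((((z1 + z2) / 2 - true_pos) ** 2).sum(axis=1).mean())
print(round(rmse_biased, 2), ">", round(rmse_fused, 2))   # fusion improves after bias removal
```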

A Study on Fusion Images Expressed in Hair Collections - Focusing on Juno Hair's 2013-2022 Collections

  • Jin Hyun Park;Hye Rroon Jang
    • International Journal of Advanced Culture Technology, Vol. 11, No. 4, pp. 202-209, 2023
  • In the 21st century, the dualistic worldview of the Cold War era collapsed and an era of new creation and fusion began. The fusion of different designs between East and West, design work drawing on traditional clothing of the past, the use of continuously developed new materials, and the mixing of unique items are being pursued in various fields. Research that combines these fusion characteristics with hair, however, covers only a short period and remains limited in volume. Therefore, this study analyzed the hairstyles of fusion images shown in hair collections, using the Juno Hair collections from 2013 to 2022 as the analysis data, and classified the fusion images appearing in the works into three categories - folk images, mixed images, and future-oriented images - for analysis. The results of this research can be used not only as data for predicting future fashion trends but also as basic data for exploring new design development. In future research, convergent studies are expected, such as analyzing fusion images from an integrated perspective.