• Title/Abstract/Keyword: Information Fusion

1,895 search results

Rank-level Fusion Method That Improves Recognition Rate by Using Correlation Coefficient

  • 안정호;정재열;정익래
    • 정보보호학회논문지 / Vol. 29, No. 5 / pp. 1007-1017 / 2019
  • Most current biometric authentication systems authenticate users with a single biometric trait. This approach has many problems: noise, sensitivity to data quality, spoofing, and a limited recognition rate. One proposed remedy is to use multiple biometric traits. A multimodal biometric system performs information fusion on the individual traits to generate new information and then authenticates the user with it. Among information fusion methods, score-level fusion is the most widely used, but it requires normalization, and even on identical data the recognition rate varies with the normalization method. Rank-level fusion, which needs no normalization, has been proposed as an alternative, but existing rank-level fusion methods achieve lower recognition rates than score-level fusion. To resolve this, we propose a rank-level fusion method that uses the correlation coefficient and attains a higher recognition rate than score-level fusion. In experiments with iris data (CASIA V3) and face data (FERET V1), we compared the recognition rates of existing rank-level fusion methods, score-level fusion methods, and the proposed method; the recognition rate improved by about 0.3% to 3.3%.
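To make the rank-level idea concrete, here is a minimal sketch of weighted Borda-count fusion. The per-matcher weights are assumed to be given (e.g. derived from correlation coefficients); the paper's actual weighting scheme is not reproduced here.

```python
import numpy as np

def weighted_borda_fusion(rank_lists, weights):
    """Fuse per-matcher rank lists with a weighted Borda count.

    rank_lists[m][i] is the rank (1 = best) that matcher m assigns to
    candidate i; weights holds one non-negative weight per matcher.
    Returns candidate indices ordered from best to worst.
    """
    ranks = np.asarray(rank_lists, dtype=float)   # shape (matchers, candidates)
    n = ranks.shape[1]
    # Borda points: rank 1 earns n-1 points, the worst rank earns 0.
    borda = np.asarray(weights, dtype=float) @ (n - ranks)
    return np.argsort(-borda)                     # best candidate first

# Two matchers rank three candidates; matcher 0 is weighted more heavily.
order = weighted_borda_fusion([[1, 2, 3], [2, 1, 3]], [0.7, 0.3])
```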

Traffic Flow Prediction with Spatio-Temporal Information Fusion using Graph Neural Networks

  • Huijuan Ding;Giseop Noh
    • International journal of advanced smart convergence / Vol. 12, No. 4 / pp. 88-97 / 2023
  • Traffic flow prediction is of great significance in urban planning and traffic management. As the complexity of urban traffic increases, existing prediction methods still face challenges, especially in fusing spatio-temporal information and capturing long-term dependencies. This study uses a graph-neural-network fusion model to solve the spatio-temporal information fusion problem in traffic flow prediction. We propose a new deep learning model, Spatio-Temporal Information Fusion using Graph Neural Networks (STFGNN), which alternates GCN, TCN, and LSTM modules to fuse spatio-temporal information. The GCN and the multi-core TCN capture the spatial and temporal dependencies of traffic flow, respectively, and the LSTM connects multiple fusion modules. In experiments on real traffic flow data, STFGNN outperformed other models.
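The spatial half of such a model is a graph convolution over the road network. As an illustration only (STFGNN's exact module wiring is not reproduced here), one standard GCN propagation step can be sketched as:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step in the common Kipf-Welling form:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), where A is the road-graph
    adjacency matrix, H the node features, and W a learned weight matrix."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Three road segments in a ring, two features per node, random weights.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [0., 0.]])
out = gcn_layer(A, H, np.ones((2, 2)))
```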

A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu;Xiang, Keyun;Chen, Xiaopeng;Liu, Cheng
    • Journal of Information Processing Systems / Vol. 17, No. 5 / pp. 1004-1019 / 2021
  • To address low contrast, blurred edge details, and missing edge details in noisy image fusion, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency and a high-frequency component. Since high-frequency noise and edge information are concentrated in the high-frequency component, the improved bilateral filter is applied to the high-frequency components of both images, suppressing noise while computing the edge detail of the infrared image's high-frequency component. By superimposing the high-frequency components of the infrared and visible images, the algorithm extracts as much edge detail as possible from both, so edge information is enhanced and the visual effect becomes clearer. For the low-frequency coefficients, a fusion rule based on the local-area standard deviation is adopted. Finally, the fused image is reconstructed from the high- and low-frequency coefficients by the inverse NSCT. The fusion results show that edges, contours, textures, and other details are maintained and enhanced while noise is filtered out, yielding a fused image with clear edges. The algorithm filters noise well and obtains clear fused images in noisy infrared and visible light image fusion.
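For reference, the baseline bilateral filter that the paper improves on can be sketched as follows. The parameter values are illustrative, and the paper's improved variant differs; the point is that the range weight suppresses averaging across large intensity jumps, so noise is smoothed while edges survive.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Plain bilateral filter on a 2-D array: each output pixel is a
    weighted mean of its neighborhood, with weights = spatial Gaussian
    x range Gaussian (small weight for pixels that differ strongly)."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# A vertical step edge stays sharp after filtering.
step = np.zeros((6, 6)); step[:, 3:] = 1.0
filtered = bilateral_filter(step)
```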

Fusion Techniques Comparison of GeoEye-1 Imagery

  • Kim, Yong-Hyun;Kim, Yong-Il;Kim, Youn-Soo
    • 대한원격탐사학회지 / Vol. 25, No. 6 / pp. 517-529 / 2009
  • Many satellite image fusion techniques have been developed to produce a high-resolution multispectral (MS) image by combining a high-resolution panchromatic (PAN) image with a low-resolution MS image. Heretofore, most high-resolution image fusion studies have used IKONOS and QuickBird images. Recently GeoEye-1, offering the highest resolution of any commercial imaging system, was launched. In this study, we have experimented with GeoEye-1 images in order to evaluate which fusion algorithms are suitable for them. This paper compares and evaluates the efficiency of five image fusion techniques for GeoEye-1 imagery: the à trous algorithm-based additive wavelet transform (AWT) fusion technique, the Principal Component Analysis (PCA) fusion technique, Gram-Schmidt (GS) spectral sharpening, Pansharp, and the Smoothing Filter-based Intensity Modulation (SFIM) fusion technique. The results show that the AWT technique preserves more spatial detail of the PAN image and more spectral information of the MS image than the other techniques, and that Pansharp preserves the original PAN and MS information as well as AWT does.
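Of the five techniques, SFIM has the simplest closed form: each MS band is modulated by the ratio of the PAN image to its low-pass version. A sketch follows, with a box filter standing in for the smoothing filter (an assumption; the paper's exact kernel and resampling are not reproduced).

```python
import numpy as np

def sfim_fuse(ms_band, pan, kernel=3):
    """Smoothing Filter-based Intensity Modulation:
    fused = MS * PAN / lowpass(PAN).
    ms_band must already be resampled to the PAN grid."""
    pad = kernel // 2
    p = np.pad(pan, pad, mode="edge")
    low = np.zeros_like(pan, dtype=float)
    for dy in range(kernel):           # box-filter low-pass of PAN
        for dx in range(kernel):
            low += p[dy:dy + pan.shape[0], dx:dx + pan.shape[1]]
    low /= kernel * kernel
    return ms_band * pan / np.maximum(low, 1e-12)

# With a flat PAN image the ratio is 1, so the MS band passes through.
ms = np.arange(16.0).reshape(4, 4)
fused = sfim_fuse(ms, np.full((4, 4), 2.0))
```

The ratio PAN / lowpass(PAN) carries only the high-frequency spatial detail, which is why SFIM tends to distort spectra less than substitution methods such as PCA.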

Lane Information Fusion Scheme Using Multiple Lane Sensors

  • 이수목;박기광;서승우
    • 전자공학회논문지 / Vol. 52, No. 12 / pp. 142-149 / 2015
  • A lane detection system based on a single camera sensor is vulnerable to abrupt illumination changes and adverse weather. Sensor fusion can stabilize performance and overcome such single-sensor limitations. However, most prior sensor-fusion research is confined to fusion models for objects and vehicles, making it hard to adapt, and typically ignores the heterogeneity of lane sensors in signal period and recognition range. This study therefore proposes a scheme that optimally fuses lane information while accounting for that heterogeneity. The proposed fusion framework handles each sensor's variable signal-processing period and recognition confidence range, enabling precise fusion with various combinations of lane sensors. In addition, a new lane prediction model refines intermittently arriving lane information into fine-grained estimates, synchronizing the multi-rate signals. Experiments under poor illumination and a quantitative evaluation verify that the proposed fusion system improves recognition performance over a single sensor.
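A minimal sketch of the core idea, predicting asynchronous per-sensor estimates to a common timestamp before fusing them. The constant-rate model and inverse-variance weights are illustrative stand-ins for the paper's prediction model.

```python
import numpy as np

def fuse_lane_estimates(estimates, t_now):
    """Inverse-variance fusion of asynchronous lane-offset estimates.

    Each sensor reports (t, offset, rate, var): a timestamp, a lateral
    offset, its rate of change, and an error variance.  Each estimate is
    propagated to the common time t_now, then fused."""
    preds, weights = [], []
    for t, offset, rate, var in estimates:
        preds.append(offset + rate * (t_now - t))  # predict to t_now
        weights.append(1.0 / var)                  # inverse-variance weight
    preds, weights = np.array(preds), np.array(weights)
    return (weights * preds).sum() / weights.sum()

# Two sensors with different report times, synchronized to t = 1.0 s.
fused = fuse_lane_estimates([(0.0, 1.0, 0.5, 0.1),
                             (0.5, 1.2, 0.5, 0.1)], 1.0)
```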

Multi-modality image fusion via generalized Riesz-wavelet transformation

  • Jin, Bo;Jing, Zhongliang;Pan, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 11 / pp. 4118-4136 / 2014
  • To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images. The proposed method can capture directional image structure arbitrarily by exploiting a suitably parameterized fusion model and additional structural information. Its fusion patterns are controlled by a heuristic fusion model based on image phase and coherence features, which explores and retains structural information efficiently and consistently. A performance analysis on real-world images demonstrates that the proposed method is competitive with state-of-the-art fusion methods, especially in combining structural information.

An Efficient Rank-level Fusion Method Improving Recognition Rate

  • 안정호;권태연;노건태;정익래
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2017년도 춘계학술발표대회 / pp. 312-314 / 2017
  • User authentication with biometric information is a next-generation method that is increasingly being adopted in existing authentication systems. Most current biometric systems use a single trait, which suffers from noise, data-quality issues, and a limited recognition rate. One remedy is user authentication with multiple biometric traits: a multimodal biometric system applies information fusion to the individual traits to generate new information, and authenticates the user from it. Among information fusion methods, rank-level fusion is chosen as an alternative to score-level fusion, which requires normalization and has high computational complexity. This paper therefore proposes a rank-level fusion method with higher accuracy than existing methods, and shows experimentally that it improves accuracy even when low-accuracy matchers are used.
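One classic rank-level rule that such work builds on is highest-rank fusion, sketched below. The tie-break by summed rank is an illustrative choice, not the paper's method.

```python
import numpy as np

def highest_rank_fusion(rank_lists):
    """Highest-rank fusion: each candidate keeps its best (lowest) rank
    across all matchers, then candidates are re-sorted by that rank.
    Ties are broken by the summed rank (an illustrative tie-break)."""
    ranks = np.asarray(rank_lists)
    best = ranks.min(axis=0)      # best rank per candidate
    total = ranks.sum(axis=0)     # tie-break key
    return np.lexsort((total, best))   # primary key last: sort by best rank

# Two matchers disagree; candidate 2 wins on best rank + tie-break.
order = highest_rank_fusion([[3, 1, 2], [2, 3, 1]])
```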

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 2 / pp. 810-831 / 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in recognition decreases the efficiency of most bimodal information fusion algorithms. This paper presents a novel algorithm, incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), and employs it for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features. Extensive experiments on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset show that ICDKCFA is faster than the original kernel cross-modal factor analysis with comparable performance, and that it outperforms other common information fusion methods such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion.
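The speedup comes from replacing the full n-by-n kernel matrix with a low-rank factor produced by pivoted incomplete Cholesky decomposition. A generic sketch of that decomposition (not the full ICDKCFA pipeline):

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    """Pivoted incomplete Cholesky: K ~= G @ G.T with rank r << n.
    Greedily picks the pivot with the largest residual diagonal and
    stops once all residual diagonals fall below tol."""
    n = K.shape[0]
    max_rank = max_rank or n
    G = np.zeros((n, max_rank))
    d = np.diag(K).astype(float)       # residual diagonal of K - G G^T
    for k in range(max_rank):
        i = int(np.argmax(d))
        if d[i] <= tol:                # residual negligible: done
            return G[:, :k]
        G[:, k] = (K[:, i] - G @ G[i]) / np.sqrt(d[i])
        d -= G[:, k] ** 2
    return G

# A rank-2 kernel matrix is reconstructed exactly from 2 columns.
X = np.array([[1., 0.], [0., 1.], [1., 1.]])
K = X @ X.T
G = incomplete_cholesky(K)
```

Kernel methods can then work with G (n x r) instead of K (n x n), cutting both memory and the cost of downstream linear algebra.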

A Sensory-Motor Fusion System for Object Tracking

  • 이상희;위재우;이종호
    • 대한전기학회논문지:시스템및제어부문D / Vol. 52, No. 3 / pp. 181-187 / 2003
  • For moving platforms equipped with environmental sensors, such as an object-tracking mobile robot with audio and video sensors, the sensory information keeps changing as objects move. In such cases, conventional control schemes show limited performance due to system complexity and a lack of adaptability, so sensory-motor systems, which can respond intuitively to various types of environmental information, are desirable. Fusing two or more types of sensory information simultaneously also improves system robustness. In this paper, based on Braitenberg's model, we propose a sensory-motor fusion system that tracks moving objects adaptively to environmental changes. With its directly connected structure, the system controls each motor simultaneously, and neural networks fuse the information from the various sensors. Even if one sensor delivers noisy information, the system keeps working robustly because sensor fusion compensates with information from the other sensors. To examine performance, the sensory-motor fusion model is applied to an object-tracking four-legged robot equipped with audio and video sensors. The experimental results show that the system tracks moving objects robustly with a simpler control mechanism than model-based control approaches.
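Braitenberg's model couples sensors to motors directly, with crossed connections so the robot turns toward a stimulus. A minimal sketch with fixed fusion weights (illustrative only; the paper learns the coupling with neural networks):

```python
def braitenberg_step(left_sensors, right_sensors, weights):
    """Direct sensor-to-motor coupling with crossed connections:
    the left motor is driven by the right-side sensors and vice versa,
    so a stimulus on one side speeds up the opposite motor and the
    robot turns toward it.  Each side fuses several sensor types
    (e.g. audio + video) as a weighted sum."""
    left_motor = sum(w * s for w, s in zip(weights, right_sensors))
    right_motor = sum(w * s for w, s in zip(weights, left_sensors))
    return left_motor, right_motor

# Stimulus on the left (audio=2.0, video=1.0) -> right motor spins
# faster, turning the robot leftward toward the stimulus.
lm, rm = braitenberg_step([2.0, 1.0], [0.0, 0.0], [1.0, 0.5])
```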

Compression Filters Based on Time-Propagated Measurement Fusion

  • 이형근;이장규
    • 대한전기학회논문지:시스템및제어부문D / Vol. 51, No. 9 / pp. 389-401 / 2002
  • To complement the conventional fusion methodologies of state fusion and measurement fusion, a time-propagated measurement fusion methodology is proposed. Various aspects of common process noise are investigated with regard to information preservation. Based on the time-propagated measurement fusion methodology, four compression filters are derived. The derived compression filters are efficient for asynchronous sensor fusion and fault detection since they maintain correct statistical information. A new batch Kalman recursion is proposed to show optimality under the time-propagated measurement fusion methodology. A simple simulation evaluates estimation efficiency and characteristics.
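The static step that measurement-fusion methodologies build on is the inverse-covariance combination of two sensor readings of the same quantity. A sketch of that baseline step only; the paper's time-propagated variant, which additionally propagates measurements in time, is not reproduced here.

```python
import numpy as np

def fuse_measurements(z1, R1, z2, R2):
    """Classic measurement fusion of two sensors observing the same
    state: maximum-likelihood (inverse-covariance) combination.
    Returns the fused measurement and its covariance."""
    I1, I2 = np.linalg.inv(R1), np.linalg.inv(R2)
    R = np.linalg.inv(I1 + I2)          # fused covariance (smaller than either)
    z = R @ (I1 @ z1 + I2 @ z2)         # covariance-weighted measurement
    return z, R

# Two equally accurate sensors: the fusion is their average, and the
# fused covariance is halved.
z, R = fuse_measurements(np.array([1., 0.]), np.eye(2),
                         np.array([3., 0.]), np.eye(2))
```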