• Title/Abstract/Keyword: image-level fusion

84 search results (processing time: 0.041 s)

Finger Vein Recognition based on Matching Score-Level Fusion of Gabor Features

  • Lu, Yu;Yoon, Sook;Park, Dong Sun
    • 한국통신학회논문지 / Vol. 38A, No. 2 / pp.174-182 / 2013
  • Most fusion-based finger vein recognition methods fuse different features or matching scores from more than one trait to improve performance. To avoid both "the curse of dimensionality" and the additional running time of extracting multiple features, this paper proposes a finger vein recognition technique based on matching score-level fusion of a single trait. To enhance finger vein image quality, the contrast-limited adaptive histogram equalization (CLAHE) method is applied, improving the local contrast of the normalized image after ROI detection. Gabor features are then extracted from eight channels of a Gabor filter bank. Instead of using these features for recognition directly, we analyze the contribution of the Gabor feature from each channel and apply a weighted matching score-level fusion rule to obtain the final matching score used for the recognition decision. Experimental results demonstrate that CLAHE effectively enhances finger vein image quality and that the proposed matching score-level fusion yields better recognition performance.
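As a sketch of the fusion step described above, weighted matching score-level fusion reduces to a normalized weighted sum of the per-channel matching scores. The weights and scores below are illustrative values, not the paper's estimated channel contributions:

```python
import numpy as np

def fuse_matching_scores(channel_scores, channel_weights):
    """Weighted score-level fusion across Gabor channels.

    channel_scores  : one matching score per Gabor channel
    channel_weights : non-negative weights reflecting each channel's
                      discriminative contribution
    Returns the fused matching score used for the final decision.
    """
    scores = np.asarray(channel_scores, dtype=float)
    weights = np.asarray(channel_weights, dtype=float)
    weights = weights / weights.sum()          # normalize weights to sum to 1
    return float(np.dot(weights, scores))

# Example: eight Gabor channels, with larger weights on more reliable channels
scores = [0.8, 0.7, 0.9, 0.6, 0.75, 0.85, 0.65, 0.7]
weights = [2, 1, 3, 1, 1, 2, 1, 1]
fused = fuse_matching_scores(scores, weights)
```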

Multi-modality image fusion via generalized Riesz-wavelet transformation

  • Jin, Bo;Jing, Zhongliang;Pan, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 11 / pp.4118-4136 / 2014
  • To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images. The proposed method can capture directional image structure arbitrarily by exploiting a suitable parameterized fusion model and additional structural information. Its fusion patterns are controlled by a heuristic fusion model based on image phase and coherence features, which explores and preserves structural information efficiently and consistently. A performance analysis of the proposed method on real-world images demonstrates that it is competitive with state-of-the-art fusion methods, especially in combining structural information.

Feasibility study of improved median filtering in PET/MR fusion images with parallel imaging using generalized autocalibrating partially parallel acquisition

  • Chanrok Park;Jae-Young Kim;Chang-Hyeon An;Youngjin Lee
    • Nuclear Engineering and Technology / Vol. 55, No. 1 / pp.222-228 / 2023
  • This study analyzed the applicability of an improved median filter to positron emission tomography (PET)/magnetic resonance (MR) fusion images based on parallel imaging using generalized autocalibrating partially parallel acquisition (GRAPPA). A PET/MR fusion imaging system with a 3.0 T magnetic field and the 18F radioisotope was used. An improved median filter, which can set the mask for the median value more efficiently than before, was modeled and applied to the acquired images. The contrast-to-noise ratio (CNR) and coefficient of variation (COV) were calculated as quantitative parameters of the noise level, and no-reference-based evaluation parameters were used to analyze overall image quality. The CNR and COV values of the PET/MR fusion images processed with the improved median filter improved by approximately 3.32 and 2.19 times on average, respectively, compared with the noisy images, and the no-reference-based evaluation results showed a similar trend. In conclusion, the image quality degradation of PET/MR fusion images that occurs when GRAPPA is used to shorten scan time can be compensated for by applying the improved median filter.
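The noise metrics used in the evaluation (CNR and COV) and a median filter can be sketched as follows; note this is a generic k x k median filter, not the paper's improved mask-selection variant:

```python
import numpy as np

def median_filter(img, k=3):
    """Plain k x k median filter (edges handled by reflection padding)."""
    p = k // 2
    padded = np.pad(img, p, mode="reflect")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

def cov(roi):
    """Coefficient of variation of an ROI (noise relative to mean signal)."""
    return roi.std() / roi.mean()
```

On a noisy image, the filtered result should show a lower COV (and a higher CNR for a fixed signal/background pair), which is the direction of improvement the study reports.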

Development and Implementation of Multi-source Remote Sensing Imagery Fusion Based on PCI Geomatica

  • Yu, ZENG;Jixian, ZHANG;Qin, YAN;Pinglin, QIAO
    • 대한원격탐사학회 학술대회논문집 / Proceedings of ACRS 2003 ISRS / pp.1334-1336 / 2003
  • Based on a comprehensive analysis and summary of the image fusion algorithms provided by the PCI Geomatica software, this paper identifies deficiencies in its image fusion processing functions. These limitations can be remedied by further developing PCI Geomatica on the user's side; five effective algorithms could be added to the software. The paper also gives a detailed description of how to customize and extend PCI Geomatica using Microsoft Visual C++ 6.0, the PCI SDK Kit, and the GDB technique. In this way, the remote sensing imagery fusion functions of PCI Geomatica can be extended.


Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 5 / pp.2253-2272 / 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial or the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model is efficient and robust, yet demonstrates state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, experimental results on our training dataset show that our network achieves good performance in both subjective visual perception and objective assessment metrics.
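For contrast with the learned approach above, a classical pixel-level multi-focus fusion baseline of the kind the abstract refers to can be sketched with a hand-crafted focus measure (local variance); this is not the proposed network:

```python
import numpy as np

def variance_focus_fusion(img_a, img_b, k=7):
    """Pixel-level multi-focus fusion baseline: at each pixel, keep the
    source whose local variance (a simple focus measure) is higher.
    An end-to-end network replaces this hand-crafted selection rule."""
    p = k // 2

    def local_var(img):
        padded = np.pad(img.astype(float), p, mode="reflect")
        h, w = img.shape
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + k, j:j + k].var()
        return out

    mask = local_var(img_a) >= local_var(img_b)
    return np.where(mask, img_a, img_b)
```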

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector

  • 이상훈
    • 대한원격탐사학회지 / Vol. 19, No. 4 / pp.329-339 / 2003
  • This study proposes a decision-level image fusion technique for unsupervised classification of images collected by sensors with different characteristics. The proposed method first applies an unsupervised hierarchical clustering classification, based on spatially extended segmentation, independently to the image from each sensor, and then fuses the per-sensor classification results using the fuzzy class vectors of the resulting segmented regions. The fuzzy class vector is treated as an indicator vector expressing the probability that a segmented region belongs to each class, and maximum-likelihood estimates of the associated parameters are computed iteratively by the Expectation-Maximization (EM) algorithm. Because segmentation and classification are performed separately for each sensor or band and the region-level results are then combined through the fuzzy class vectors, the method does not require the high pixel-to-pixel spatial registration accuracy between images from different sensors that pixel-level fusion techniques for multisensor image classification generally demand. Applied to multispectral SPOT and AIRSAR imagery observed over the northwestern area of Jeollabuk-do, Korea, the proposed fusion technique produced land-cover classifications that combined the information from the different sensors more appropriately than fusion based on the stacked-vector approach.
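The region-level decision fusion can be illustrated with a simplified combination of fuzzy class vectors; the sketch below uses a normalized product rule in place of the paper's EM-based maximum-likelihood estimation:

```python
import numpy as np

def fuse_fuzzy_class_vectors(vectors):
    """Decision-level fusion of per-sensor fuzzy class vectors.

    vectors : list of (n_classes,) arrays; each entry gives one sensor's
              class-membership probabilities for the same segmented region.
    Combines them with a normalized product rule (sensors treated as
    independent evidence) and returns the fused class-probability vector.
    """
    fused = np.ones_like(np.asarray(vectors[0], dtype=float))
    for v in vectors:
        fused *= np.asarray(v, dtype=float)
    return fused / fused.sum()

# Two sensors voting over three land-cover classes for one region
optical = [0.6, 0.3, 0.1]
radar   = [0.5, 0.4, 0.1]
fused = fuse_fuzzy_class_vectors([optical, radar])
label = int(np.argmax(fused))   # class index with highest fused membership
```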

Wavelet-based Fusion of Optical and Radar Image using Gradient and Variance

  • 예철수
    • 대한원격탐사학회지 / Vol. 26, No. 5 / pp.581-591 / 2010
  • This study proposes a wavelet-based image fusion algorithm, which has the advantage of signal analysis in both the frequency and spatial domains. The algorithm compares the relative magnitudes of the radar and optical image signals: where the radar signal is relatively large, the radar signal is assigned to the fused image; where it is small, the fused signal is determined as a weighted sum of the radar and optical signals. The fusion rule simultaneously considers the relative signal ratio of the two images, the image gradient, and the local variance. In experiments using Ikonos and TerraSAR-X satellite images, the proposed method yielded better fusion results in terms of entropy, image clarity, spatial frequency, and speckle index than the conventional method, which assigns only the stronger radar signal to the fused image.
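The fusion rule described above can be sketched as follows; here `alpha` stands in for the gradient- and variance-derived weights, which are simplified to a constant:

```python
import numpy as np

def fuse_coefficients(radar, optical, alpha=0.5):
    """Coefficient-level fusion rule sketched from the abstract:
    where the radar response dominates in magnitude, take the radar
    coefficient; elsewhere, blend radar and optical with weight alpha."""
    radar = np.asarray(radar, dtype=float)
    optical = np.asarray(optical, dtype=float)
    radar_dominant = np.abs(radar) > np.abs(optical)
    blended = alpha * radar + (1.0 - alpha) * optical
    return np.where(radar_dominant, radar, blended)
```

In the paper itself, the per-pixel weights come from the signal ratio, gradient, and local variance rather than a single constant.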

TSDnet: Three-scale Dense Network for Infrared and Visible Image Fusion

  • 장영매;이효종
    • 한국정보처리학회 학술대회논문집 / 2022 Fall Conference (추계학술발표대회) / pp.656-658 / 2022
  • The purpose of infrared and visible image fusion is to integrate images of different modalities, with different details, into a single result image with rich information that is convenient for high-level computer vision tasks. Considering that many deep networks operate at only a single scale, this paper proposes a novel image fusion method based on a three-scale dense network that preserves the content and key target features of the input images in the fused image. It comprises an encoder, a three-scale block, a fusion strategy, and a decoder, and can capture remarkably rich background details and prominent target details. The encoder extracts three-scale dense features from the source images for the initial fusion. A fusion strategy based on the l1-norm then fuses the features of different scales. Finally, the fused image is reconstructed by the decoding network. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance in subjective observation.
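l1-norm fusion strategies of this kind are commonly implemented by weighting each source's feature maps by their per-position l1 activity; a minimal sketch, assuming feature maps of shape (C, H, W):

```python
import numpy as np

def l1_norm_fusion(features_a, features_b, eps=1e-12):
    """l1-norm fusion strategy: weight each source's feature maps by
    their per-position l1 activity, then take the weighted sum.

    features_a, features_b : (C, H, W) feature maps from the two sources.
    """
    act_a = np.abs(features_a).sum(axis=0)   # (H, W) l1 activity maps
    act_b = np.abs(features_b).sum(axis=0)
    w_a = act_a / (act_a + act_b + eps)      # normalized per-position weights
    w_b = 1.0 - w_a
    return w_a * features_a + w_b * features_b
```

Positions where one source has stronger feature activity are dominated by that source, which is how the salient infrared targets survive into the fused image.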

AUTOMATIC BUILDING EXTRACTION BASED ON MULTI-SOURCE DATA FUSION

  • Lu, Yi Hui;Trinder, John
    • 대한원격탐사학회 학술대회논문집 / Proceedings of ACRS 2003 ISRS / pp.248-250 / 2003
  • An automatic approach and strategy for extracting building information from aerial images using combined image analysis and interpretation techniques is described in this paper. A dense DSM is obtained by stereo image matching. Multi-band classification, the DSM, texture segmentation, and the Normalised Difference Vegetation Index (NDVI) are used to reveal areas of interest for buildings. Then, based on the derived approximate building areas, a shape modelling algorithm based on the level set formulation of curve and surface motion is used to delineate the building boundaries precisely. Data fusion, based on the Dempster-Shafer technique, is used to interpret knowledge from several data sources of the same region simultaneously, finding the intersection of propositions on the extracted information derived from the datasets, together with their associated probabilities. A number of test areas, which include buildings of different sizes, shapes, and roof colours, have been investigated. The tests are encouraging and demonstrate that the system is effective for building extraction and for determining more accurate elevations of the terrain surface.
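The Dempster-Shafer combination underlying the data fusion step can be sketched as Dempster's rule for two mass functions; the building/non-building masses below are illustrative values, not the paper's:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 : dicts mapping frozenset hypotheses to masses (each sums to 1).
    Returns the combined mass function, renormalized by 1 - K, where K is
    the total mass assigned to conflicting (disjoint) hypothesis pairs.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("sources are completely conflicting")
    scale = 1.0 - conflict
    return {h: m / scale for h, m in combined.items()}

# Two sources of evidence about whether a region is a building (B) or not (N)
B, N = frozenset("B"), frozenset("N")
m_image = {B: 0.7, N: 0.3}   # evidence from image classification
m_dsm   = {B: 0.6, N: 0.4}   # evidence from the DSM
fused = dempster_combine(m_image, m_dsm)
```

When both sources lean toward "building", the combined belief in B exceeds either source's alone, which is the reinforcement effect the fusion step relies on.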


High-performance of Deep Learning Colorization with Wavelet Fusion

  • 김영백;최현;조중휘
    • 대한임베디드공학회논문지 / Vol. 13, No. 6 / pp.313-319 / 2018
  • We propose a post-processing algorithm to improve the quality of the RGB image generated by deep-learning-based colorization from the gray-scale image of an infrared camera. Wavelet fusion is used to generate a new luminance component from the luminance component of the RGB image produced by the deep learning model and the luminance component of the infrared camera image. Applying the proposed algorithm to RGB images generated by two deep learning models, SegNet and DCGAN, increases PSNR for all experimental images. For the SegNet model, the average PSNR improves by 1.3906 dB at level 1 of the Haar wavelet; for the DCGAN model, PSNR improves by 0.0759 dB on average at level 5 of the Daubechies wavelet. The post-processing also emphasizes edge components and improves visibility.
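The PSNR metric used in the evaluation can be computed as follows (peak value assumed to be 255 for 8-bit images):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    processed image; higher means the processed image is closer to the
    reference."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A gain such as the reported 1.3906 dB corresponds to a measurable reduction in mean squared error against the reference image.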