• Title/Summary/Keyword: Image fusion


Multimodal Medical Image Fusion Based on Two-Scale Decomposer and Detail Preservation Model (이중스케일분해기와 미세정보 보존모델에 기반한 다중 모드 의료영상 융합연구)

  • Zhang, Yingmei;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.11a
    • /
    • pp.655-658
    • /
    • 2021
  • The purpose of multimodal medical image fusion (MMIF) is to integrate images of different modalities, each carrying different details, into a single result image rich in information, which helps doctors accurately diagnose and treat diseased tissue. Motivated by this goal, this paper proposes a novel method based on a two-scale decomposer and a detail preservation model. The first step uses the two-scale decomposer to decompose each source image into energy layers and structure layers, which have the characteristic of detail preservation. Then, the structure tensor operator and the max-abs rule are combined to fuse the structure layers, while the proposed detail preservation model fuses the energy layers, greatly improving image performance. The fused image is obtained by summing the two fused sub-images produced by the above fusion rules. Experiments demonstrate that the proposed method outperforms state-of-the-art fusion methods.
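The two-scale pipeline described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: a plain box blur stands in for their decomposer, and a simple average replaces the proposed detail-preservation model (function names and the window radius `r` are assumptions).

```python
import numpy as np

def box_blur(img, r=2):
    """Simple box filter via edge padding and a windowed mean (illustrative)."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy : r + dy + img.shape[0],
                       r + dx : r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def two_scale_fuse(a, b, r=2):
    # Decompose each source into an energy (base) layer and a
    # structure (detail) layer.
    base_a, base_b = box_blur(a, r), box_blur(b, r)
    det_a, det_b = a - base_a, b - base_b
    # Structure layers: max-abs rule keeps the stronger detail per pixel.
    det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    # Energy layers: a plain average stands in for the paper's
    # detail-preservation model.
    base = 0.5 * (base_a + base_b)
    # Fused image = sum of the two fused sub-images.
    return base + det
```

Note that with identical inputs the decomposition and re-summation recover the input exactly, a quick sanity check for any base/detail split.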

TSDnet: Three-scale Dense Network for Infrared and Visible Image Fusion (TSDnet: 적외선과 가시광선 이미지 융합을 위한 규모-3 밀도망)

  • Zhang, Yingmei;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.656-658
    • /
    • 2022
  • The purpose of infrared and visible image fusion is to integrate images of different modalities, each carrying different details, into a single result image rich in information, which is convenient for high-level computer vision tasks. Considering that many deep networks operate at only a single scale, this paper proposes a novel image fusion method based on a three-scale dense network that preserves the content and key target features of the input images in the fused image. It comprises an encoder, a three-scale block, a fusion strategy and a decoder, and can capture remarkably rich background detail alongside prominent target detail. The encoder extracts three-scale dense features from the source images for the initial fusion. A fusion strategy based on the l1-norm then fuses the features at each scale. Finally, the fused image is reconstructed by the decoding network. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance under subjective observation.
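An l1-norm fusion strategy for deep features is commonly implemented as a per-pixel activity weighting across feature channels. The following is a hedged sketch of that general idea, not the paper's exact rule; array shapes and names are assumptions, and the encoder/decoder are omitted.

```python
import numpy as np

def l1_fusion(feat_a, feat_b, eps=1e-12):
    """Fuse two feature stacks of shape (C, H, W) by l1 activity weighting."""
    # Per-pixel activity: l1-norm across the channel axis.
    act_a = np.abs(feat_a).sum(axis=0)
    act_b = np.abs(feat_b).sum(axis=0)
    # Soft per-pixel weights; eps guards against division by zero.
    w_a = act_a / (act_a + act_b + eps)
    return w_a * feat_a + (1.0 - w_a) * feat_b
```

Pixels where one source's features are more active receive more of that source; this is applied independently at each scale before decoding.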

Multimodal Medical Image Fusion Based on Sugeno's Intuitionistic Fuzzy Sets

  • Tirupal, Talari;Mohan, Bhuma Chandra;Kumar, Samayamantula Srinivas
    • ETRI Journal
    • /
    • v.39 no.2
    • /
    • pp.173-180
    • /
    • 2017
  • Multimodal medical image fusion is the process of retrieving valuable information from medical images. Its primary goal is to combine several images obtained from various sources into a single image suitable for improved diagnosis. Medical images are highly complex, and researchers apply many soft computing methods to process them. Intuitionistic fuzzy sets are particularly appropriate for medical images because such images contain many uncertainties. In this paper, a new method based on Sugeno's intuitionistic fuzzy set (SIFS) is proposed. First, the medical images are converted into Sugeno's intuitionistic fuzzy images (SIFIs), with an exponential intuitionistic fuzzy entropy used to calculate the optimal membership, non-membership, and hesitation degree functions. The two SIFIs are then divided into image blocks, and the blackness and whiteness counts of the blocks are computed. Finally, the fused image is rebuilt by recombining the SIFI image blocks. The efficiency of SIFS in multimodal medical image fusion is demonstrated on several pairs of images, and the results are compared with existing studies in the recent literature.
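The conversion of an image into membership, non-membership and hesitation components can be illustrated with the Sugeno complement. This sketch fixes the parameter λ rather than optimizing it through an exponential intuitionistic fuzzy entropy as the paper does, so it shows only the form of the representation.

```python
import numpy as np

def sugeno_ifs(img, lam=0.5):
    """Convert an image into Sugeno-style (mu, nu, pi) components."""
    # Fuzzify: rescale intensities to [0, 1] as membership degrees.
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # Sugeno complement as the non-membership degree.
    nu = (1.0 - mu) / (1.0 + lam * mu)
    # Hesitation degree: whatever membership and non-membership leave over.
    pi = 1.0 - mu - nu
    return mu, nu, pi
```

By construction mu + nu + pi = 1 at every pixel, and pi is non-negative for lam > 0, which is the defining property of an intuitionistic fuzzy image.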

An Improved Remote Sensing Image Fusion Algorithm Based on IHS Transformation

  • Deng, Chao;Wang, Zhi-heng;Li, Xing-wang;Li, Hui-na;Cavalcante, Charles Casimiro
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1633-1649
    • /
    • 2017
  • In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately account for the texture and spectral information, spatial resolution, or statistical properties of the images, which leads to spectral distortion in the result. Although traditional solutions combine manifold methods, the resulting fusion procedure is complicated and ill-suited to practical operation. In this paper, an improved IHS transformation fusion algorithm based on a local variance weighting scheme is proposed for remote sensing images. First, the local variance of the SPOT image (from the French "Système Probatoire d'Observation de la Terre", an Earth observation system) is calculated using sliding windows of different sizes; the optimal window size is then selected, and the images are normalized with the optimal-window local variance. Second, a power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT images. The I' component is then obtained from the weight, the I component and the matched SPOT images. Finally, the fused image is produced by the inverse Intensity-Hue-Saturation transformation of the I', H and S components. The proposed algorithm has been tested and compared with other well-known image fusion methods from the literature. Simulation results indicate that it obtains a superior fused image according to quantitative fusion evaluation indices.
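The classical IHS fusion that this paper improves upon can be sketched with the additive ("fast") IHS formulation, in which the intensity component is swapped for the panchromatic band; the paper's local-variance weighting is not reproduced here, and taking intensity as the band mean is one common variant.

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Classical fast-IHS pan-sharpening: replace intensity with the pan band."""
    # Intensity as the mean of the three bands (one common IHS variant).
    intensity = rgb.mean(axis=2)
    # Additive form of the inverse transform: shifting every band by
    # (pan - I) is equivalent to substituting pan for I and inverting.
    delta = pan - intensity
    return rgb + delta[..., None]
```

The fused image inherits the pan band's spatial detail while hue and saturation are carried over unchanged, which is exactly why the unweighted version tends to distort the spectrum when pan and intensity differ strongly.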

A HDR Algorithm for Single Image Based on Exposure Fusion Using Variable Gamma Coefficient (가변적 감마 계수를 이용한 노출융합기반 단일영상 HDR기법)

  • Han, Kyu-Phil
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.8
    • /
    • pp.1059-1067
    • /
    • 2021
  • In this paper, an HDR algorithm for a single image is proposed using exposure fusion, which adaptively calculates gamma correction coefficients according to the image distribution. Since typical HDR methods require at least three images of the same scene captured with different exposure values, their main limitation is that they cannot be applied to a single-shot image. Single-image HDR enhancements based on tone mapping and histogram modification were therefore presented recently, but these introduce location-dependent noise due to improper correction. The proposed algorithm instead calculates appropriate gamma coefficients from the distribution of the input image and generates differently exposed images by stretching the dark and bright regions. An HDR image is then reproduced by controlling the exposure fusion weights between the gamma-corrected and original pixels. As a result, the proposed algorithm reduces noise in both flat and edge areas and obtains subjectively superior image quality compared with conventional methods.
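Generating gamma-corrected pseudo-exposures from a single image and fusing them with a well-exposedness weight can be sketched as follows. The fixed gamma values and the Gaussian mid-tone weight are illustrative assumptions; the paper derives its gamma coefficients adaptively from the image distribution.

```python
import numpy as np

def pseudo_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Create gamma-corrected pseudo-exposures from one image in [0, 1]."""
    # gamma < 1 brightens (recovers shadows); gamma > 1 darkens (highlights).
    return [np.clip(img, 0.0, 1.0) ** g for g in gammas]

def well_exposedness(img, sigma=0.2):
    """Gaussian weight favouring mid-tone pixels, a common fusion weight."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(imgs, eps=1e-12):
    # Normalize the per-exposure weights so they sum to one per pixel.
    w = np.stack([well_exposedness(i) for i in imgs])
    w /= w.sum(axis=0, keepdims=True) + eps
    return (w * np.stack(imgs)).sum(axis=0)
```

Each pixel of the result is drawn mostly from whichever pseudo-exposure renders it closest to mid-gray, which is the basic mechanism exposure fusion relies on.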

The Fusion of LiDAR Data and High-Resolution Imagery for Precise Monitoring in Urban Areas (도심의 정밀 모니터링을 위한 LiDAR 자료와 고해상영상의 융합)

  • 강준묵;강영미;이형석
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2004.04a
    • /
    • pp.383-388
    • /
    • 2004
  • Heterogeneous sensor fusion combines data acquired independently by different sensing technologies and is an important technique for constructing 3D spatial information. In particular, varied information can be realized by fusing LiDAR with mobile scanning systems and digital maps, or LiDAR data with high-resolution imagery. This study generates a combined DEM and a digital orthoimage by fusing LiDAR data with a high-resolution image, and uses them to precisely monitor topography, buildings, trees, etc. in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.


Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2253-2272
    • /
    • 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial domain or the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer and deconvolutional layers. Our deep fusion model is efficient and robust, yet demonstrates state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, experimental results on our training dataset show that the network achieves good performance under both subjective visual perception and objective assessment metrics.

SAR Image De-noising Based on Residual Image Fusion and Sparse Representation

  • Ma, Xiaole;Hu, Shaohai;Yang, Dongsheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.7
    • /
    • pp.3620-3637
    • /
    • 2019
  • Since the birth of Synthetic Aperture Radar (SAR), it has been widely used in military and other fields. However, the presence of speckle noise greatly inconveniences subsequent image processing. The continuous development of sparse representation (SR) has opened a new avenue for speckle suppression in SAR images. Although SR de-noising can be effective, the resulting over-smoothing still harms the integrity of the image information. In this paper, a novel SAR image de-noising method based on residual image fusion and sparse representation is proposed. First, similar block groups are obtained with a non-local similar block matching method (NLS-BM). SR de-noising based on adaptive K-means singular value decomposition (K-SVD) is then applied to obtain an initial de-noised image and a residual image. The residual image is processed with the Shearlet transform (ST), and the corresponding de-noising methods are applied to it. Finally, the low-frequency and high-frequency components of the initial de-noised image and the residual image are fused in the ST domain by the relevant fusion rules, and the final de-noised image is recovered by the inverse ST. Experimental results show that the proposed method not only suppresses speckle effectively but also preserves more details and other useful information of the original SAR image, providing more authentic and credible records for follow-up image processing.

An Approximation of Gaussian Pyramid Top Layer for Simplification of Image Pyramid-based Multi Scale Exposure Fusion Algorithm (이미지 피라미드 기반의 다중 노출 영상 융합기법 단순화를 위한 가우시안 피라미드 최상층 근사화)

  • Hwang, Tae Hun;Kim, Jin Heon
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.10
    • /
    • pp.1160-1167
    • /
    • 2019
  • Because of the dynamic range limitation of digital equipment, dark and bright areas cannot both be captured in a single shot. To solve this problem, exposure fusion techniques that merge several images photographed at different exposure levels into one are being studied. Among them, fusion based on Laplacian pyramid decomposition can generate a natural HDR image by fusing images at various scales, but it requires considerable computation time. This paper therefore proposes an approximation technique that achieves similar performance while greatly shortening computation time. The concept of a vanishing point image is introduced for the approximation, and the validity of the proposed approach is verified by comparing computation times and the resulting images.
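Laplacian pyramid fusion, the baseline this paper accelerates, can be sketched with a toy pyramid. The 2x2 average-pool downsampling and nearest-neighbour upsampling here are simplifications of the usual Gaussian-filtered pyramid, and the max-abs/average fusion rules are illustrative choices; image dimensions must be divisible by 2^levels.

```python
import numpy as np

def down(img):
    # 2x2 average-pool downsample (stand-in for Gaussian blur + subsample).
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    # Nearest-neighbour upsample (stand-in for the usual expand step).
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))   # band-pass (Laplacian) layer
        img = low
    pyr.append(img)                 # Gaussian top layer
    return pyr

def pyramid_fuse(a, b, levels=2):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # Max-abs on detail levels, average on the coarse top level.
    fused = [np.where(np.abs(x) >= np.abs(y), x, y)
             for x, y in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    # Collapse: upsample from the top and add each detail layer back.
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = up(out) + lap
    return out
```

The per-level decompose/fuse/collapse loop is where the computation time goes, which is why approximating the top Gaussian layer, as this paper does, directly shortens the pipeline.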

Hyperspectral Image Fusion Algorithm Based on Two-Stage Spectral Unmixing Method (2단계 분광혼합기법 기반의 하이퍼스펙트럴 영상융합 알고리즘)

  • Choi, Jae-Wan;Kim, Dae-Sung;Lee, Byoung-Kil;Yu, Ki-Yun;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.4
    • /
    • pp.295-304
    • /
    • 2006
  • Image fusion is defined as producing a new image by merging two or more images with special algorithms. In remote sensing, it typically means fusing a multispectral low-resolution image with a panchromatic high-resolution image. Generally, hyperspectral image fusion relies either on fusion techniques for multispectral imagery or on a spectral unmixing model; the former may distort spectral information, while the latter requires endmember or other additional data and does not preserve spatial information well. This study proposes a new algorithm based on a two-stage spectral unmixing model that preserves the spectral information of the hyperspectral image. The proposed fusion technique is implemented and tested using Hyperion and ALI images, and is shown to maintain more spatial and spectral information than the PCA/GS fusion algorithms.