• Title/Abstract/Keyword: image fusion

Search results: 877 items (processing time: 0.038 s)

이중스케일분해기와 미세정보 보존모델에 기반한 다중 모드 의료영상 융합연구 (Multimodal Medical Image Fusion Based on Two-Scale Decomposer and Detail Preservation Model)

  • 장영매;이효종, Proceedings of the Annual Conference of the Korea Information Processing Society, 2021 Fall Conference, pp. 655-658, 2021
  • The purpose of multimodal medical image fusion (MMIF) is to integrate images of different modalities, each carrying different details, into a single result image with rich information, which helps doctors accurately diagnose and treat diseased tissue. Motivated by this goal, this paper proposes a novel method based on a two-scale decomposer and a detail preservation model. The first step uses the two-scale decomposer to decompose each source image into energy layers and structure layers, which have the characteristic of detail preservation. A structure tensor operator and a max-abs rule are then combined to fuse the structure layers, while the proposed detail preservation model fuses the energy layers, greatly improving image quality. The fused image is obtained by summing the two fused sub-images produced by the above fusion rules. Experiments demonstrate that the proposed method achieves superior performance compared with state-of-the-art fusion methods.
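
As a rough illustration of the two-scale pipeline described above, the sketch below decomposes each source with a simple box filter (a stand-in for the paper's decomposer), fuses the structure layers with the max-abs rule, and averages the energy layers in place of the paper's detail preservation model; all function names are ours, not the paper's.

```python
import numpy as np

def box_blur(img, r=7):
    # Separable box filter used here as a minimal two-scale decomposer.
    k = np.ones(r) / r
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, tmp)

def two_scale_fuse(a, b):
    # Energy (base) layers and structure (detail) layers of each source.
    ea, eb = box_blur(a), box_blur(b)
    sa, sb = a - ea, b - eb
    # Structure layers: max-abs rule (keep the coefficient with larger magnitude).
    s_f = np.where(np.abs(sa) >= np.abs(sb), sa, sb)
    # Energy layers: plain averaging, standing in for the detail preservation model.
    e_f = 0.5 * (ea + eb)
    # Fused image = sum of the two fused sub-images.
    return e_f + s_f
```

Note that fusing an image with itself reconstructs it exactly, since the decomposition is additive.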

TSDnet: 적외선과 가시광선 이미지 융합을 위한 규모-3 밀도망 (TSDnet: Three-scale Dense Network for Infrared and Visible Image Fusion)

  • 장영매;이효종, Proceedings of the Annual Conference of the Korea Information Processing Society, 2022 Fall Conference, pp. 656-658, 2022
  • The purpose of infrared and visible image fusion is to integrate images of different modalities, each carrying different details, into a single result image with rich information that is convenient for high-level computer vision tasks. Considering that many deep networks operate at only a single scale, this paper proposes a novel image fusion method based on a three-scale dense network that preserves the content and key target features of the input images in the fused image. It comprises an encoder, a three-scale block, a fusion strategy, and a decoder, and can capture remarkably rich background details and prominent target details. The encoder extracts three-scale dense features from the source images for the initial fusion. An l1-norm fusion strategy then fuses the features of different scales. Finally, the fused image is reconstructed by the decoding network. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance under subjective observation.
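
The l1-norm fusion strategy mentioned above can be sketched on encoder feature maps as follows: the activity level at each spatial position is the l1-norm across channels, and the two feature tensors are blended with the resulting per-pixel weights. This is a generic sketch of the rule, not the paper's exact formulation.

```python
import numpy as np

def l1_fusion(feat_a, feat_b, eps=1e-12):
    # feat_*: (C, H, W) feature maps produced by the encoder for each source.
    # Activity level = l1-norm across channels at each spatial position.
    act_a = np.abs(feat_a).sum(axis=0)
    act_b = np.abs(feat_b).sum(axis=0)
    # Per-pixel weight proportional to each source's activity.
    w_a = act_a / (act_a + act_b + eps)
    # Weighted blend of the two feature tensors.
    return w_a[None] * feat_a + (1.0 - w_a)[None] * feat_b
```

If one source contributes no activity at a position, the fused features there come entirely from the other source.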

Multimodal Medical Image Fusion Based on Sugeno's Intuitionistic Fuzzy Sets

  • Tirupal, Talari; Mohan, Bhuma Chandra; Kumar, Samayamantula Srinivas, ETRI Journal, Vol. 39, No. 2, pp. 173-180, 2017
  • Multimodal medical image fusion is the process of retrieving valuable information from medical images. Its primary goal is to combine several images obtained from various sources into a single image suitable for improved diagnosis. Medical images are highly complex, and researchers apply many soft-computing methods to process them. Intuitionistic fuzzy sets are well suited to medical images because such images contain many uncertainties. In this paper, a new method based on Sugeno's intuitionistic fuzzy set (SIFS) is proposed. First, the medical images are converted into Sugeno's intuitionistic fuzzy images (SIFIs). An exponential intuitionistic fuzzy entropy determines the optimum values of the membership, non-membership, and hesitation degree functions. The two SIFIs are then divided into image blocks, and the counts of blackness and whiteness of the blocks are calculated. Finally, the fused image is rebuilt by recombining the SIFI image blocks. The efficiency of SIFS for multimodal medical image fusion is demonstrated on several pairs of images, and the results are compared with recent studies in the literature.
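
The fuzzification step can be illustrated with the standard Sugeno-class non-membership function. The λ below is a fixed illustrative value; the paper instead selects the parameters by optimizing an exponential intuitionistic fuzzy entropy.

```python
import numpy as np

def sugeno_ifs(img, lam=0.8):
    # img: grayscale array; membership = min-max normalized intensity.
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # Sugeno-class non-membership: nu = (1 - mu) / (1 + lam * mu).
    nu = (1.0 - mu) / (1.0 + lam * mu)
    # Hesitation degree completes the intuitionistic triple (mu + nu + pi = 1).
    pi = 1.0 - mu - nu
    return mu, nu, pi
```

For λ ≥ 0 the hesitation degree is non-negative everywhere, so the triple is a valid intuitionistic fuzzy description of the image.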

An Improved Remote Sensing Image Fusion Algorithm Based on IHS Transformation

  • Deng, Chao; Wang, Zhi-heng; Li, Xing-wang; Li, Hui-na; Cavalcante, Charles Casimiro, KSII Transactions on Internet and Information Systems (TIIS), Vol. 11, No. 3, pp. 1633-1649, 2017
  • In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately account for the texture, spectral information, spatial resolution, or statistical properties of the images, which leads to spectral distortion in the result. Although traditional solutions combine manifold methods, the resulting fusion procedure is complicated and ill-suited to practical operation. In this paper, an improved IHS-transformation fusion algorithm based on a local-variance weighting scheme is proposed for remote sensing images. First, the local variance of the SPOT (from the French "Système Probatoire d'Observation de la Terre", i.e., Earth observation system) image is calculated using sliding windows of different sizes; the optimal window size is selected, and the images are normalized with the optimal-window local variance. Second, a power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT images. The I' component is then obtained from the weight, the I component, and the matched SPOT images. Finally, the fused image is produced by the inverse Intensity-Hue-Saturation transformation of the I', H, and S components. The proposed algorithm has been tested and compared with other well-known image fusion methods. Simulation results indicate that it obtains a superior fused image according to quantitative fusion evaluation indices.
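
For orientation, a minimal "fast IHS" pansharpening sketch is shown below: intensity is taken as the band mean, the panchromatic band is mean/std-matched to it, and the resulting detail is injected into every band. The paper's local-variance weighting of the I component is omitted here, so this shows only the baseline the paper improves on.

```python
import numpy as np

def ihs_pansharpen(rgb, pan):
    # rgb: (H, W, 3) multispectral image; pan: (H, W) high-resolution band.
    # Both are assumed co-registered and scaled to [0, 1].
    intensity = rgb.mean(axis=2)
    # Histogram-match the pan band to the intensity component (mean/std).
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    # Fast IHS fusion: inject the pan detail (pan_m - I) into every band.
    return np.clip(rgb + (pan_m - intensity)[..., None], 0.0, 1.0)
```

When the pan band already equals the intensity component, the detail term vanishes and the multispectral image is returned unchanged.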

가변적 감마 계수를 이용한 노출융합기반 단일영상 HDR기법 (A HDR Algorithm for Single Image Based on Exposure Fusion Using Variable Gamma Coefficient)

  • 한규필, Journal of Korea Multimedia Society, Vol. 24, No. 8, pp. 1059-1067, 2021
  • In this paper, an HDR algorithm for a single image is proposed based on exposure fusion, which adaptively calculates gamma-correction coefficients according to the image distribution. Since typical HDR methods require at least three images of the same scene with different exposure values, their main limitation is that they cannot be applied to a single-shot image. HDR enhancements for a single image using tone mapping and histogram modification have recently been presented, but these create location-specific noise due to improper corrections. The proposed algorithm therefore calculates appropriate gamma coefficients according to the distribution of the input image and generates differently exposed images by stretching the dark and bright regions. An HDR image is then reproduced by controlling the exposure-fusion weights between the gamma-corrected and the original pixels. As a result, the proposed algorithm reduces noise in both flat and edge areas and obtains subjectively superior image quality compared with conventional methods.
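
The idea of synthesizing several exposures from one shot via gamma correction and fusing them can be sketched as follows. The fixed gamma values and the Gaussian well-exposedness weight are illustrative choices; the paper derives the gamma coefficients adaptively from the image distribution.

```python
import numpy as np

def single_image_hdr(img, gammas=(0.5, 1.0, 2.0), sigma=0.2):
    # img: grayscale in [0, 1]. gamma < 1 brightens dark regions,
    # gamma > 1 recovers detail in bright regions.
    stack = [img ** g for g in gammas]
    # Well-exposedness weight: prefer pixels near mid-gray in each pseudo-exposure.
    ws = [np.exp(-((s - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-12 for s in stack]
    total = sum(ws)
    # Per-pixel weighted average of the pseudo-exposures.
    return sum(w * s for w, s in zip(ws, stack)) / total
```

Because the result is a convex combination of values in [0, 1], it stays within the valid intensity range.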

도심의 정밀 모니터링을 위한 LiDAR 자료와 고해상영상의 융합 (The Fusion of LiDAR Data and High-Resolution Images for Precise Monitoring in Urban Areas)

  • 강준묵;강영미;이형석, Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 2004 Spring Conference, pp. 383-388, 2004
  • Multi-sensor fusion combines data obtained independently by different sensing technologies and is an important technology for constructing 3D spatial information. In particular, diverse information can be produced by fusing LiDAR with mobile scanning systems and digital maps, or LiDAR data with high-resolution imagery. This study generates a combined DEM and a digital orthoimage by fusing LiDAR data with high-resolution imagery, and uses them to precisely monitor topography, buildings, trees, and other features in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.


SAR Image De-noising Based on Residual Image Fusion and Sparse Representation

  • Ma, Xiaole; Hu, Shaohai; Yang, Dongsheng, KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 7, pp. 3620-3637, 2019
  • Since the birth of Synthetic Aperture Radar (SAR), it has been widely used in the military field and elsewhere. However, speckle noise causes considerable inconvenience for subsequent image processing. The continuous development of sparse representation (SR) has opened a new field for speckle suppression in SAR images. Although SR de-noising can be effective, the over-smoothing phenomenon still harms the integrity of the image information. In this paper, a novel SAR image de-noising method based on residual image fusion and sparse representation is proposed. First, similar block groups are obtained by non-local similar block matching (NLS-BM). SR de-noising based on adaptive K-means singular value decomposition (K-SVD) is then applied to obtain the initial de-noised image and the residual image. The residual image is processed by the Shearlet transform (ST), and the corresponding de-noising methods are applied to it. Finally, the low-frequency and high-frequency components of the initial de-noised image and the residual image are fused in the ST domain by the relevant fusion rules, and the final de-noised image is recovered by the inverse ST. Experimental results show that the proposed method not only suppresses speckle effectively but also preserves more details and other useful information of the original SAR image, providing more authentic and credible records for follow-up image processing.
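
A drastically simplified version of the residual-fusion idea is sketched below: the initial de-noiser is a 3x3 box blur (standing in for K-SVD sparse coding), and the residual is cleaned by soft-thresholding (standing in for Shearlet-domain fusion) before the recovered detail is added back.

```python
import numpy as np

def residual_fusion_denoise(noisy, thr=0.1):
    # Initial de-noising: 3x3 box blur with edge padding
    # (a stand-in for sparse-representation de-noising).
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(noisy, 1, mode="edge")
    h, w = noisy.shape
    base = sum(pad[i:i + h, j:j + w] * k[i, j]
               for i in range(3) for j in range(3))
    # Residual image = what the initial de-noiser removed.
    residual = noisy - base
    # Recover useful detail from the residual by soft-thresholding
    # (a stand-in for Shearlet-domain residual fusion).
    detail = np.sign(residual) * np.maximum(np.abs(residual) - thr, 0.0)
    return base + detail
```

The point of the residual step is that detail lost by the initial de-noiser can be partially recovered instead of being discarded with the noise.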

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping; Qin, Zheng; Wang, Guolong; Zhang, Huidi; Huang, Kai; Ye, Shuxiong, KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 5, pp. 2253-2272, 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial or the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model is efficient and robust, yet demonstrates state-of-the-art fusion quality. We explore different parameter settings to trade off performance against speed. Moreover, experimental results on our training dataset show that the network achieves good performance under both subjective visual perception and objective assessment metrics.

이미지 피라미드 기반의 다중 노출 영상 융합기법 단순화를 위한 가우시안 피라미드 최상층 근사화 (An Approximation of Gaussian Pyramid Top Layer for Simplification of Image Pyramid-based Multi Scale Exposure Fusion Algorithm)

  • 황태훈;김진헌, Journal of Korea Multimedia Society, Vol. 22, No. 10, pp. 1160-1167, 2019
  • Because of the dynamic-range limitation of digital equipment, dark and bright areas cannot both be captured properly in a single shot. To solve this problem, exposure fusion techniques that merge multiple images photographed at different exposure levels into one are being studied. Among them, fusion based on Laplacian pyramid decomposition can generate a natural HDR image by fusing images at various scales, but it requires substantial computation time. This paper therefore proposes an approximation technique that achieves similar performance while greatly shortening computation time. The concept of a vanishing point image is introduced for the approximation, and the validity of the proposed approach is verified by comparing computation times and resulting images.
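
A minimal Laplacian-pyramid fusion sketch is given below, using 2x2 block averaging as the reduce step and nearest-neighbour expansion. The top Gaussian layer, the layer the paper replaces with a cheap approximation, is fused here by plain averaging; the Laplacian layers use a max-abs rule.

```python
import numpy as np

def down(img):
    # 2x2 block averaging as a minimal Gaussian-pyramid reduce step.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    # Nearest-neighbour expansion back to `shape`.
    out = img.repeat(2, axis=0).repeat(2, axis=1)
    return out[:shape[0], :shape[1]]

def pyramid_fuse(a, b, levels=3):
    ga, gb = [a], [b]
    for _ in range(levels):
        ga.append(down(ga[-1]))
        gb.append(down(gb[-1]))
    # Top Gaussian layer: plain averaging (the layer the paper approximates).
    fused = 0.5 * (ga[-1] + gb[-1])
    for l in range(levels - 1, -1, -1):
        # Laplacian layer = Gaussian level minus expanded coarser level.
        la = ga[l] - up(ga[l + 1], ga[l].shape)
        lb = gb[l] - up(gb[l + 1], gb[l].shape)
        # Max-abs selection keeps the stronger detail at each pixel.
        fused = up(fused, ga[l].shape) + np.where(np.abs(la) >= np.abs(lb), la, lb)
    return fused
```

Because the Laplacian decomposition is exact for this reduce/expand pair, fusing an image with itself reconstructs it exactly (for dimensions divisible by 2^levels).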

2단계 분광혼합기법 기반의 하이퍼스펙트럴 영상융합 알고리즘 (Hyperspectral Image Fusion Algorithm Based on Two-Stage Spectral Unmixing Method)

  • 최재완;김대성;이병길;유기윤;김용일, Korean Journal of Remote Sensing, Vol. 22, No. 4, pp. 295-304, 2006
  • Image fusion means "combining two or more different images through a specific algorithm to produce a new image"; in remote sensing it usually refers to fusing a low-spatial-resolution multispectral image with a high-spatial-resolution panchromatic image to produce a high-spatial-resolution multispectral image. Hyperspectral image fusion is generally performed either with existing multispectral fusion techniques or with spectral unmixing methods. The former tends to lose spectral information, while the latter requires endmember information or auxiliary data and yields relatively inaccurate spatial information in the result. This study therefore proposes an image fusion algorithm based on a two-stage spectral unmixing method designed to preserve the spectral characteristics of hyperspectral imagery, and evaluates it on actual Hyperion and ALI images. The results show that images fused by the proposed algorithm maintain higher spatial and spectral resolution than the PCA and GS fusion techniques.
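
The unmixing step at the core of such fusion can be sketched as a per-pixel least-squares abundance estimate given known endmember spectra. Sum-to-one and non-negativity constraints, and the second (sharpening) stage of the paper's method, are omitted from this minimal sketch.

```python
import numpy as np

def unmix(pixels, endmembers):
    # pixels: (N, B) spectra; endmembers: (M, B) known spectral signatures.
    # Solve endmembers.T @ abundances.T = pixels.T in the least-squares sense.
    A, _, _, _ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    return A.T  # (N, M) abundance fractions per pixel

def remix(abundances, endmembers):
    # Reconstruct spectra from (possibly spatially sharpened) abundances.
    return abundances @ endmembers
```

In a two-stage scheme, the abundances from the low-resolution image would be spatially refined using the high-resolution image before `remix` reconstructs the fused spectra.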