• Title/Summary/Keyword: image fusion

Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering

  • Feng, Xin;Hu, Kaiqun
    • Journal of Information Processing Systems
    • /
    • v.15 no.6
    • /
    • pp.1296-1305
    • /
    • 2019
  • To address the poor noise suppression and the frequent loss of edge contours and detail in current fusion methods, an infrared and visible image fusion method based on variational multiscale decomposition is proposed. First, each source image is decomposed by variational multiscale decomposition into a texture component and a structural component. A guided filter is used to fuse the texture components, while the structural components are fused with weights that combine phase consistency, sharpness, and brightness information. Finally, the fused texture and structural components are added together to obtain the final fused image. Experimental results show that the proposed method is highly robust to noise and also achieves better fusion quality. A simplified sketch of this structure-texture fusion scheme is given below.
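
The following is a minimal, hedged sketch of the structure-texture fusion idea described in this abstract, not the authors' implementation: plain Gaussian smoothing stands in for the variational multiscale decomposition, a box-filter guided filter refines a max-absolute texture weight, and a local gradient-energy weight stands in for the paper's phase-consistency/sharpness/brightness measure. All function names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def guided_filter(guide, src, radius=7, eps=1e-3):
    """Classic box-filter guided filter (grayscale guide)."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_ir_visible(ir, vis, sigma=3.0):
    ir, vis = ir.astype(float), vis.astype(float)
    # Stand-in decomposition: "structure" = smoothed image, "texture" = residual.
    s_ir, s_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    t_ir, t_vis = ir - s_ir, vis - s_vis
    # Texture fusion: binary max-absolute weight refined by a guided filter.
    w = (np.abs(t_ir) >= np.abs(t_vis)).astype(float)
    w = np.clip(guided_filter(ir, w), 0.0, 1.0)
    texture = w * t_ir + (1.0 - w) * t_vis
    # Structure fusion: local gradient energy as a crude saliency weight.
    gy_ir, gx_ir = np.gradient(s_ir)
    gy_vis, gx_vis = np.gradient(s_vis)
    e_ir = uniform_filter(gx_ir**2 + gy_ir**2, 9)
    e_vis = uniform_filter(gx_vis**2 + gy_vis**2, 9)
    ws = e_ir / (e_ir + e_vis + 1e-12)
    structure = ws * s_ir + (1.0 - ws) * s_vis
    return structure + texture
```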

FUSESHARP: A MULTI-IMAGE FOCUS FUSION METHOD USING DISCRETE WAVELET TRANSFORM AND UNSHARP MASKING

  • GARGI TRIVEDI;RAJESH SANGHAVI
    • Journal of Applied Mathematics & Informatics
    • /
    • v.41 no.5
    • /
    • pp.1115-1128
    • /
    • 2023
  • In this paper, a novel hybrid method for multi-focus image fusion is proposed. The method combines the advantages of wavelet transform-based methods and focus-measure-based methods to achieve an improved fusion result. The input images are first decomposed into different frequency sub-bands using the discrete wavelet transform (DWT). The focus measure of each sub-band is then calculated using the Laplacian of Gaussian (LoG) operator, and the sub-band with the highest focus measure is selected as the focused sub-band. The focused sub-band is sharpened with an unsharp masking filter to preserve the details in the focused part of the image. Finally, the sharpened focused sub-bands from all input images are fused using the maximum intensity fusion rule to preserve the important information from all focused regions. The proposed method has been evaluated on standard multi-focus image fusion datasets and has shown promising results compared with existing methods. A simplified sketch of this pipeline is given below.
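
A minimal sketch of the DWT / LoG-focus / unsharp-masking pipeline outlined above, assuming PyWavelets and SciPy are available. The per-pixel max-|LoG| selection and the unsharp-mask parameters are illustrative simplifications, not the paper's exact rules.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_laplace, gaussian_filter

def unsharp(band, sigma=1.0, amount=0.5):
    """Unsharp masking: add back a scaled high-pass residual."""
    return band + amount * (band - gaussian_filter(band, sigma))

def fuse_multifocus(img_a, img_b, wavelet="db2"):
    ca, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(float), wavelet)
    cb, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(float), wavelet)
    fused = []
    for band_a, band_b in zip((ca, ch_a, cv_a, cd_a), (cb, ch_b, cv_b, cd_b)):
        # Focus measure: magnitude of the Laplacian-of-Gaussian response.
        fm_a = np.abs(gaussian_laplace(band_a, sigma=1.0))
        fm_b = np.abs(gaussian_laplace(band_b, sigma=1.0))
        # Keep the sharper (better-focused) coefficient, then unsharp-mask it.
        fused.append(np.where(fm_a >= fm_b, unsharp(band_a), unsharp(band_b)))
    return pywt.idwt2((fused[0], (fused[1], fused[2], fused[3])), wavelet)
```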

Research on the Multi-Focus Image Fusion Method Based on the Lifting Stationary Wavelet Transform

  • Hu, Kaiqun;Feng, Xin
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1293-1300
    • /
    • 2018
  • To address the disadvantages of multi-scale geometric analysis methods in image fusion, such as loss of definition and the complicated selection of fusion rules, an improved multi-focus image fusion method is proposed. First, an initial fused image is quickly obtained with the lifting stationary wavelet transform, and a simple normalized cut is applied to the initial fused image to obtain the segmented regions. Then, the original images are decomposed with the NSCT, and the absolute value of the high-frequency coefficients is computed in each segmented region. Finally, the region with the largest absolute value is selected as the post-fusion region, and the fused multi-focus image is obtained by traversing every segmented region. Numerical experiments show that the proposed algorithm not only simplifies the selection of fusion rules but also avoids loss of definition, demonstrating its effectiveness.

A Study on the Improvement of Image Fusion Accuracy Using Smoothing Filter-based Replacement Method (SFR기법을 이용한 영상 융합의 정확도 향상에 관한 연구)

  • Yun Kong-Hyun
    • Spatial Information Research
    • /
    • v.14 no.1 s.36
    • /
    • pp.85-94
    • /
    • 2006
  • Image fusion techniques are widely used to integrate a lower spatial resolution multispectral image with a higher spatial resolution panchromatic image. However, the existing techniques either cannot avoid distorting the spectral properties of the image or, in the case of wavelet transform-based fusion, involve complicated and time-consuming decomposition and reconstruction processing. In this study, a simple spectrum-preserving fusion technique, Smoothing Filter-based Replacement (SFR), is proposed based on a simplified solar radiation and land surface reflection model. By using the ratio between a higher resolution image and its low-pass filtered (smoothed) version, spatial detail can be injected into a co-registered lower resolution multispectral image with minimal distortion of its spectral properties and contrast. The technique can be applied to improve spatial resolution for either colour composites or individual bands. The spectral fidelity and spatial quality of SFR are convincingly demonstrated by an image fusion experiment using IKONOS panchromatic and multispectral images. Visual evaluation and statistical analysis against other image fusion techniques confirm that SFR better preserves spectral information. A minimal sketch of the ratio-based detail injection appears below.
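
A minimal sketch of the smoothing-filter ratio idea described above: spatial detail from the panchromatic band is injected as the ratio of the pan image to its low-pass (smoothed) version. The mean-filter size, the bilinear upsampling, and the function names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sfr_fuse(ms_band, pan, kernel=7, eps=1e-6):
    """Fuse one low-resolution MS band with a co-registered high-resolution pan band."""
    pan = pan.astype(float)
    # Upsample the MS band to the pan grid (co-registration is assumed).
    scale = (pan.shape[0] / ms_band.shape[0], pan.shape[1] / ms_band.shape[1])
    ms_up = zoom(ms_band.astype(float), scale, order=1)
    # Low-pass the pan image with a smoothing (mean) filter.
    pan_smooth = uniform_filter(pan, size=kernel)
    # Detail injection: fused = MS * pan / low-pass(pan).
    return ms_up * pan / (pan_smooth + eps)
```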

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen;Wang, Minjuan;Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4395-4412
    • /
    • 2020
  • Multisource image fusion has become an active topic in recent years owing to its higher segmentation rate. To enhance the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot produce fused images with high contrast and abundant detail. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, named NSST-GF-IPCNN, was presented. Firstly, the multisource images were decomposed into a range of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) were used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients were reconstructed into a fusion image using the inverse NSST. Finally, the shape feature was extracted using an automatic threshold algorithm and refined with morphological operations, and the highest pig-body temperature was obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieved a 2.102-4.066% higher average accuracy rate than the traditional algorithms while also improving efficiency. A simplified PCNN-based coefficient-selection sketch is given below.
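
The abstract's high-frequency rule relies on a pulse coupled neural network. The sketch below shows only that piece in a basic form: a standard (not "improved") PCNN driven by coefficient magnitude, with the coefficient whose neuron fires more often being selected per pixel. The linking kernel and time constants are common illustrative defaults, not the paper's settings, and the NSST decomposition itself is assumed to be supplied by other means.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(stimulus, iters=40, alpha_l=1.0, alpha_t=0.2,
                     beta=0.2, v_l=1.0, v_t=20.0):
    """Run a basic PCNN on a normalized stimulus map and return per-pixel firing counts."""
    s = stimulus / (stimulus.max() + 1e-12)
    link = np.zeros_like(s)
    theta = np.ones_like(s)          # dynamic threshold
    y = np.zeros_like(s)             # neuron output
    counts = np.zeros_like(s)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    for _ in range(iters):
        link = np.exp(-alpha_l) * link + v_l * convolve(y, kernel, mode="constant")
        u = s * (1.0 + beta * link)  # internal activity
        y = (u > theta).astype(float)
        theta = np.exp(-alpha_t) * theta + v_t * y
        counts += y
    return counts

def fuse_highfreq(coef_a, coef_b):
    """Pick, per pixel, the high-frequency coefficient whose PCNN fires more often."""
    fire_a = pcnn_fire_counts(np.abs(coef_a))
    fire_b = pcnn_fire_counts(np.abs(coef_b))
    return np.where(fire_a >= fire_b, coef_a, coef_b)
```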

Infrared and Visible Image Fusion Based on NSCT and Deep Learning

  • Feng, Xin
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1405-1419
    • /
    • 2018
  • An image fusion method based on depth-model segmentation is proposed to overcome the noise interference and artifacts caused by infrared and visible image fusion. Firstly, a deep Boltzmann machine is used to learn priors for the infrared and visible target and background contours, and a depth segmentation model of the contours is constructed. The Split Bregman iterative algorithm is employed to obtain the optimal energy segmentation of the infrared and visible image contours. Then, the nonsubsampled contourlet transform (NSCT) is used to decompose the source images, and the corresponding rules are used to integrate the coefficients in light of the segmented background contour. Finally, the inverse NSCT is used to reconstruct the fused image. MATLAB simulation results indicate that the proposed algorithm can effectively fuse both target and background contours, with high contrast and good noise suppression in the subjective evaluation as well as clear advantages in the objective quantitative indicators.

Image Fusion of High Resolution SAR and Optical Image Using High Frequency Information (고해상도 SAR와 광학영상의 고주파 정보를 이용한 다중센서 융합)

  • Byun, Young-Gi;Chae, Tae-Byeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.1
    • /
    • pp.75-86
    • /
    • 2012
  • The Synthetic Aperture Radar (SAR) imaging system is independent of solar illumination and weather conditions; however, SAR images are difficult to interpret compared with optical images. Interest has therefore been growing in multi-sensor fusion techniques that can improve the interpretability of SAR images by fusing in the spectral information of a multispectral (MS) image. In this paper, a multi-sensor fusion method based on a high-frequency extraction process using the Fast Fourier Transform (FFT) and an outlier elimination process is proposed, which maintains the spectral content of the original MS image while retaining the spatial detail of the high-resolution SAR image. We used a TerraSAR-X image, which uses the same X-band SAR system as KOMPSAT-5, together with a KOMPSAT-2 MS image as the test data set to evaluate the proposed method. To assess its efficiency, the fusion result was compared visually and quantitatively with the results obtained using existing fusion algorithms. The evaluation showed that the proposed method fuses SAR and MS images more successfully than the existing algorithms. A minimal sketch of the high-frequency injection step appears below.
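
A minimal sketch of FFT-based high-frequency injection in the spirit of this abstract: high-frequency content is extracted from the SAR image with a frequency-domain high-pass filter and added to each co-registered, resampled MS band. The cutoff fraction, gain, and sigma-clipping threshold for outlier suppression are illustrative assumptions, not the paper's values.

```python
import numpy as np

def fft_highpass(img, cutoff_frac=0.05):
    """Zero out low frequencies inside a centered square of the shifted spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    rows, cols = img.shape
    cr, cc = rows // 2, cols // 2
    r = int(cutoff_frac * min(rows, cols))
    F[cr - r:cr + r + 1, cc - r:cc + r + 1] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

def fuse_sar_ms(sar, ms_bands, gain=1.0, clip_sigma=3.0):
    detail = fft_highpass(sar)
    # Crude outlier suppression: clip extreme high-frequency values (e.g. speckle).
    s = detail.std()
    detail = np.clip(detail, -clip_sigma * s, clip_sigma * s)
    return [band.astype(float) + gain * detail for band in ms_bands]
```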

A Study of Fusion Image System and Simulation based on Mutual Information (상호정보량에 의한 이미지 융합시스템 및 시뮬레이션에 관한 연구)

  • Kim, Yonggil;Kim, Chul;Moon, Kyungil
    • Journal of The Korean Association of Information Education
    • /
    • v.19 no.1
    • /
    • pp.139-148
    • /
    • 2015
  • The purpose of image fusion is to combine the relevant information from a set of images into a single image, such that the resulting fused image is more informative and complete than any of the input images. Image fusion techniques can improve the quality and broaden the applicability of such data; important applications of image fusion include medical imaging, remote sensing, and robotics. In this paper, we suggest a new method of generating a fusion image using the close relationship between image features obtained through maximum entropy thresholding and mutual information. The method achieves better image registration than other image fusion methods when a blurred input image is used. A sketch of the mutual-information measure is given below.
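
A minimal sketch of the mutual-information measure this abstract relies on: MI between two images computed from their joint gray-level histogram. The bin count is an illustrative assumption, and the paper's maximum-entropy thresholding and fusion steps are not reproduced here.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two images from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image B
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```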

Image Fusion Based on Statistical Hypothesis Test Using Wavelet Transform (웨이블렛 변환을 이용한 통계적 가설검정에 의한 영상융합)

  • Park, Min-Joon;Kwon, Min-Jun;Kim, Gi-Hun;Shim, Han-Seul;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.4
    • /
    • pp.695-708
    • /
    • 2011
  • Image fusion is the process of combining multiple images of the same scene into a single fused image, with applications in many fields such as remote sensing, computer vision, robotics, medical imaging, and military affairs. The widely used wavelet-transform fusion rules are based on a simple comparison of local window activity measures such as the mean and standard deviation. In this case, information features from the original images are excluded from the fused image, and distorted fusion results are obtained for noisy images. In this paper, we propose using a nonparametric squared-ranks test for the equality of variances of two samples in order to overcome the influence of noise and guarantee the homogeneity of the fused image. We evaluate the method both quantitatively and qualitatively for image fusion and compare it with some existing fusion methods. Experimental results indicate that the proposed method is effective and provides satisfactory fusion results. The test statistic is sketched below.
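
A minimal sketch of the squared-ranks test for equal variances (in the Conover form) that the abstract applies to local windows of wavelet coefficients. Only the test itself is shown; the window-based fusion rule is not reproduced, and the large-sample normal approximation is assumed.

```python
import numpy as np
from scipy import stats

def squared_ranks_test(x, y):
    """Two-sided squared-ranks test for equality of variances (normal approximation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    u = np.abs(x - x.mean())                       # absolute deviations, sample 1
    v = np.abs(y - y.mean())                       # absolute deviations, sample 2
    sq_ranks = stats.rankdata(np.concatenate([u, v])) ** 2
    n, m = len(x), len(y)
    t = sq_ranks[:n].sum()                         # sum of squared ranks, sample 1
    mean_t = n * sq_ranks.mean()
    var_t = n * m * np.var(sq_ranks, ddof=1) / (n + m)
    z = (t - mean_t) / np.sqrt(var_t)
    return z, 2.0 * stats.norm.sf(abs(z))          # statistic and two-sided p-value
```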

Potential for Image Fusion Quality Improvement through Shadow Effects Correction (그림자효과 보정을 통한 영상융합 품질 향상 가능성)

  • 손홍규;윤공현
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2003.10a
    • /
    • pp.397-402
    • /
    • 2003
  • This study aims to improve the quality of image fusion results through shadow effects correction. To this end, a shadow effects correction algorithm is proposed, and visual comparisons have been made to assess the quality of the image fusion results. The following four steps were performed to improve the image fusion quality. First, the shadow regions of the satellite image are precisely located. Subsequently, segmentation of the context regions is carried out manually for accurate correction. Next, to calculate the correction factor, each shadowed context region is compared with the corresponding non-shadow context region. Finally, image fusion is implemented using the corrected images. The result presented here helps to accurately extract and interpret geo-spatial information from satellite imagery. A minimal sketch of the correction-factor step is given below.
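
A minimal sketch of the correction-factor idea in this abstract: pixels in a shadow region are rescaled by the ratio of the mean brightness in a comparable non-shadow context region to the mean brightness in the shadowed context region. The region masks are assumed to be given (the paper locates and segments them first), and the function name is illustrative.

```python
import numpy as np

def correct_shadow(img, shadow_mask, shadow_ctx_mask, sunlit_ctx_mask):
    """Rescale shadowed pixels using a brightness ratio between context regions."""
    img = img.astype(float)
    factor = img[sunlit_ctx_mask].mean() / (img[shadow_ctx_mask].mean() + 1e-6)
    out = img.copy()
    out[shadow_mask] *= factor        # brighten shadowed pixels toward sunlit levels
    return out
```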
