http://dx.doi.org/10.3745/JIPS.04.0144

Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering  

Feng, Xin (College of Mechanical Engineering, Chongqing Technology and Business University)
Hu, Kaiqun (College of Mechanical Engineering, Chongqing Technology and Business University)
Publication Information
Journal of Information Processing Systems, vol. 15, no. 6, 2019, pp. 1296-1305
Abstract
To address the poor noise suppression and the frequent loss of edge contours and detail in current fusion methods, an infrared and visible image fusion method based on variational multiscale decomposition is proposed. First, each source image is processed by variational multiscale decomposition to obtain a texture component and a structure component. Guided filtering is then used to fuse the texture components of the two images. For the structure components, a fusion rule is proposed that measures the fusion weights from combined phase congruency, sharpness, and brightness information. Finally, the fused texture and structure components are added to obtain the final fused image. Experimental results show that the proposed method exhibits strong noise robustness and achieves better fusion quality.
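The abstract names guided filtering (He, Sun, and Tang) as the rule for fusing texture components. A minimal sketch of the guided image filter itself is shown below; the box-mean formulation, window radius, and regularization value are illustrative choices, not parameters taken from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving guided filter (He, Sun, and Tang).

    guide, src: 2-D float arrays (e.g., intensities in [0, 1]).
    radius: half-width of the averaging window.
    eps: regularization; larger values give stronger smoothing.
    """
    size = 2 * radius + 1
    # Local means of guide, source, and their products over the window.
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    # Per-window linear model: q = a * I + b within each window.
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    # Average the coefficients over all windows covering each pixel.
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b
```

With a very small `eps` and the image as its own guide, the filter approaches the identity; increasing `eps` smooths flat regions while the guide's strong edges keep the coefficients `a` near 1 there, which is what makes the filter useful for transferring edge structure during fusion.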
Keywords
Image Fusion; Guided Filter; Phase Consistency; Variational Multiscale Decomposition