http://dx.doi.org/10.3745/JIPS.04.0096

Infrared and Visible Image Fusion Based on NSCT and Deep Learning  

Feng, Xin (College of Mechanical Engineering and Key Laboratory of Manufacturing Equipment Mechanism Design and Control, Chongqing Technology and Business University)
Publication Information
Journal of Information Processing Systems, vol. 14, no. 6, pp. 1405-1419, 2018
Abstract
An image fusion method based on depth-model segmentation is proposed to overcome the noise interference and artifacts that arise in infrared and visible image fusion. First, a deep Boltzmann machine performs prior learning of the target and background contours of the infrared and visible images, and a depth segmentation model of the contours is constructed. The Split Bregman iterative algorithm is then employed to obtain the optimal energy segmentation of the infrared and visible image contours. Next, the nonsubsampled contourlet transform (NSCT) decomposes the source images, and the corresponding rules fuse the coefficients according to the segmented background contour. Finally, the inverse NSCT reconstructs the fused image. MATLAB simulation results indicate that the proposed algorithm effectively fuses both target and background contours, achieving high contrast and noise suppression in subjective evaluation as well as clear advantages in objective quantitative indicators.
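The pipeline in the abstract can be sketched in simplified form. The sketch below is an assumption-laden stand-in, not the paper's implementation: a Gaussian band-pass pyramid replaces the NSCT decomposition, a precomputed binary `mask` replaces the target/background segmentation that the paper obtains from a deep Boltzmann machine prior and Split Bregman iteration, and the fusion rules (region-based for the low-pass band, maximum absolute value for detail bands) are generic simplifications of the paper's "corresponding rules".

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, levels=3, sigma=2.0):
    """Split an image into one low-pass base plus band-pass detail
    layers (a simplified stand-in for the NSCT decomposition)."""
    details = []
    base = img.astype(float)
    for _ in range(levels):
        smooth = gaussian_filter(base, sigma)
        details.append(base - smooth)  # band-pass detail layer
        base = smooth
    return base, details

def fuse(ir, vis, mask, levels=3):
    """Fuse infrared and visible images.

    mask: binary target-region map (1 = target), standing in for the
    contour segmentation the paper derives via a deep Boltzmann
    machine prior and Split Bregman iteration."""
    base_ir, det_ir = decompose(ir, levels)
    base_vis, det_vis = decompose(vis, levels)
    # Low-frequency rule: infrared inside the target region,
    # visible elsewhere (simplified region-based rule).
    base = mask * base_ir + (1 - mask) * base_vis
    # High-frequency rule: keep the coefficient of larger magnitude.
    details = [np.where(np.abs(di) >= np.abs(dv), di, dv)
               for di, dv in zip(det_ir, det_vis)]
    # "Inverse transform": the pyramid is reconstructed by summation.
    return base + sum(details)
```

Because the pyramid telescopes (base plus all detail layers reproduces the input exactly), fusing an image with itself returns that image, which gives a quick sanity check on the reconstruction step.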
Keywords
Boltzmann Machine; Depth Model; Image Fusion; Split Bregman Iterative Algorithm
References
1 Z. Wang, Y. Ma, and J. Gu, "Multi-focus image fusion using PCNN," Pattern Recognition, vol. 43, no. 6, pp. 2003-2016, 2010.
2 D. P. Bavirisetti and R. Dhuli, "Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform," IEEE Sensors Journal, vol. 16, no. 1, pp. 203-209, 2016.
3 V. Bhateja, H. Patel, A. Krishn, A. Sahu, and A. Lay-Ekuakille, "Multimodal medical image sensor fusion framework using cascade of wavelet and contourlet transform domains," IEEE Sensors Journal, vol. 15, no. 12, pp. 6783-6790, 2015.
4 Y. Yang, S. Tong, S. Huang, and P. Lin, "Multifocus image fusion based on NSCT and focused area detection," IEEE Sensors Journal, vol. 15, no. 5, pp. 2824-2838, 2015.
5 H. Zheng, C. Zheng, X. Yan, and H. Chen, "Visible and infrared image fusion algorithm based on shearlet transform," Chinese Journal of Scientific Instrument, vol. 33, no. 7, pp. 1613-1619, 2012.
6 J. Wang, J. Peng, G. He, X. Feng, and K. Yan, "Fusion method for visible and infrared images based on non-subsampled contourlet transform and sparse representation," Acta Armamentarii, vol. 34, no. 7, pp. 815-820, 2013.
7 T. Gan, S. T. Feng, S. P. Nie, and Z. Q. Zhu, "Image fusion algorithm based on block-DCT in wavelet domain," Acta Physica Sinica, vol. 60, no. 11, article no. 114205, 2011.
8 W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103-112, 2014.
9 Y. Shen, X. Feng, and Y. Hou, "Infrared and visible images fusion based on Tetrolet transform," Spectroscopy and Spectral Analysis, vol. 33, no. 6, pp. 1506-1511, 2013.
10 G. Dahl, A. R. Mohamed, and G. E. Hinton, "Phone recognition with the mean-covariance restricted Boltzmann machine," Advances in Neural Information Processing Systems, vol. 23, pp. 469-477, 2010.
11 R. Salakhutdinov and H. Larochelle, "An efficient learning procedure for deep Boltzmann machines," Neural Computation, vol. 24, pp. 1967-2006, 2012.
12 A. L. Da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089-3101, 2006.
13 S. M. Ali Eslami, N. Heess, and J. Winn, "The shape Boltzmann machine: a strong model of object shape," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, 2012, pp. 406-413.
14 L. J. Latecki, R. Lakamper, and T. Eckhardt, "Shape descriptors for non-rigid shapes with a single closed contour," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, 2000, pp. 424-429.
15 W. Li, Q. Li, W. Gong, and S. Tang, "Total variation blind deconvolution employing split Bregman iteration," Journal of Visual Communication and Image Representation, vol. 23, no. 3, pp. 409-417, 2012.
16 D. Brunet, E. R. Vrscay, and Z. Wang, "On the mathematical properties of the structural similarity index," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1488-1499, 2012.
17 Q. Zhang and B. L. Guo, "Fusion of infrared and visible light images based on nonsubsampled contourlet transform," Journal of Infrared and Millimeter Waves (Chinese Edition), vol. 26, no. 6, pp. 476-480, 2007.
18 TNO UN Camp image [Online]. Available: http://www.imagefusion.org/.
19 X. Li and S. Y. Qin, "Efficient fusion for infrared and visible images based on compressive sensing principle," IET Image Processing, vol. 5, no. 2, pp. 141-147, 2011.