• Title/Abstract/Keyword: Visible-infrared image fusion


A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu;Xiang, Keyun;Chen, Xiaopeng;Liu, Cheng
    • Journal of Information Processing Systems / Vol. 17, No. 5 / pp. 1004-1019 / 2021
  • To solve the problems of low image contrast, fuzzy edge details, and missing edge details in noisy image fusion, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency component and a high-frequency component. Because high-frequency noise and edge information are mainly concentrated in the high-frequency component, the improved bilateral filtering method is applied to the high-frequency components of the two images, filtering noise while computing the detail of the infrared image's high-frequency component. By superimposing the high-frequency components of the infrared and visible images, the method extracts as much edge detail from both images as possible; at the same time, edge information is enhanced and the visual effect becomes clearer. For the low-frequency coefficients, the local area standard deviation coefficient method is adopted as the fusion rule. Finally, the fused high- and low-frequency coefficients are reconstructed with the inverse NSCT to obtain the fused image. The fusion results show that edges, contours, textures, and other details are maintained and enhanced while noise is filtered out, yielding a fused image with clear edges. The algorithm filters noise well and produces clear fused images in noisy infrared and visible light image fusion.
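
A minimal sketch of the two-band fusion idea described in this abstract. It approximates the NSCT low/high split with a plain Gaussian low-pass (NSCT has no standard Python implementation), uses OpenCV's bilateral filter in place of the paper's improved bilateral filter, and fuses the low-frequency bands by local standard deviation. Function names and kernel sizes are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def local_std(img, ksize=9):
    """Local standard deviation computed with box filters."""
    img = img.astype(np.float32)
    mean = cv2.boxFilter(img, -1, (ksize, ksize))
    sq_mean = cv2.boxFilter(img * img, -1, (ksize, ksize))
    return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

def fuse_noisy_ir_vis(ir, vis):
    """ir, vis: single-channel uint8 images of the same size."""
    ir = ir.astype(np.float32)
    vis = vis.astype(np.float32)

    # Approximate low/high decomposition (stand-in for NSCT).
    low_ir = cv2.GaussianBlur(ir, (15, 15), 0)
    low_vis = cv2.GaussianBlur(vis, (15, 15), 0)
    high_ir, high_vis = ir - low_ir, vis - low_vis

    # Denoise the high-frequency bands with a bilateral filter.
    high_ir = cv2.bilateralFilter(high_ir, 9, 25, 25)
    high_vis = cv2.bilateralFilter(high_vis, 9, 25, 25)

    # Low-frequency fusion weighted by local standard deviation.
    s_ir, s_vis = local_std(low_ir), local_std(low_vis)
    w = s_ir / (s_ir + s_vis + 1e-6)
    low = w * low_ir + (1.0 - w) * low_vis

    # Superimpose the high-frequency detail from both images.
    return np.clip(low + high_ir + high_vis, 0, 255).astype(np.uint8)
```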

Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin;Han, Bok Gyu;Yang, Hyeon Seok;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 6 / pp. 3092-3107 / 2019
  • Visible-infrared image fusion is the process of synthesizing an infrared image and a visible image into a single fused image, combining the complementary advantages of both. An infrared image can capture a target object in dark or foggy environments, but its utility is hindered by the blurry appearance of objects. A visible image, on the other hand, shows an object clearly under normal lighting conditions but is not ideal in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method based on it. The multi-guided filter is a modification of the guided filter that accepts multiple guidance images. The resulting fusion method is much faster than conventional image fusion methods. In experiments, we compare the proposed method with conventional methods quantitatively and qualitatively, as well as in terms of fusion speed and flickering artifacts. The proposed method synthesizes 57.93 frames per second at an image size of 320×270. Based on our experiments, we confirmed that the proposed method is capable of real-time processing and that it synthesizes flicker-free video.
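
A minimal sketch of guided filtering as a fusion building block, since the abstract centers on a guided-filter variant. The single-guide filter below follows the classic He et al. box-filter formulation; the paper's multi-guided extension is not specified in the abstract, so `multi_guided_filter` simply averages the results obtained with each guidance image and is only an illustrative assumption, not the authors' method.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Single-channel guided filter (He et al.), box-filter implementation."""
    guide = guide.astype(np.float32) / 255.0
    src = src.astype(np.float32) / 255.0
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_g = cv2.boxFilter(guide, -1, ksize)
    mean_s = cv2.boxFilter(src, -1, ksize)
    var_g = cv2.boxFilter(guide * guide, -1, ksize) - mean_g * mean_g
    cov_gs = cv2.boxFilter(guide * src, -1, ksize) - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # per-pixel linear coefficients
    b = mean_s - a * mean_g
    mean_a = cv2.boxFilter(a, -1, ksize)
    mean_b = cv2.boxFilter(b, -1, ksize)
    return mean_a * guide + mean_b

def multi_guided_filter(guides, src, radius=8, eps=1e-3):
    """Illustrative stand-in: average the outputs over several guidance images."""
    outs = [guided_filter(g, src, radius, eps) for g in guides]
    return np.mean(outs, axis=0)
```

In guided-filter-based fusion, a filter like this is typically used to smooth per-pixel weight maps with the source images as guidance, so that fusion weights follow the image edges.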

Infrared and Visible Image Fusion Based on NSCT and Deep Learning

  • Feng, Xin
    • Journal of Information Processing Systems / Vol. 14, No. 6 / pp. 1405-1419 / 2018
  • An image fusion method based on depth-model segmentation is proposed to overcome the noise interference and artifacts caused by infrared and visible image fusion. Firstly, a deep Boltzmann machine is used to perform prior learning of the infrared and visible target and background contours, and a depth segmentation model of the contours is constructed. The Split Bregman iterative algorithm is employed to obtain the optimal energy segmentation of the infrared and visible image contours. Then, the nonsubsampled contourlet transform (NSCT) is applied to decompose the source images, and the corresponding rules are used to fuse the coefficients according to the segmented background contour. Finally, the inverse NSCT is used to reconstruct the fused image. MATLAB simulation results indicate that the proposed algorithm effectively fuses both target and background contours, with high contrast and good noise suppression in subjective evaluation as well as strong objective quantitative indicators.
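
A minimal sketch of region-guided coefficient fusion in the spirit of this abstract. The deep Boltzmann machine prior and Split Bregman segmentation are far beyond a short example, so a simple intensity threshold on the IR image stands in for the target/background segmentation, and a Gaussian low/high split stands in for the NSCT. All names and thresholds are illustrative.

```python
import cv2
import numpy as np

def segmentation_guided_fusion(ir, vis, thresh=160):
    """ir, vis: single-channel uint8 images of the same size."""
    ir = ir.astype(np.float32)
    vis = vis.astype(np.float32)

    # Stand-in segmentation: bright IR pixels are treated as the "target" region.
    target = cv2.GaussianBlur((ir > thresh).astype(np.float32), (21, 21), 0)

    # Stand-in multiscale decomposition.
    low_ir = cv2.GaussianBlur(ir, (15, 15), 0)
    low_vis = cv2.GaussianBlur(vis, (15, 15), 0)
    high_ir, high_vis = ir - low_ir, vis - low_vis

    # Low frequency: favor IR in the target region, visible in the background.
    low = target * low_ir + (1.0 - target) * low_vis
    # High frequency: pick the coefficient with the larger magnitude.
    high = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
    return np.clip(low + high, 0, 255).astype(np.uint8)
```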

A Novel Image Dehazing Algorithm Based on Dual-tree Complex Wavelet Transform

  • Huang, Changxin;Li, Wei;Han, Songchen;Liang, Binbin;Cheng, Peng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 10 / pp. 5039-5055 / 2018
  • The quality of natural outdoor images captured by visible camera sensors is usually degraded by haze in the atmosphere. In this paper, a fast image dehazing method based on visible and near-infrared image fusion is proposed. A visible image and a near-infrared (NIR) image of the same scene are fused using the dual-tree complex wavelet transform (DT-CWT) to generate a dehazed color image, and the color of the fused image is adjusted according to the haze concentration estimated by the dark channel prior (DCP). The experimental results demonstrate that the proposed method outperforms conventional dehazing methods and effectively resolves the color distortion problem in the dehazing process.
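
A minimal sketch of the dark channel prior (DCP) haze estimate mentioned in this abstract, which the paper uses to regulate the color of the VIS/NIR fusion result. The DT-CWT fusion stage itself is omitted (it needs a dedicated wavelet library), and the patch size and the blending rule at the end are illustrative assumptions.

```python
import cv2
import numpy as np

def dark_channel(bgr, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    min_rgb = bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def haze_concentration(bgr, patch=15):
    """Rough haze density in [0, 1]: large dark-channel values mean more haze."""
    return dark_channel(bgr.astype(np.float32) / 255.0, patch)

# Hypothetical usage: blend a (separately computed) VIS/NIR fusion result toward
# the original visible colors where the scene is estimated to be haze-free.
# t = 1.0 - haze_concentration(visible_bgr)
# output = t[..., None] * visible_bgr + (1.0 - t[..., None]) * fused_bgr
```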

Two Scale Fusion Method of Infrared and Visible Images Using Saliency and Variance (현저성과 분산을 이용한 적외선과 가시영상의 2단계 스케일 융합방법)

  • 김영춘;안상호
    • 한국멀티미디어학회논문지 (Journal of Korea Multimedia Society) / Vol. 19, No. 12 / pp. 1951-1959 / 2016
  • In this paper, we propose a two-scale fusion method for infrared and visible images using saliency and variance. Each image is decomposed into two scales: a base layer containing the low-frequency components and a detail layer containing the high-frequency components. These layers are then combined using weights, with the saliency and the variance of the images serving as the fusion weights for the two scales. The proposed method is tested on several image pairs, and its performance is evaluated quantitatively using objective fusion metrics.
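
A minimal sketch of the two-scale weighting idea in this abstract. The abstract does not specify the exact saliency measure, so a smoothed Laplacian magnitude is used here as a stand-in, while local variance weights the base layers; kernel sizes are illustrative.

```python
import cv2
import numpy as np

def local_variance(img, ksize=11):
    img = img.astype(np.float32)
    mean = cv2.boxFilter(img, -1, (ksize, ksize))
    return cv2.boxFilter(img * img, -1, (ksize, ksize)) - mean * mean

def two_scale_fuse(ir, vis):
    """ir, vis: single-channel uint8 images of the same size."""
    ir, vis = ir.astype(np.float32), vis.astype(np.float32)

    # Two-scale decomposition: base (low frequency) + detail (high frequency).
    base_ir, base_vis = cv2.blur(ir, (31, 31)), cv2.blur(vis, (31, 31))
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # Base layers weighted by local variance.
    v_ir, v_vis = local_variance(ir), local_variance(vis)
    wb = v_ir / (v_ir + v_vis + 1e-6)

    # Detail layers weighted by a saliency stand-in (smoothed Laplacian magnitude).
    s_ir = cv2.GaussianBlur(np.abs(cv2.Laplacian(ir, cv2.CV_32F)), (11, 11), 0)
    s_vis = cv2.GaussianBlur(np.abs(cv2.Laplacian(vis, cv2.CV_32F)), (11, 11), 0)
    wd = s_ir / (s_ir + s_vis + 1e-6)

    fused = wb * base_ir + (1 - wb) * base_vis + wd * det_ir + (1 - wd) * det_vis
    return np.clip(fused, 0, 255).astype(np.uint8)
```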

Visible Image Enhancement Method Considering Thermal Information from Infrared Image (원적외선 영상의 열 정보를 고려한 가시광 영상 개선 방법)

  • 김선걸;강행봉
    • 방송공학회논문지 (Journal of Broadcast Engineering) / Vol. 18, No. 4 / pp. 550-558 / 2013
  • A visible image and a far-infrared image carry different information, namely texture information and thermal information respectively. Therefore, for visible image enhancement, using the thermal information of the far-infrared image, which is absent from the visible image, yields better results than using the visible image alone. In this paper, to enhance a visible image effectively with a far-infrared image, a weight map is constructed according to how much enhancement each part of the visible image requires. The weight map is computed from saturation and brightness, and its values are adjusted by taking the thermal information of the far-infrared image into account. Finally, the adjusted weight map is used to fuse the information of the far-infrared image and the visible image, producing a result image that effectively contains the information of both. Experimental results show that regions of the visible image requiring enhancement are improved over the original visible image through fusion with the far-infrared image information.
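
A minimal sketch of the weight-map idea described in this abstract: regions of the visible image that are dark and desaturated receive more weight for the far-infrared information, and the weight is further adjusted by the IR intensity. The exact formulas are not given in the abstract, so the combination below is only an illustrative assumption.

```python
import cv2
import numpy as np

def enhance_visible_with_ir(vis_bgr, ir_gray):
    """vis_bgr: uint8 BGR visible image; ir_gray: uint8 far-infrared image."""
    hsv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    sat, val = hsv[..., 1], hsv[..., 2]
    ir = ir_gray.astype(np.float32) / 255.0

    # Need for enhancement: dark and desaturated regions score high.
    need = (1.0 - val) * (1.0 - sat)
    # Adjust by thermal information: emphasize regions that are warm in IR.
    weight = cv2.GaussianBlur(need * ir, (31, 31), 0)
    weight = np.clip(weight, 0.0, 1.0)[..., None]

    # Blend IR information into the visible image according to the weight map.
    ir_bgr = cv2.cvtColor(ir_gray, cv2.COLOR_GRAY2BGR).astype(np.float32) / 255.0
    out = (1.0 - weight) * (vis_bgr.astype(np.float32) / 255.0) + weight * ir_bgr
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)
```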

Reflectance estimation for infrared and visible image fusion

  • Gu, Yan;Yang, Feng;Zhao, Weijun;Guo, Yiliang;Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 8 / pp. 2749-2763 / 2021
  • A desirable result of infrared (IR) and visible (VIS) image fusion should have textural details from the VIS image and salient targets from the IR image. However, detail information in dark regions of the VIS image has low contrast and blurry edges, which degrades fusion performance. To address the problem of fuzzy details in the dark regions of VIS images, we propose a reflectance-estimation method for IR and VIS image fusion. To maintain and enhance details in these dark regions, a dark region approximation (DRA) is proposed to optimize the Retinex model. With the DRA-based Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image. The final fusion result is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details of the VIS image and the high-contrast targets of the IR image. Experimental statistics show that, compared with several advanced approaches, the proposed method is superior in detail preservation and visual quality.
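
A minimal sketch of Retinex-style reflectance estimation as a rough analogue of the pipeline above. The paper's dark region approximation (DRA) and quasi-Newton optimization cannot be reproduced from the abstract, so the illumination is simply estimated with a large Gaussian blur (single-scale Retinex), and the final max-based fusion is only an illustrative assumption, not the authors' estimator.

```python
import cv2
import numpy as np

def retinex_reflectance(vis_gray, sigma=40.0):
    """Reflectance ~ log(image) - log(illumination), rescaled to [0, 1]."""
    img = vis_gray.astype(np.float32) / 255.0 + 1e-3
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    refl = np.log(img) - np.log(illumination)
    refl -= refl.min()
    return refl / (refl.max() + 1e-6)

def fuse_reflectance_with_ir(vis_gray, ir_gray):
    """Illustrative fusion: pixel-wise maximum of VIS reflectance and IR intensity."""
    refl = retinex_reflectance(vis_gray)
    ir = ir_gray.astype(np.float32) / 255.0
    fused = np.maximum(refl, ir)
    return (fused * 255).astype(np.uint8)
```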

Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering

  • Feng, Xin;Hu, Kaiqun
    • Journal of Information Processing Systems / Vol. 15, No. 6 / pp. 1296-1305 / 2019
  • To address the poor noise suppression and the frequent loss of edge contours and detailed information in current fusion methods, an infrared and visible light image fusion method based on variational multiscale decomposition is proposed. Firstly, the images to be fused are separately decomposed by variational multiscale decomposition into texture components and structural components. A guided filter is used to fuse the texture components, and for the structural components a fusion weight is proposed that combines phase consistency, sharpness, and brightness information. Finally, the fused texture and structural components are added to obtain the final fused image. The experimental results show that the proposed method is highly robust to noise and achieves better fusion quality.
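
A minimal sketch of the structure/texture split used in this abstract. An edge-preserving bilateral filter stands in for the variational multiscale decomposition, the texture residuals are fused by choosing the larger magnitude (in place of the guided-filter rule), and the structure weights combine only sharpness and brightness (phase consistency is omitted). Everything here is illustrative.

```python
import cv2
import numpy as np

def structure_texture_fuse(ir, vis):
    """ir, vis: single-channel uint8 images of the same size."""
    ir, vis = ir.astype(np.float32), vis.astype(np.float32)

    # Structure = edge-preserving smoothing; texture = residual.
    struct_ir = cv2.bilateralFilter(ir, 9, 50, 50)
    struct_vis = cv2.bilateralFilter(vis, 9, 50, 50)
    tex_ir, tex_vis = ir - struct_ir, vis - struct_vis

    # Texture fusion: keep the stronger response at each pixel.
    tex = np.where(np.abs(tex_ir) >= np.abs(tex_vis), tex_ir, tex_vis)

    # Structure fusion: weights from sharpness (Laplacian energy) and brightness.
    def score(img):
        sharp = cv2.GaussianBlur(np.abs(cv2.Laplacian(img, cv2.CV_32F)), (11, 11), 0)
        return sharp + 0.5 * img / 255.0

    s_ir, s_vis = score(struct_ir), score(struct_vis)
    w = s_ir / (s_ir + s_vis + 1e-6)
    struct = w * struct_ir + (1.0 - w) * struct_vis

    return np.clip(struct + tex, 0, 255).astype(np.uint8)
```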

MosaicFusion: Merging Modalities with Partial Differential Equation and Discrete Cosine Transformation

  • Trivedi, Gargi;Sanghavi, Rajesh
    • Journal of Applied and Pure Mathematics / Vol. 5, No. 5-6 / pp. 389-406 / 2023
  • In the pursuit of enhancing image fusion techniques, this research presents a novel approach for fusing multimodal images, specifically infrared (IR) and visible (VIS) images, using a combination of partial differential equations (PDE) and the discrete cosine transformation (DCT). The proposed method leverages the thermal and structural information provided by IR imaging and the fine-grained details offered by VIS imaging to create composite images that are superior in quality and informativeness. Through a meticulous fusion process involving PDE-guided fusion, DCT component selection, and weighted combination, the methodology aims to strike a balance that optimally preserves essential features while minimizing artifacts. Rigorous objective and subjective evaluations are conducted to validate the effectiveness of the approach. This research contributes to the ongoing advancement of multimodal image fusion, addressing applications in fields such as medical imaging, surveillance, and remote sensing, where the combination of IR and VIS data is of paramount importance.
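
A minimal sketch of DCT-domain component selection as one ingredient of the method above. The PDE-guided fusion stage is omitted, and the block size and the max-magnitude selection rule are illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_block_fuse(ir, vis, block=8):
    """Fuse two grayscale images block-wise by keeping the larger DCT coefficient."""
    ir = ir.astype(np.float32)
    vis = vis.astype(np.float32)
    h, w = ir.shape
    out = np.zeros_like(ir)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            a = dctn(ir[y:y + block, x:x + block], norm="ortho")
            b = dctn(vis[y:y + block, x:x + block], norm="ortho")
            fused = np.where(np.abs(a) >= np.abs(b), a, b)
            out[y:y + block, x:x + block] = idctn(fused, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)
```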

Effectiveness of Using the TIR Band in Landsat 8 Image Classification

  • Lee, Mi Hee;Lee, Soo Bong;Kim, Yongmin;Sa, Jiwon;Eo, Yang Dam
    • 한국측량학회지 (Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography) / Vol. 33, No. 3 / pp. 203-209 / 2015
  • This paper discusses the effectiveness of using Landsat 8 TIR (thermal infrared) band images to improve the accuracy of landuse/landcover classification in urban areas. According to the classification results for the study area using diverse band combinations, the classification accuracy of an image fusion process in which the TIR band is added to the visible and near-infrared bands was improved by 4.0% compared with a band combination that does not include the TIR band. For urban landuse/landcover classes in particular, the producer's accuracy and user's accuracy were improved by 10.2% and 3.8%, respectively. When MLC (maximum likelihood classification), which is commonly applied to remote sensing images, was used, the TIR band images helped achieve better discrimination in landuse/landcover classification.
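
A minimal sketch of pixel-wise maximum likelihood classification (MLC) with and without a thermal band, in the spirit of the comparison above. MLC with Gaussian class models corresponds to quadratic discriminant analysis; the band stacking, train/test split, and variable names here are illustrative, not the paper's Landsat 8 processing chain.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def mlc_accuracy(bands, labels, train_frac=0.5, seed=0):
    """bands: (n_pixels, n_bands) array; labels: (n_pixels,) class ids."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_train = int(train_frac * len(labels))
    train, test = idx[:n_train], idx[n_train:]
    clf = QuadraticDiscriminantAnalysis().fit(bands[train], labels[train])
    return clf.score(bands[test], labels[test])

# Hypothetical usage: compare a visible/NIR-only stack against one with TIR added.
# acc_vnir = mlc_accuracy(np.column_stack([blue, green, red, nir]), labels)
# acc_tir  = mlc_accuracy(np.column_stack([blue, green, red, nir, tir]), labels)
```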