• Title/Summary/Keyword: Visible-infrared image fusion


TSDnet: Three-scale Dense Network for Infrared and Visible Image Fusion (TSDnet: 적외선과 가시광선 이미지 융합을 위한 규모-3 밀도망)

  • Zhang, Yingmei; Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.656-658 / 2022
  • The purpose of infrared and visible image fusion is to integrate images of different modalities, each with different details, into a single result image with rich information, which is convenient for high-level computer vision tasks. Considering that many deep networks work only at a single scale, this paper proposes a novel image fusion method based on a three-scale dense network that preserves the content and key target features of the input images in the fused image. It comprises an encoder, a three-scale block, a fusion strategy and a decoder, which together can capture remarkably rich background details and prominent target details. The encoder extracts three-scale dense features from the source images for the initial image fusion. Then, a fusion strategy based on the l1-norm is applied to fuse the features of different scales. Finally, the fused image is reconstructed by the decoder network. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance in subjective observation.
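The abstract does not spell out the l1-norm fusion rule, but the strategy commonly used in dense-network fusion models weights each source's features by its per-pixel l1-norm activity across channels. A minimal NumPy sketch of that assumed formulation (function and array names are illustrative, not from the paper):

```python
import numpy as np

def l1_norm_fusion(feat_a, feat_b):
    """Fuse two feature maps of shape (C, H, W) with an l1-norm activity rule.

    The per-pixel activity of each source is the l1-norm across channels;
    the fused feature is the activity-weighted average of the two inputs.
    """
    # Per-pixel activity maps (H, W): l1-norm over the channel axis.
    act_a = np.abs(feat_a).sum(axis=0)
    act_b = np.abs(feat_b).sum(axis=0)
    # Soft weights in [0, 1] that sum to one at every pixel.
    eps = 1e-12
    w_a = act_a / (act_a + act_b + eps)
    w_b = 1.0 - w_a
    # Broadcast the (H, W) weights over the channel axis.
    return w_a[None] * feat_a + w_b[None] * feat_b

# Toy example: two 4-channel 8x8 feature maps.
rng = np.random.default_rng(0)
fa = rng.standard_normal((4, 8, 8))
fb = rng.standard_normal((4, 8, 8))
fused = l1_norm_fusion(fa, fb)
assert fused.shape == (4, 8, 8)
```

In a three-scale network, this rule would be applied independently at each scale before decoding.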

Infrared and visible image fusion based on Laplacian pyramid and generative adversarial network

  • Wang, Juan; Ke, Cong; Wu, Minghu; Liu, Min; Zeng, Chunyan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1761-1777 / 2021
  • An image with infrared features and visible details is obtained by processing infrared and visible images. In this paper, a fusion method based on a Laplacian pyramid and a generative adversarial network, termed Laplacian-GAN, is proposed to obtain high-quality fused images. Firstly, base and detail layers are obtained by decomposing the source images. Secondly, a Laplacian pyramid-based method fuses the base layers to retain more of their information. Thirdly, the detail layers are fused by a generative adversarial network, which avoids manually designing complicated fusion rules. Finally, the fused base layer and fused detail layer are combined to reconstruct the fused image. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance in both visual quality and objective assessment. In terms of visual observation, the fused image obtained by the Laplacian-GAN algorithm is clearer in detail. At the same time, on the six metrics MI, AG, EI, MS_SSIM, Qabf and SCD, the proposed algorithm improves by 0.62%, 7.10%, 14.53%, 12.18%, 34.33% and 12.23%, respectively, over the best of the other three algorithms.
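As an illustration of the Laplacian pyramid side of the pipeline (the GAN detail branch is omitted), here is a minimal sketch of pyramid decomposition, band-wise fusion and reconstruction; the decimation and expansion operators are simplified stand-ins, not the paper's exact kernels:

```python
import numpy as np

def downsample(img):
    """2x decimation (a stand-in for Gaussian blur + subsample)."""
    return img[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour 2x expansion, trimmed back to `shape`."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Decompose `img` into band-pass residuals plus a low-pass top level."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))  # band-pass residual
        cur = down
    pyr.append(cur)                                  # low-pass residue
    return pyr

def fuse_pyramids(pa, pb):
    """Max-absolute selection for detail bands, averaging for the top level."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    """Collapse a Laplacian pyramid back to a full-resolution image."""
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = upsample(cur, band.shape) + band
    return cur
```

Because each residual is defined against the same expansion operator, decomposing and reconstructing without fusion reproduces the input exactly.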

Infrared Image Sharpness Enhancement Method Using Super-resolution Based on Adaptive Dynamic Range Coding and Fusion with Visible Image (적외선 영상 선명도 개선을 위한 ADRC 기반 초고해상도 기법 및 가시광 영상과의 융합 기법)

  • Kim, Yong Jun; Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.11 / pp.73-81 / 2016
  • In general, infrared images have less sharpness and fewer image details than visible images, so prior image up-scaling methods are not effective on infrared images. To solve this problem, this paper proposes an algorithm that first up-scales an input infrared (IR) image using an adaptive dynamic range coding (ADRC)-based super-resolution (SR) method, and then fuses the result with the corresponding visible image. The proposed algorithm consists of an up-scaling phase and a fusion phase. First, the input IR image is up-scaled by the proposed ADRC-based SR algorithm. In the dictionary learning stage of this up-scaling phase, so-called 'pre-emphasis' processing is applied to the training-purpose high-resolution images, hence better sharpness is achieved. In the following fusion phase, high-frequency information is extracted from the visible image corresponding to the IR image and adaptively weighted according to the complexity of the IR image. Finally, the output image is obtained by adding the processed high-frequency information to the up-scaled IR image. Experimental results show that the proposed algorithm provides better results than the state-of-the-art SR method, i.e., the anchored neighborhood regression (A+) algorithm. For example, in terms of just noticeable blur (JNB), the proposed algorithm scores 0.2184 higher than A+. The proposed algorithm also outperforms previous works in terms of subjective visual quality.
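The fusion phase described above, extracting visible high-frequency detail and weighting it by the local complexity of the IR image, could be sketched as follows. A box filter and local variance serve as assumed stand-ins for the paper's unspecified filters, and `alpha` is an illustrative global gain:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_hf(ir_up, visible, alpha=0.5):
    """Add visible high-frequency detail to the up-scaled IR image,
    weighted by the local complexity (variance) of the IR image."""
    hf = visible - box_blur(visible)          # high-frequency detail of visible
    mean = box_blur(ir_up)
    var = box_blur(ir_up ** 2) - mean ** 2    # local variance of IR
    w = var / (var.max() + 1e-12)             # normalise complexity to [0, 1]
    return ir_up + alpha * w * hf
```

Flat IR regions (low variance) receive little injected detail, while textured regions receive more, which matches the adaptive-weighting idea in the abstract.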

Design and Analysis of Coaxial Optical System for Improvement of Image Fusion of Visible and Far-infrared Dual Cameras (가시광선과 원적외선 듀얼카메라의 영상 정합도 향상을 위한 동축광학계 설계 및 분석)

  • Kyu Lee Kang; Young Il Kim; Byeong Soo Son; Jin Yeong Park
    • Korean Journal of Optics and Photonics / v.34 no.3 / pp.106-116 / 2023
  • In this paper, we designed a coaxial dual camera incorporating two optical systems, one for visible rays and the other for far-infrared rays, with the aim of capturing images in both wavelength ranges. The far-infrared system, which uses an uncooled detector, has a sensor array of 640×480 pixels; the visible-ray system has 1,945×1,097 pixels. The coaxial dual optical system was designed using a hot-mirror beam splitter to minimize heat transfer caused by infrared rays in the visible-ray optical system. The optimization process revealed that the final version of the dual camera system reached more than 90% fusion performance between the two separate images from the dual systems. Multiple rigorous testing processes confirmed that the coaxial dual camera we designed demonstrates meaningful design efficiency and an improved degree of image conformity compared to existing dual cameras.

Development of a Sensor Fusion System for Visible Ray and Infrared (적외선 및 가시광선의 센서 융합시스템의 개발)

  • Kim, Dae-Won; Kim, Mo-Gon; Nam, Dong-Hwan; Jung, Soon-Ki; Lim, Soon-Jae
    • Journal of Sensor Science and Technology / v.9 no.1 / pp.44-50 / 2000
  • Every object emits some energy from its surface. This emitted energy forms a surface heat distribution that we can capture using an infrared thermal imager. The infrared thermal image may include valuable information regarding subsurface anomalies of the object. Since a thermal image reflects both surface clutter and subsurface anomalies, it is difficult to extract information on subsurface anomalies from thermal images taken at a single wavelength alone. Thus, we use visible-wavelength images of the object surface to remove exterior clutter. In this paper, we therefore visualize the infrared image by overlaying it with a visible-wavelength image. First, we make an interpolated image from two ordinary images taken from both sides of the infrared sensor. Next, we overlay this intermediate image with an infrared image taken from the infrared camera. The technique suggested in this paper can be utilized for analyzing infrared images in non-destructive inspection for disaster prevention and safety.
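The final overlay step can be illustrated as a simple alpha blend of the normalised thermal image over the visible-wavelength image; the blending factor `alpha` is an assumed parameter, not taken from the paper:

```python
import numpy as np

def overlay_thermal(visible, thermal, alpha=0.4):
    """Alpha-blend a normalised thermal image over the visible image so
    that surface clutter (visible) and heat anomalies (IR) are both seen.

    Both inputs are grayscale arrays; `visible` is assumed to lie in [0, 1].
    """
    # Normalise the thermal image to [0, 1] so the two sources are comparable.
    t = (thermal - thermal.min()) / (np.ptp(thermal) + 1e-12)
    return (1.0 - alpha) * visible + alpha * t
```

In practice the overlay is usually rendered with a false-colour map for the thermal channel, but the blending arithmetic is the same.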


Face Recognition by Fusion of Visible Face Images and Estimated Thermal Infrared Texture (얼굴영상과 예측한 열 적외선 텍스처의 융합에 의한 얼굴 인식)

  • Kong, Seong G.
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.437-443 / 2015
  • This paper presents face recognition based on the fusion of a visible image and thermal infrared (IR) texture estimated from the face image in the visible spectrum. The proposed face recognition scheme uses a multi-layer neural network to estimate thermal texture from visible imagery. In the training process, a set of visible and thermal IR image pairs is used to determine the parameters of the neural network, which learns a complex mapping from a visible image to its thermal texture in the low-dimensional feature space. The trained neural network estimates the principal components of the thermal texture corresponding to the input visible image. Extensive face recognition experiments were performed using two popular face recognition algorithms, Eigenfaces and Fisherfaces, on the NIST/Equinox database for benchmarking. The fusion of the visible image and thermal IR texture demonstrated improved face recognition accuracy over conventional face recognition in terms of receiver operating characteristics (ROC) as well as first-match performance.

Image Fusion using RGB and Near Infrared Image (컬러 영상과 근적외선 영상을 이용한 영상 융합)

  • Kil, Taeho; Cho, Nam Ik
    • Journal of Broadcast Engineering / v.21 no.4 / pp.515-524 / 2016
  • Infrared (IR) wavelengths are outside the visible range and thus usually cut by hot filters in general commercial cameras. However, information from the near-IR (NIR) range is known to improve the overall visibility of a scene in many cases. For example, when there is fog or haze in the scene, the NIR image has clearer visibility than the visible image because of its stronger penetration property. In this paper, we propose an algorithm for fusing RGB and NIR images to obtain enhanced images of outdoor scenes. First, we construct a weight map by comparing the contrast of the RGB and NIR images, and then fuse the two images based on the weight map. Experimental results show that the proposed method is effective in enhancing the visible image and removing haze.
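A contrast-based weight map of the kind described above might be sketched as follows, using the local standard deviation as an assumed contrast measure and a luminance swap as the fusion step (the paper's exact weight formula is not given in the abstract):

```python
import numpy as np

def local_contrast(img, k=5):
    """Local standard deviation over k x k windows as a contrast measure."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return windows.std(axis=(-2, -1))

def fuse_rgb_nir(rgb, nir):
    """Replace the luminance of `rgb` (H, W, 3 in [0, 1]) with a
    contrast-weighted blend of luminance and the NIR image (H, W);
    chrominance offsets are carried over unchanged."""
    y = rgb.mean(axis=2)                       # crude luminance
    c_rgb, c_nir = local_contrast(y), local_contrast(nir)
    w = c_nir / (c_rgb + c_nir + 1e-12)        # favour the sharper source
    y_fused = (1.0 - w) * y + w * nir
    # Re-apply the fused luminance while preserving per-channel offsets.
    return np.clip(rgb + (y_fused - y)[..., None], 0.0, 1.0)
```

In hazy regions the visible luminance has low local contrast while the NIR image stays sharp, so the weight map automatically leans on the NIR source there.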

Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera (CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템)

  • Kim, Seung-Hun; Jung, Il-Kyun; Park, Chang-Woo; Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-ray system and an infrared-ray system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from two different images, based on a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical-flow, skin-color and Haar models. The pose of the human body is also estimated from the body detection result in the IR image using the PCA algorithm along with the AdaBoost algorithm. Then, the results from each detection algorithm are fused to extract the best detection result. To verify the heterogeneous sensor fusion system, several experiments were conducted in various environments. The experimental results indicate that the system has good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems; it also includes surveillance and military systems.

Robust Face Recognition Against Illumination Change Using Visible and Infrared Images (가시광선 영상과 적외선 영상의 융합을 이용한 조명변화에 강인한 얼굴 인식)

  • Kim, Sa-Mun; Lee, Dea-Jong; Song, Chang-Kyu; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.4 / pp.343-348 / 2014
  • A face recognition system has the advantage of automatically recognizing a person without causing repulsion during the detection process. However, face recognition shows lower performance under illumination variation than other biometric systems based on fingerprints or the iris. Therefore, this paper proposes a face recognition method robust against illumination variation, using a selective fusion technique that combines visible and infrared faces based on fuzzy linear discriminant analysis (fuzzy-LDA). In the first step, both the visible image and the infrared image are divided into four bands using the wavelet transform. In the second step, the Euclidean distance is calculated for each subband. In the third step, the recognition rate of each subband is determined using the Euclidean distances calculated in the second step. Weights are then determined by considering the recognition rate of each band. Finally, fusion-based face recognition is performed and robust recognition results are obtained.
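The final weighted fusion step, combining per-modality matching distances with weights derived from each band's recognition rate, reduces to a convex combination. A minimal sketch for two modalities (the accuracy-to-weight mapping is an assumption, as the abstract does not give the exact formula):

```python
import numpy as np

def fuse_scores(dist_visible, dist_infrared, acc_visible, acc_infrared):
    """Weighted fusion of per-gallery matching distances from two modalities,
    with weights proportional to each modality's measured recognition rate."""
    w_v = acc_visible / (acc_visible + acc_infrared)
    w_i = 1.0 - w_v
    return w_v * np.asarray(dist_visible) + w_i * np.asarray(dist_infrared)

def identify(fused_distances):
    """The recognized identity is the gallery entry with the smallest
    fused distance."""
    return int(np.argmin(fused_distances))
```

In the paper's setting there would be one weight per wavelet subband rather than per modality, but the combination rule is the same shape.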

Automatic Image Registration Based on Extraction of Corresponding-Points for Multi-Sensor Image Fusion (다중센서 영상융합을 위한 대응점 추출에 기반한 자동 영상정합 기법)

  • Choi, Won-Chul; Jung, Jik-Han; Park, Dong-Jo; Choi, Byung-In; Choi, Sung-Nam
    • Journal of the Korea Institute of Military Science and Technology / v.12 no.4 / pp.524-531 / 2009
  • In this paper, we propose an automatic image registration method for multi-sensor image fusion, such as the fusion of visible and infrared images. Registration is achieved by finding corresponding feature points in both input images. In general, global statistical correlation is not guaranteed between multi-sensor images, which makes image registration for multi-sensor images difficult. To cope with this problem, mutual information is adopted to measure the correspondence of features and to select faithful points. An update algorithm for the projective transform is also proposed. Experimental results show that the proposed method provides robust and accurate registration results.
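The mutual-information correspondence measure mentioned above is typically estimated from the joint intensity histogram of two candidate patches; a minimal NumPy sketch (the bin count is an assumed parameter, not from the paper):

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    """Mutual information (in nats) between two equally sized image patches,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of patch_b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Unlike intensity correlation, this score stays high between a visible patch and an IR patch of the same structure even when their gray levels are unrelated, which is why it suits multi-sensor point selection.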