• Title/Summary/Keyword: Visible-infrared image fusion

Implementation of Wavelet Transform based Image Fusion and JPEG2000 using MAD Order Statistics for Multi-Image (MAD 순서통계량을 이용한 웨이블렛 변환기반 다중영상의 영상융합 및 JPEG2000 보드 구현)

  • Lee, Cheeol
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.11 / pp.2636-2644 / 2013
  • This paper proposes a wavelet-based image fusion method for multi-images, consisting of a visible image and an infrared image, using the MAD (Median Absolute Deviation) order statistic. The wavelet coefficients of the detail subbands are compared against a MAD-derived threshold so that, for each coefficient, the higher-quality of the two source images is selected for the fused result. Existing fusion rules can produce a distorted fused image, particularly when the relation between the pixels and indicators of the two images is distorted. To overcome this drawback, the proposed method sets the threshold from the image statistics and excludes the distorted coefficients. The image fusion and compression encoding of the proposed algorithm were implemented in hardware using a Xilinx FPGA and a DSP system. The proposed method is verified by comparing its fused results with those of several other methods on multiple image sets.
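The MAD-based selection of detail-subband coefficients can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: a one-level Haar transform stands in for the authors' wavelet, and the selection rule (take the larger-magnitude coefficient unless it falls outside a k·MAD band, treated as distortion) is an assumed reading of the abstract.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (approx, (horiz, vert, diag))."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def mad_threshold(coeffs, k=3.0):
    """Outlier band derived from the Median Absolute Deviation of the coefficients."""
    med = np.median(coeffs)
    mad = np.median(np.abs(coeffs - med))
    return med, k * mad

def fuse_details(c1, c2, k=3.0):
    """Per-coefficient selection: keep the larger-magnitude coefficient,
    unless it lies beyond the MAD band (treated as distortion), in which
    case fall back to the other image's coefficient."""
    out = np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    fallback = np.where(np.abs(c1) >= np.abs(c2), c2, c1)
    med, thr = mad_threshold(out, k)
    outlier = np.abs(out - med) > thr
    return np.where(outlier, fallback, out)
```

In a full pipeline each detail subband of the two images would be fused this way and the result inverse-transformed.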

Visible and SWIR Satellite Image Fusion Using Multi-Resolution Transform Method Based on Haze-Guided Weight Map (Haze-Guided Weight Map 기반 다중해상도 변환 기법을 활용한 가시광 및 SWIR 위성영상 융합)

  • Taehong Kwak;Yongil Kim
    • Korean Journal of Remote Sensing / v.39 no.3 / pp.283-295 / 2023
  • With the development of sensor and satellite technology, numerous high-resolution and multi-spectral satellite images have become available. Due to their wavelength-dependent reflection, transmission, and scattering characteristics, multi-spectral satellite images can provide complementary information for earth observation. In particular, the short-wave infrared (SWIR) band can penetrate certain types of atmospheric aerosols owing to its reduced Rayleigh scattering, which allows a clearer view and more detailed information to be captured from haze-covered surfaces compared to the visible band. In this study, we propose a multi-resolution transform-based image fusion method to combine visible and SWIR satellite images. The purpose of the fusion method is to generate a single integrated image that incorporates complementary information, such as detailed background information from the visible band and land cover information in the haze region from the SWIR band. For this purpose, this study applied the Laplacian pyramid-based multi-resolution transform method, a representative image decomposition approach for image fusion. Additionally, we modified the multi-resolution fusion method by incorporating a haze-guided weight map, based on the prior knowledge that SWIR bands carry more information in pixels within the haze region. The proposed method was validated using very high-resolution satellite images from WorldView-3, containing multi-spectral visible and SWIR bands. Experimental data including hazy areas with limited visibility caused by wildfire smoke were used to validate the penetration properties of the proposed fusion method. Both quantitative and visual evaluations were conducted using image quality assessment indices. The results show that the bright features from the SWIR bands in the hazy areas were successfully fused into the integrated feature maps without any loss of detailed information from the visible bands.
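Laplacian-pyramid fusion with a per-level weight map can be sketched as below. This is a minimal numpy sketch, assuming 2x2 box-filter down/upsampling in place of Gaussian filtering, a given haze-guided weight map in [0, 1] (1 = trust SWIR), and per-level linear blending; the paper's actual weight-map construction is not reproduced.

```python
import numpy as np

def down(img):
    """Downsample by 2x2 averaging (assumes even dimensions)."""
    return (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4

def up(img):
    """Upsample by pixel replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))   # band-pass detail at this level
        cur = small
    pyr.append(cur)                   # low-pass residual
    return pyr

def down_to(weight, shape):
    """Shrink the weight map until it matches a pyramid level's shape."""
    w = weight
    while w.shape[0] > shape[0]:
        w = down(w)
    return w

def fuse_pyramids(p_vis, p_swir, weight):
    """Blend each level with the haze-guided weight map (1 = trust SWIR)."""
    fused = []
    for lv, ls in zip(p_vis, p_swir):
        w = down_to(weight, lv.shape)
        fused.append((1 - w) * lv + w * ls)
    return fused

def collapse(pyr):
    """Invert the pyramid back into a single image."""
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = up(cur) + band
    return cur
```

With this construction `collapse(laplacian_pyramid(img, n))` reconstructs `img` exactly, so all fusion effects come from the weighted blending.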

Near-infrared face recognition by fusion of E-GV-LBP and FKNN

  • Li, Weisheng;Wang, Lidou
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.208-223 / 2015
  • To solve the problem of face recognition under complex variations and to further improve efficiency, a new near-infrared face recognition algorithm that fuses E-GV-LBP features with an FKNN classifier is proposed. First, the near-infrared face image is transformed by a Gabor wavelet. Then, an LBP coding feature containing space, scale, and direction information is extracted. Finally, an improved FKNN algorithm based on the spatial domain is introduced. The proposed approach makes face recognition faster and more accurate. Experimental results show that the new algorithm improves recognition accuracy and computing time under near-infrared light and other complex variations. In addition, the method can be used for face recognition under visible light as well.
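The LBP coding step can be illustrated directly; this is plain 8-neighbour LBP in numpy, not the paper's E-GV-LBP variant, which applies the coding to Gabor-transformed images so that scale and orientation information is carried in.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour Local Binary Pattern codes for interior pixels:
    each neighbour contributes one bit, set when it is >= the centre."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code
```

Histograms of these codes over image blocks would then form the feature vector fed to the classifier.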

Wavelet Transform based Image Registration using MCDT Method for Multi-Image

  • Lee, Choel;Lee, Jungsuk;Jung, Kyedong;Lee, Jong-Yong
    • International Journal of Internet, Broadcasting and Communication / v.7 no.1 / pp.36-41 / 2015
  • This paper proposes a wavelet-based MCDT (Mask Coefficient Differential and Threshold) method for registering multi-images consisting of a visible image and an infrared image. To ensure the reliability of the registration, the statistical correlation between the two images is increased by extracting their common feature points. Thresholding the wavelet coefficients using derivatives of the detail-subband coefficients is proposed to register distorted images effectively, and this defines an edge map. In particular, to increase the statistical correlation, normalized mutual information was selected as the similarity measure of common features between the two images. The proposed method is verified by comparison with several other multi-image registration methods.
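The normalized mutual information used here as the similarity measure can be computed from a joint histogram. A minimal numpy sketch follows (the MCDT edge-map construction itself is not reproduced); the bin count is an illustrative choice.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from the joint histogram
    of two images; ranges from 1 (independent) to 2 (identical)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

Candidate correspondences with higher NMI over a local window would be kept as matching feature points.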

Multimodal Image Fusion with Human Pose for Illumination-Robust Detection of Human Abnormal Behaviors (조명을 위한 인간 자세와 다중 모드 이미지 융합 - 인간의 이상 행동에 대한 강력한 탐지)

  • Cuong H. Tran;Seong G. Kong
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.637-640 / 2023
  • This paper presents multimodal image fusion with human pose for detecting abnormal human behaviors in low-illumination conditions. Detecting human behaviors in low illumination is challenging due to the limited visibility of the objects of interest in the scene. Multimodal image fusion simultaneously combines visual information in the visible spectrum and thermal radiation information in the long-wave infrared spectrum. We propose an abnormal event detection scheme based on the multimodal fused image and human poses, using keypoints to characterize the action of the human body. Our method assumes that human behaviors are well correlated with body keypoints such as the shoulders, elbows, wrists, and hips. In detail, we extract the keypoint coordinates of human targets from multimodal fused videos. The coordinate values are used as inputs to train a multilayer perceptron network that classifies human behaviors as normal or abnormal. Our experiments demonstrate significant results on a multimodal imaging dataset. The proposed model can capture the complex distribution patterns of both normal and abnormal behaviors.
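The keypoints-to-classifier pipeline can be sketched in numpy. This is a hedged sketch, not the authors' network: the bounding-box normalization and the single-hidden-layer shape are assumptions, and training is omitted; only the feature construction and the forward pass are shown.

```python
import numpy as np

def pose_features(keypoints):
    """Flatten (x, y) keypoint coordinates into one feature vector,
    normalized to the bounding box of the pose."""
    kp = np.asarray(keypoints, dtype=float)
    kp -= kp.min(axis=0)
    span = kp.max(axis=0)
    span[span == 0] = 1.0
    return (kp / span).ravel()

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron: returns P(abnormal) via a sigmoid."""
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    z = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-z))
```

In practice the weights would be fitted on labeled normal/abnormal pose sequences.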

Improvement of Mid-Wave Infrared Image Visibility Using Edge Information of KOMPSAT-3A Panchromatic Image (KOMPSAT-3A 전정색 영상의 윤곽 정보를 이용한 중적외선 영상 시인성 개선)

  • Jinmin Lee;Taeheon Kim;Hanul Kim;Hongtak Lee;Youkyung Han
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1283-1297 / 2023
  • Mid-wave infrared (MWIR) imagery, due to its ability to capture the temperature of land cover and objects, serves as a crucial data source in various fields including environmental monitoring and defense. The KOMPSAT-3A satellite acquires MWIR imagery with high spatial resolution compared to other satellites. However, the limited spatial resolution of MWIR imagery, in comparison to electro-optical (EO) imagery, constrains the optimal utilization of the KOMPSAT-3A data. This study aims to create a highly visible MWIR fusion image by leveraging the edge information from the KOMPSAT-3A panchromatic (PAN) image. Preprocessing is applied to mitigate the relative geometric errors between the PAN and MWIR images. Subsequently, we employ a pre-trained pixel difference network (PiDiNet), a deep learning-based edge extraction technique, to extract the boundaries of objects from the preprocessed PAN images. The MWIR fusion imagery is then generated by emphasizing the brightness values corresponding to the edge information of the PAN image. To evaluate the proposed method, MWIR fusion images were generated at three different sites. As a result, the boundaries of terrain and objects in the MWIR fusion images were emphasized, providing detailed thermal information of the areas of interest. In particular, the MWIR fusion images provided thermal information on objects such as airplanes and ships that are hard to detect in the original MWIR images. This study demonstrates that the proposed method can generate a single image combining visible details from an EO image with thermal information from an MWIR image, which contributes to increasing the usage of MWIR imagery.
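The edge-emphasis step can be sketched as follows. The abstract does not give the exact emphasis rule, so the additive, gain-weighted formulation and the `gain` parameter below are assumptions; the PiDiNet edge response is assumed pre-computed and normalized to [0, 1].

```python
import numpy as np

def emphasize_edges(mwir, edge_map, gain=0.5):
    """Raise MWIR brightness where the PAN-derived edge map responds,
    scaled relative to the image's dynamic range, then clip to 8-bit."""
    fused = mwir.astype(float) + gain * edge_map * mwir.max()
    return np.clip(fused, 0, 255)
```

Stronger `gain` values make object outlines pop at the cost of saturating bright thermal regions.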

Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil;Na, Young Woo;Kwon, Oh Seob;Kim, Se Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.3 / pp.135-152 / 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in the fields of environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions using aerial multi-sensors for quantitative land management, but most of these data are used only for the purpose of the original project. Hyperspectral sensor data, which is mainly used for land cover classification, offers high classification accuracy, but because only the visible and near-infrared wavelengths are acquired, and at low spatial resolution, it is difficult to classify the land cover state accurately. Therefore, research is needed on improving land cover classification accuracy by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multi-sensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion rate, band accumulation, and spectral graph expansion ratio were selected according to the fusion method, and fused data were generated and the land cover classification accuracy computed while incrementally varying the fusion variables. Optimal fusion variables for the hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion variables.
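Two of the three fusion methods can be sketched schematically. The abstract does not give the actual formulas, so both functions below (and the `ratio` parameter) are illustrative assumptions; all inputs are assumed co-registered and arranged as (H, W, bands).

```python
import numpy as np

def accumulate_bands(*stacks):
    """Band accumulation: stack bands from several co-registered sensors
    along the spectral axis."""
    return np.concatenate(stacks, axis=2)

def pixel_ratio_adjust(hyper, pan, ratio=0.5):
    """Pixel ratio adjustment (assumed form): blend each hyperspectral band
    with a higher-resolution intensity image at a given ratio."""
    return (1 - ratio) * hyper + ratio * pan[..., None]
```

A grid search over `ratio` and the band-accumulation choices, scored by classification accuracy, would correspond to the incremental variation of fusion variables described above.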

Matching Points Extraction Between Optical and TIR Images by Using SURF and Local Phase Correlation (SURF와 지역적 위상 상관도를 활용한 광학 및 열적외선 영상 간 정합쌍 추출)

  • Han, You Kyung;Choi, Jae Wan
    • Journal of Korean Society for Geospatial Information Science / v.23 no.1 / pp.81-88 / 2015
  • Various satellite sensors covering the visible, infrared, and thermal wavelength ranges have been launched thanks to improvements in satellite sensor hardware. With the development of satellite sensors spanning various wavelength ranges, the fusion and integration of multisensor images have advanced. Image matching is an essential step for the application of multisensor images. Algorithms such as SIFT and SURF have been proposed to co-register satellite images. However, when the existing algorithms are applied to extract matching points between optical and thermal images, high co-registration accuracy is not guaranteed because these images have different spectral and spatial characteristics. In this paper, the locations of control points in a reference image are extracted by SURF, and the locations of their corresponding pairs are then estimated from local similarity, measured with the phase correlation method, which is based on the Fourier transform. In experiments with simulated, Landsat-8, and ASTER datasets, the proposed algorithm extracted more reliable matching points than the existing SURF-based method.
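Phase correlation itself is standard and can be sketched directly. Note that the paper applies it locally, in windows around SURF control points, whereas this numpy sketch estimates a single global integer shift.

```python
import numpy as np

def phase_correlation(ref, tgt):
    """Estimate the integer (row, col) shift mapping ref onto tgt from the
    peak of the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(tgt)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12      # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # shifts past half the image size wrap around to negative offsets
    shape = np.array(corr.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]
    return tuple(int(s) for s in peak)
```

Because only the phase is kept, the measure is relatively insensitive to the intensity differences between optical and thermal images, which is why it suits this cross-sensor setting.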

High-Frequency Interchange Network for Multispectral Object Detection (다중 스펙트럼 객체 감지를 위한 고주파 교환 네트워크)

  • Park, Seon-Hoo;Yun, Jun-Seok;Yoo, Seok Bong;Han, Seunghwoi
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1121-1129 / 2022
  • Object recognition is carried out using RGB images in various object recognition studies. However, RGB images captured in dark illumination environments, or in environments where the target objects are occluded by other objects, lead to poor object recognition performance. On the other hand, IR images provide strong object recognition performance in these environments because they sense infrared radiation rather than visible illumination. In this paper, we propose an RGB-IR fusion model, the high-frequency interchange network (HINet), which improves object recognition performance by combining only the strengths of RGB-IR image pairs. HINet connects two object detection models through a mutual high-frequency transfer (MHT) module to interchange advantages between the RGB and IR images. MHT converts each RGB-IR image pair into the discrete cosine transform (DCT) spectral domain to extract high-frequency information. The extracted high-frequency information is transmitted to the other network and used to improve object recognition performance. Experimental results show the superiority of the proposed network and demonstrate performance improvements on the multispectral object recognition task.
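The DCT-domain high-frequency split at the heart of MHT can be illustrated in numpy. The orthonormal DCT-II is built explicitly here, and the diagonal `ky + kx < cutoff` rule for separating low from high frequencies is an assumption, not the paper's exact masking.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def high_frequency(img, cutoff):
    """Zero the low-frequency DCT coefficients (index sum below `cutoff`)
    and transform back, keeping only the high-frequency content."""
    n, m = img.shape
    Cn, Cm = dct_matrix(n), dct_matrix(m)
    spec = Cn @ img @ Cm.T                       # forward 2-D DCT
    ky, kx = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    spec[ky + kx < cutoff] = 0.0                 # discard low frequencies
    return Cn.T @ spec @ Cm                      # inverse 2-D DCT
```

The extracted high-frequency map of one modality would then be injected into the other modality's detection branch.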