• Title/Summary/Keyword: Image fusion enhancement

Search Results: 45

Single Image-based Enhancement Techniques for Underwater Optical Imaging

  • Kim, Do Gyun;Kim, Soo Mee
    • Journal of Ocean Engineering and Technology, v.34 no.6, pp.442-453, 2020
  • Underwater color images suffer from low visibility and color cast caused by light attenuation in water and by floating particles. This study applied single-image enhancement techniques to improve the quality of underwater images and compared their performance on real underwater images taken in Korean waters. Dark channel prior (DCP), gradient transform, image fusion, and generative adversarial networks (GANs) such as CycleGAN and underwater GAN (UGAN) were considered for single-image enhancement. Their performance was evaluated in terms of the underwater image quality measure, underwater color image quality evaluation, gray-world assumption, and a blur metric. DCP saturated the underwater images toward a particular greenish or bluish color tone and reduced the brightness of the background signal. The gradient transform method with two transmission maps was sensitive to the light source and highlighted the regions exposed to light. Although image fusion enabled reasonable color correction, object details were lost in the last fusion step. CycleGAN corrected the overall color tone relatively well but generated artifacts in the background. UGAN showed good visual quality and obtained the highest scores on all figures of merit (FOMs), compensating for color and visibility better than the other single-image enhancement methods.
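
For reference, a minimal dark channel prior (DCP) sketch in the spirit of the classical baseline above is shown below; the patch size, atmospheric-light estimate, and transmission floor are illustrative assumptions, not the parameters evaluated in the paper.

```python
import numpy as np
import cv2

def dcp_enhance(img_bgr, patch=15, omega=0.95, t0=0.1):
    """Rough DCP-style restoration sketch (assumed parameters)."""
    img = img_bgr.astype(np.float32) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    # Dark channel: per-pixel channel minimum followed by a local minimum filter.
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light: mean color of the brightest ~0.1% dark-channel pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip(((img - A) / t + A) * 255.0, 0, 255).astype(np.uint8)
```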

X-Ray Image Enhancement Using a Boundary Division Wiener Filter and Wavelet-Based Image Fusion Approach

  • Khan, Sajid Ullah;Chai, Wang Yin;See, Chai Soo;Khan, Amjad
    • Journal of Information Processing Systems, v.12 no.1, pp.35-45, 2016
  • To resolve the problems of Poisson/impulse noise, blurriness, and lack of sharpness in degraded X-ray images, a novel and efficient enhancement algorithm based on X-ray image fusion with the discrete wavelet transform is proposed in this paper. The proposed algorithm consists of two main steps. First, it uses a boundary-division technique to detect pixels corrupted by Poisson and impulse noise and then applies a Wiener filter to restore those pixels. Second, it applies a sharpening technique to the same degraded X-ray image. This yields two source X-ray images, each preserving its own enhancement effect. The detail and approximation subbands of these source X-ray images are fused with different fusion rules in the wavelet domain. The experimental results show that the proposed algorithm successfully combines the merits of Wiener filtering and sharpening and significantly enhances degraded X-ray images affected by Poisson noise and blurriness while preserving edge details.
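
As a rough illustration of the wavelet-domain fusion step described above, the sketch below fuses a denoised copy and a sharpened copy of the same image; the wavelet, decomposition depth, and fusion rules (averaged approximation, max-magnitude details) are assumptions rather than the paper's exact choices.

```python
import numpy as np
import cv2
import pywt
from scipy.signal import wiener

def fuse_wavelet(im1, im2, wavelet="db2", levels=2):
    """Fuse two images in the wavelet domain (assumed fusion rules)."""
    c1 = pywt.wavedec2(im1.astype(np.float32), wavelet, level=levels)
    c2 = pywt.wavedec2(im2.astype(np.float32), wavelet, level=levels)
    fused = [(c1[0] + c2[0]) / 2.0]                      # average the approximations
    for d1, d2 in zip(c1[1:], c2[1:]):                   # keep larger-magnitude details
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(d1, d2)))
    out = pywt.waverec2(fused, wavelet)
    return np.clip(out[:im1.shape[0], :im1.shape[1]], 0, 255).astype(np.uint8)

def enhance_xray(xray):
    """Build the two hypothetical source images: Wiener-filtered and sharpened."""
    x = xray.astype(np.float32)
    denoised = wiener(x, mysize=5)                        # noise-suppressed source
    sharpened = np.clip(x + 1.5 * (x - cv2.GaussianBlur(x, (0, 0), 2.0)), 0, 255)
    return fuse_wavelet(denoised, sharpened)
```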

An Image Contrast Enhancement Method based on Pyramid Fusion Using BBWE and MHMD (BBWE와 MHMD를 이용한 피라미드 융합 기반의 영상의 대조 개선 기법)

  • Lee, Dong-Yul;Kim, Jin Heon
    • Journal of Korea Multimedia Society, v.16 no.11, pp.1250-1260, 2013
  • Contrast enhancement techniques based on Laplacian pyramid image fusion have the benefit that they can faithfully describe image information, because they combine multiple source images by selecting the desired pixel from each image. However, they also have the problem that the output image may contain noise, because these methods evaluate visual information on a per-pixel basis. In this paper, an improved fusion-based contrast enhancement method that effectively suppresses this noise is proposed. The proposed method combines the source images through Laplacian pyramids generated from weight maps, which are produced by measuring the difference between the block-based local well-exposedness and the local homogeneity of each source image. Tests on various images showed that the proposed method produces less noisy results than conventional techniques.
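
A generic Laplacian-pyramid fusion sketch is given below to make the blending step concrete; the caller supplies per-image weight maps, standing in for the paper's block-based well-exposedness (BBWE) and homogeneity (MHMD) measures, which are not reproduced here.

```python
import numpy as np
import cv2

def _gauss_pyr(img, levels):
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def _laplacian_pyr(img, levels):
    gp = _gauss_pyr(img, levels)
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    return lp + [gp[-1]]                                  # finest-to-coarsest, plus residual

def pyramid_fuse(images, weights, levels=4):
    """Blend each Laplacian level of the inputs with the Gaussian pyramid of its weight map."""
    images = [im.astype(np.float32) for im in images]
    wsum = np.sum(weights, axis=0) + 1e-12
    weights = [(w / wsum).astype(np.float32) for w in weights]
    fused = None
    for im, w in zip(images, weights):
        contrib = [l * g[..., None] for l, g in zip(_laplacian_pyr(im, levels), _gauss_pyr(w, levels))]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]                                        # collapse coarse to fine
    for lvl in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused[lvl].shape[1], fused[lvl].shape[0])) + fused[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)
```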

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.2, pp.544-564, 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast because the propagation of light underwater is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and local binary patterns (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. The network also introduces a channel attention mechanism so that it pays more attention to the channels containing important information. Detail information is protected through real-time superposition with the feature information. Experimental results demonstrate that the proposed method produces results with correct colors and complete details and outperforms existing methods on quantitative metrics.
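
To make the channel attention idea concrete, a generic squeeze-and-excitation style block is sketched below; the layer layout and reduction ratio are assumptions and do not reproduce the paper's network.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel attention sketch: reweight feature channels by a learned
    per-channel gate derived from globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # global average per channel
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.gate(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                       # emphasize informative channels

# Usage sketch: attend over multiscale feature maps before fusing them.
feats = torch.randn(1, 64, 128, 128)
attended = ChannelAttention(64)(feats)
```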

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • Journal of Ocean Engineering and Technology, v.36 no.1, pp.32-40, 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Wavelength-dependent attenuation of light and reflection by very small floating particles cause low contrast, blurry detail, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation, were calculated. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality both qualitatively and quantitatively in various underwater environments, whereas FUnIE-GAN showed performance differences depending on the underwater environment. Image Fusion performed well in terms of color correction and sharpness enhancement. These methods are expected to support the monitoring of underwater work and the autonomous operation of unmanned vehicles by giving a more accurate view of underwater conditions.
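
As an illustration of one of the figures of merit used above, a rough UCIQE computation is sketched below; the weighting coefficients follow the commonly cited UCIQE definition, but the Lab conversion and normalization details vary across implementations, so this sketch will not necessarily reproduce the scores quoted in the abstract.

```python
import numpy as np
import cv2

def uciqe(img_bgr, c1=0.4680, c2=0.2745, c3=0.2576):
    """Weighted sum of chroma standard deviation, luminance contrast, and mean
    saturation in CIELab (approximate sketch)."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = lab[..., 0], lab[..., 1] - 128.0, lab[..., 2] - 128.0
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                                 # colorfulness spread
    lo, hi = np.percentile(L, [1, 99])
    con_l = hi - lo                                        # luminance contrast
    mu_s = np.mean(chroma / (np.sqrt(chroma ** 2 + L ** 2) + 1e-6))  # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```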

Cascade Fusion-Based Multi-Scale Enhancement of Thermal Image (캐스케이드 융합 기반 다중 스케일 열화상 향상 기법)

  • Kyung-Jae Lee
    • The Journal of the Korea institute of electronic communication sciences, v.19 no.1, pp.301-307, 2024
  • This study introduces a novel cascade fusion architecture aimed at enhancing thermal images across various scale conditions. The processing of thermal images at multiple scales has been challenging due to the limitations of existing methods that are designed for specific scales. To overcome these limitations, this paper proposes a unified framework that utilizes cascade feature fusion to effectively learn multi-scale representations. Confidence maps from different image scales are fused in a cascaded manner, enabling scale-invariant learning. The architecture comprises end-to-end trained convolutional neural networks to enhance image quality by reinforcing mutual scale dependencies. Experimental results indicate that the proposed technique outperforms existing methods in multi-scale thermal image enhancement. Performance evaluation results are provided, demonstrating consistent improvements in image quality metrics. The cascade fusion design facilitates robust generalization across scales and efficient learning of cross-scale representations.
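
A highly simplified cascade fusion sketch is given below to illustrate the general idea of merging per-scale outputs with confidence maps; the scales, layer sizes, and weight sharing are assumptions and do not reflect the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeScaleFusion(nn.Module):
    """Enhance a thermal image at several scales and merge the per-scale outputs
    with learned confidence maps, coarse to fine (generic sketch)."""
    def __init__(self, scales=(0.25, 0.5, 1.0), ch=32):
        super().__init__()
        self.scales = scales
        self.enhance = nn.Sequential(                      # shared per-scale enhancer
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2, 3, padding=1),                # ch 0: enhanced image, ch 1: confidence logit
        )

    def forward(self, x):                                  # x: (B, 1, H, W)
        fused, conf_sum = None, None
        for s in self.scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            out = F.interpolate(self.enhance(xs), size=x.shape[-2:], mode="bilinear", align_corners=False)
            enh, conf = out[:, :1], torch.sigmoid(out[:, 1:])
            fused = enh * conf if fused is None else fused + enh * conf
            conf_sum = conf if conf_sum is None else conf_sum + conf
        return fused / (conf_sum + 1e-6)                   # confidence-weighted fusion
```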

Single Image Enhancement Using Inter-channel Correlation

  • Kim, Jin;Jeong, Soowoong;Kim, Yong-Ho;Lee, Sangkeun
    • IEIE Transactions on Smart Processing and Computing, v.2 no.3, pp.130-139, 2013
  • This paper proposes a new approach for enhancing digital images based on red-channel information, which has characteristics most similar to invisible infrared light. Specifically, the red channel in RGB space is used to analyze the image content and improve the visual quality of the input image, but this can cause unexpected problems such as over-enhancement of reddish input images. To resolve this problem, inter-channel correlations between the color channels were derived, and weighting parameters for visually pleasing image fusion were estimated. Applying these parameters yielded a significant brightness improvement in both dark and bright regions. Furthermore, simple contrast and color corrections were used to maintain the original contrast level and color tone. The main advantages of the proposed algorithm are that 1) it can improve a given image considerably with a simple inter-channel correlation, 2) it can obtain an effect similar to using an extra infrared image, and 3) it is faster than the other algorithms compared, without artifacts such as halo effects. The experimental results showed that the proposed approach produces more natural images than existing enhancement algorithms. Therefore, the proposed scheme can be a useful tool for improving image quality in consumer imaging devices such as compact cameras.
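
The sketch below illustrates the general flavor of correlation-weighted red-channel guidance; the specific weighting and fusion rules are illustrative stand-ins, not the paper's derivation.

```python
import numpy as np
import cv2

def red_guided_enhance(img_bgr):
    """Weight the red channel's contribution to a brightness guide by its
    correlation with the other channels (illustrative sketch)."""
    img = img_bgr.astype(np.float32) / 255.0
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    # Inter-channel Pearson correlations of the red channel with green and blue.
    corr_rg = np.corrcoef(r.ravel(), g.ravel())[0, 1]
    corr_rb = np.corrcoef(r.ravel(), b.ravel())[0, 1]
    w = float(np.clip(0.5 * (corr_rg + corr_rb), 0.0, 1.0))  # low for strongly reddish scenes
    luma = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    guide = w * r + (1.0 - w) * luma                       # fused brightness guide
    gain = (guide + 1e-3) / (luma + 1e-3)                  # brighten where the guide is stronger
    return (np.clip(img * gain[..., None], 0.0, 1.0) * 255).astype(np.uint8)
```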


An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin;Jang, Chulhee;Jin, Sanghun;Hong, Yonghee
    • Journal of the Korea Society of Computer and Information, v.22 no.12, pp.69-77, 2017
  • This paper presents a novel framework for multi-scale image fusion. The multi-scale Kalman smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution image fusion scheme by exploiting the Markov property. In general, this approach offers outstanding image fusion performance in terms of accuracy and efficiency; however, the quad-tree based method is often limited in certain applications by its stair-like covariance structure, which produces unrealistic blocky artifacts in the fusion result where the finest-scale data are void or missing. To mitigate this structural artifact, a new multi-scale fusion framework is proposed in this paper. By applying a super-resolution (SR) technique within the MKS algorithm, finely resolved measurements are generated and blended through the tree structure, so that the missing detail in the data-void regions of the fine-scale image is properly inferred and the blocky artifacts are successfully suppressed in the fusion result. Simulation results show that the proposed method yields significantly better fusion results than the conventional MKS algorithm, in terms of both root mean square error (RMSE) and visual quality.
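
The core idea of the proposed mitigation can be summarized in a few lines: where the finest-scale data are missing, substitute an upsampled version of the coarser-scale image before running the multi-scale fusion. In the sketch below, bicubic upsampling stands in for the super-resolution step, and the MKS quad-tree recursion itself is not reproduced.

```python
import numpy as np
import cv2

def fill_fine_scale_voids(fine, coarse, void_mask):
    """Replace void pixels in the fine-scale measurement with an upsampled
    (SR-proxy) coarse-scale estimate so the subsequent fusion sees no holes."""
    h, w = fine.shape[:2]
    sr_proxy = cv2.resize(coarse, (w, h), interpolation=cv2.INTER_CUBIC).astype(np.float32)
    filled = fine.astype(np.float32).copy()
    filled[void_mask] = sr_proxy[void_mask]                # void_mask: boolean (H, W) array
    return filled
```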

IKONOS Image Fusion Using a Fast Intensity-Hue-Saturation Fusion Technique (빠른 IHS 기법을 이용한 IKONOS 영상융합)

  • Yun, Kong-Hyun
    • Journal of Korean Society for Geospatial Information Science, v.14 no.1 (s.35), pp.21-27, 2006
  • Among various image fusion methods, the intensity-hue-saturation (IHS) technique can quickly merge massive volumes of data. For IKONOS imagery, IHS can yield satisfactory spatial enhancement but may introduce spectral distortion, appearing as a change in color between compositions of the resampled and fused multispectral bands. To solve this problem, a fast IHS fusion technique with spectral adjustment is presented. The experimental results demonstrate that the proposed approach provides better performance than the conventional IHS method in both processing speed and image quality.
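
For context, the fast IHS (FIHS) formulation injects the same panchromatic detail into every multispectral band, which is what makes it fast; a minimal sketch is shown below using a plain band average for the intensity, without the paper's spectral-adjustment weights.

```python
import numpy as np
import cv2

def fast_ihs_fusion(ms_bands, pan):
    """FIHS pansharpening sketch: F_k = M_k + (Pan - I), with I the mean of the
    upsampled multispectral bands (spectral adjustment omitted)."""
    pan = pan.astype(np.float32)
    h, w = pan.shape
    ms_up = [cv2.resize(b.astype(np.float32), (w, h), interpolation=cv2.INTER_CUBIC)
             for b in ms_bands]                            # upsample MS bands to Pan resolution
    intensity = np.mean(ms_up, axis=0)
    detail = pan - intensity                               # spatial detail to inject
    return [band + detail for band in ms_up]
```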


Dual Exposure Fusion with Entropy-based Residual Filtering

  • Heo, Yong Seok;Lee, Soochahn;Jung, Ho Yub
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.5, pp.2555-2575, 2017
  • This paper presents a dual exposure fusion method for image enhancement. Images taken with a short exposure time usually contain sharp structure but are dark and prone to noise contamination. In contrast, long-exposure images are bright and noise-free but usually suffer from blurring artifacts. Thus, we fuse the dual exposures to generate an enhanced image that is well exposed, noise-free, and blur-free. To this end, we present a new scale-space patch-match method to find correspondences between the short and long exposures so that the proper color components can be combined within the proposed dual non-local (DNL) means framework. We also present a residual filtering method that eliminates the structure component in the estimated noise image in order to obtain a sharper and further enhanced image; the entropy is used to determine the proper size of the filtering window. Experimental results show that our method generates ghost-free, noise-free, and blur-free enhanced images from short- and long-exposure pairs of various dynamic scenes.
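
To illustrate how entropy can drive the residual-filtering window size, a small sketch is given below; the candidate window sizes and the entropy threshold are assumed values, not those derived in the paper.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of a patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def choose_filter_window(residual, y, x, sizes=(3, 5, 7, 9), threshold=3.0):
    """Grow the window around (y, x) in the residual image and keep the largest
    size whose local entropy stays below the threshold (illustrative rule)."""
    chosen = sizes[0]
    for s in sizes:
        r = s // 2
        patch = residual[max(0, y - r): y + r + 1, max(0, x - r): x + r + 1]
        if patch_entropy(patch) > threshold:
            break
        chosen = s
    return chosen
```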