• Title/Abstract/Keyword: Image fusion enhancement

Search results: 45 (processing time: 0.023 s)

Single Image-based Enhancement Techniques for Underwater Optical Imaging

  • Kim, Do Gyun; Kim, Soo Mee
    • 한국해양공학회지 / Vol. 34, No. 6 / pp.442-453 / 2020
  • Underwater color images suffer from low visibility and color casts caused by light attenuation in water and by floating particles. This study applied single-image enhancement techniques to improve the quality of underwater images and compared their performance on real underwater images taken in Korean waters. Dark channel prior (DCP), gradient transform, image fusion, and generative adversarial networks (GANs), such as CycleGAN and underwater GAN (UGAN), were considered for single-image enhancement. Their performance was evaluated in terms of the underwater image quality measure, underwater color image quality evaluation, gray-world assumption, and blur metric. The DCP saturated the underwater images to a specific greenish or bluish color tone and reduced the brightness of the background signal. The gradient transform method with two transmission maps was sensitive to the light source and highlighted the regions exposed to light. Although image fusion enabled reasonable color correction, object details were lost in the last fusion step. CycleGAN corrected the overall color tone relatively well but generated artifacts in the background. UGAN showed good visual quality and obtained the highest scores on all figures of merit (FOMs) by compensating for color and visibility better than the other single enhancement methods.
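For reference, the dark channel prior compared above can be sketched in a few lines of numpy. This is a generic DCP sketch; the patch size and `omega` are conventional defaults, not values taken from the paper:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel minimum over RGB, followed by a
    local minimum filter over a patch x patch window (naive loop)."""
    h, w, _ = img.shape
    pad = patch // 2
    padded = np.pad(img.min(axis=2), pad, mode="edge")
    dark = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def transmission(img, atmo, omega=0.95, patch=15):
    """Transmission estimate t(x) = 1 - omega * dark(I / A)."""
    return 1.0 - omega * dark_channel(img / atmo, patch)
```

The scene is then recovered as (I - A) / max(t, t0) + A. The greenish or bluish cast reported above plausibly arises because the atmospheric light A is estimated from the brightest dark-channel pixels, which underwater tend to be dominated by the water color.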

X-Ray Image Enhancement Using a Boundary Division Wiener Filter and Wavelet-Based Image Fusion Approach

  • Khan, Sajid Ullah; Chai, Wang Yin; See, Chai Soo; Khan, Amjad
    • Journal of Information Processing Systems / Vol. 12, No. 1 / pp.35-45 / 2016
  • To resolve the problems of Poisson/impulse noise, blurriness, and sharpness in degraded X-ray images, a novel and efficient enhancement algorithm based on X-ray image fusion using a discrete wavelet transform is proposed in this paper. The proposed algorithm consists of two basic steps. First, it applies a boundary-division technique to detect pixels corrupted by Poisson and impulse noise and then uses a Wiener filter to restore those pixels. Second, it applies a sharpening technique to the same degraded X-ray image. This yields two source X-ray images, each of which preserves its own enhancement effect. The details and approximations of these source X-ray images are fused via different fusion rules in the wavelet domain. The experimental results show that the proposed algorithm successfully combines the merits of Wiener filtering and sharpening, achieving significant improvement of degraded X-ray images with respect to Poisson noise, blurriness, and edge details.
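The wavelet-domain fusion step can be illustrated with a single-level Haar transform in pure numpy. The paper fuses subbands "via different fusion rules"; the averaging/max-abs rules below are one common choice, not necessarily the authors':

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar transform: returns (LL, LH, HL, HH)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2
    hi = (x[:, 0::2] - x[:, 1::2]) / 2
    return ((lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2,
            (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def fuse(img_a, img_b):
    """Average the approximation bands; pick each detail coefficient
    with the larger magnitude (max-abs rule)."""
    A, B = haar_dwt2(img_a), haar_dwt2(img_b)
    LL = (A[0] + B[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(A[1:], B[1:])]
    return haar_idwt2(LL, *details)
```

Here `img_a` would be the Wiener-restored image and `img_b` the sharpened one, so the fused result keeps the smoother approximation and the stronger edge details.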

BBWE와 MHMD를 이용한 피라미드 융합 기반의 영상의 대조 개선 기법 (An Image Contrast Enhancement Method based on Pyramid Fusion Using BBWE and MHMD)

  • 이동렬; 김진헌
    • 한국멀티미디어학회논문지 / Vol. 16, No. 11 / pp.1250-1260 / 2013
  • Contrast enhancement based on Laplacian-pyramid image fusion has the advantage of faithfully representing image information, because desirable pixels can be selected from each source image and fused. However, because the information is evaluated on a per-pixel basis, it is vulnerable to image noise. This paper proposes an improved fusion-based contrast enhancement method that suppresses image noise. The proposed technique measures block-based local exposedness and the local homogeneity difference for each source image, generates weight maps from these measures, and combines the images through a Laplacian pyramid. Experiments on various images show that, compared with conventional techniques, the method can produce images from which the noise has been excluded.
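The pyramid-fusion backbone described above can be sketched as follows. The box down/up-sampling is a simplification (a Gaussian kernel is usual), and the weight maps are taken as given inputs; the paper derives them from block-based exposedness and homogeneity measures:

```python
import numpy as np

def down(x):   # 2x2 box downsample
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def up(x):     # nearest-neighbour upsample
    return x.repeat(2, axis=0).repeat(2, axis=1)

def lap_pyramid(x, levels=3):
    """Laplacian pyramid: band-pass residuals plus a coarse base."""
    pyr = []
    for _ in range(levels - 1):
        small = down(x)
        pyr.append(x - up(small))
        x = small
    pyr.append(x)
    return pyr

def fuse_pyramids(srcs, weights, levels=3):
    """Blend the sources' Laplacian pyramids with pyramids of the
    normalised weight maps, then collapse back to an image."""
    wsum = sum(weights)
    weights = [w / (wsum + 1e-12) for w in weights]
    lps = [lap_pyramid(s, levels) for s in srcs]
    gps = []
    for w in weights:
        gp = [w]
        for _ in range(levels - 1):
            gp.append(down(gp[-1]))
        gps.append(gp)
    fused = [sum(lp[l] * gp[l] for lp, gp in zip(lps, gps))
             for l in range(levels)]
    out = fused[-1]
    for lvl in fused[-2::-1]:
        out = up(out) + lvl
    return out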

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong; Li, Jinjiang; Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast because light propagating underwater is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. The network also introduces a channel attention mechanism so that it pays more attention to channels containing important information. Detail information is protected by real-time superposition with the feature information. Experimental results demonstrate that the proposed method produces results with correct colors and complete details, and outperforms existing methods on quantitative metrics.
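The channel-attention idea can be illustrated with a squeeze-and-excitation style gate in numpy: pool each channel to a scalar, pass through a small MLP, and rescale the channels by the resulting sigmoid gate. The layer sizes and random weights here are placeholders, not the paper's network:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel attention on a (C, H, W) feature map:
    global average pool -> 2-layer MLP -> sigmoid gate per channel."""
    squeeze = feat.mean(axis=(1, 2))                # (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU, (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid, (C,)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 4                             # r = channel reduction
feat = rng.standard_normal((C, H, W))
w1 = 0.1 * rng.standard_normal((C // r, C))         # placeholder weights
w2 = 0.1 * rng.standard_normal((C, C // r))
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), informative channels are passed through nearly unchanged while uninformative ones are attenuated.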

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi; Seo, Jung-Min; Kim, Soo Mee
    • 한국해양공학회지 / Vol. 36, No. 1 / pp.32-40 / 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation according to the wavelength of light and reflection by very small floating objects cause low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images using the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image-processing technique of Image Fusion. For a quantitative performance comparison, UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation, were calculated. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality qualitatively and quantitatively in various underwater environments, whereas the performance of FUnIE-GAN varied with the underwater environment. Image Fusion showed good performance in terms of color correction and sharpness enhancement. These methods are expected to be useful for monitoring underwater work and for the autonomous operation of unmanned vehicles by more accurately improving the visibility of underwater situations.
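UCIQE, one of the two metrics used above, is a weighted sum of the chroma standard deviation, a luminance-contrast term, and the mean saturation (the 0.4680/0.2745/0.2576 weights come from the original metric definition). The sketch below uses a simplified Lab conversion without sRGB gamma linearisation, so its scores are indicative only and will not exactly match published values:

```python
import numpy as np

C1, C2, C3 = 0.4680, 0.2745, 0.2576   # published UCIQE weights

def uciqe(rgb):
    """Approximate UCIQE of an HxWx3 uint8 image."""
    rgb = rgb.astype(float) / 255.0
    # sRGB -> XYZ (D65), gamma step omitted for brevity
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ m.T
    wn = np.array([0.9505, 1.0, 1.089])             # D65 white point
    f = np.cbrt(np.clip(xyz / wn, 1e-6, None))
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    sigma_c = np.hypot(a, b).std()                  # chroma spread
    lq = np.quantile(L, [0.01, 0.99])
    con_l = (lq[1] - lq[0]) / 100.0                 # luminance contrast
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    sat = (mx - mn) / np.maximum(mx, 1e-6)          # HSV-style saturation
    return C1 * sigma_c + C2 * con_l + C3 * sat.mean()
```

A flat gray image scores near zero on all three terms, while a colorful, high-contrast image scores higher, which is the ordering the metric is meant to capture.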

캐스케이드 융합 기반 다중 스케일 열화상 향상 기법 (Cascade Fusion-Based Multi-Scale Enhancement of Thermal Image)

  • 이경재
    • 한국전자통신학회논문지 / Vol. 19, No. 1 / pp.301-307 / 2024
  • This study proposes a new cascade fusion architecture for enhancing thermal images under various scale conditions. Methods designed for a specific scale are limited in processing thermal images at multiple scales. To overcome this, this paper presents a unified framework based on a cascade feature-fusion technique that exploits multi-scale representations. By sequentially fusing confidence maps from different scales, scale-agnostic learning becomes possible. The proposed architecture consists of convolutional neural networks trained end-to-end to strengthen inter-scale dependencies. Experimental results confirm that the proposed method outperforms existing multi-scale thermal image enhancement methods. Performance analysis on the experimental datasets also shows consistent improvements in image quality metrics, demonstrating that the cascade fusion design enables robust generalization across scales and contributes to more efficient cross-scale representation learning.
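The sequential, confidence-weighted fusion across scales can be sketched generically. This is not the paper's CNN, only the coarse-to-fine blending rule it describes: the running estimate is upsampled and blended with each finer-scale prediction according to that scale's confidence map:

```python
import numpy as np

def upsample(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def cascade_fuse(preds, confs):
    """Cascade fusion of per-scale predictions, coarse to fine.
    preds[i] is twice the resolution of preds[i-1]; confs[i] in [0, 1]
    weights the current scale against the upsampled running estimate.
    The coarsest confidence map (confs[0]) is unused."""
    fused = preds[0]
    for pred, conf in zip(preds[1:], confs[1:]):
        fused = conf * pred + (1.0 - conf) * upsample(fused)
    return fused
```

With full confidence at the finest scale the coarse estimate is ignored; with zero confidence the result falls back to the upsampled coarse prediction, so the fusion degrades gracefully when a scale is unreliable.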

Single Image Enhancement Using Inter-channel Correlation

  • Kim, Jin; Jeong, Soowoong; Kim, Yong-Ho; Lee, Sangkeun
    • IEIE Transactions on Smart Processing and Computing / Vol. 2, No. 3 / pp.130-139 / 2013
  • This paper proposes a new approach for enhancing digital images based on red-channel information, which has characteristics most analogous to invisible infrared rays. Specifically, the red channel in RGB space is used to analyze the image content and improve the visual quality of the input images, but this can cause unexpected problems, such as over-enhancement of reddish input images. To resolve this problem, inter-channel correlations between the color channels were derived, and weighting parameters for visually pleasant image fusion were estimated. Applying these parameters yielded significant brightness improvement in both the dark and bright regions. Furthermore, simple contrast and color corrections were used to maintain the original contrast level and color tone. The main advantages of the proposed algorithm are that 1) it can improve a given image considerably using a simple inter-channel correlation, 2) it can obtain an effect similar to using an extra infrared image, and 3) it is faster than the other compared algorithms and free of artifacts, including halo effects. The experimental results showed that the proposed approach could produce more natural images than existing enhancement algorithms. Therefore, the proposed scheme can be a useful tool for improving image quality in consumer imaging devices, such as compact cameras.
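The inter-channel correlation step can be illustrated as below. The mapping from correlations to fusion weights is a hypothetical placeholder (the abstract does not specify the estimator); the idea it demonstrates is simply that channels correlating strongly with the infrared-like red channel should contribute more to the fusion, which counteracts over-enhancement of reddish inputs:

```python
import numpy as np

def channel_corr(a, b):
    """Pearson correlation between two flattened image channels."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    a -= a.mean(); b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def fusion_weights(rgb):
    """Illustrative weighting: red gets full weight, green/blue get
    weight proportional to their (clipped) correlation with red."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    corr = np.array([1.0, channel_corr(r, g), channel_corr(r, b)])
    w = np.clip(corr, 0.0, None)
    return w / w.sum()
```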


An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin; Jang, Chulhee; Jin, Sanghun; Hong, Yonghee
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 12 / pp.69-77 / 2017
  • This paper presents a novel framework for multi-scale image fusion. The multi-scale Kalman smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution image fusion scheme by exploiting the Markov property. In general, this approach delivers outstanding fusion performance in terms of accuracy and efficiency; however, quad-tree-based methods are often of limited use in certain applications because their stair-like covariance structure produces unrealistic blocky artifacts in the fusion result wherever finest-scale data are void or missing. To mitigate this structural artifact, a new multi-scale fusion framework is proposed in this paper. By applying a Super Resolution (SR) technique within the MKS algorithm, a finely resolved measurement is generated and blended through the tree structure, so that missing detail in the data-void regions of the fine-scale image is properly inferred and the blocky artifacts are successfully suppressed in the fusion result. Simulation results show that the proposed method provides significantly improved fusion results over the conventional MKS algorithm, in terms of both root mean square error (RMSE) and visual quality.
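A toy version of the quad-tree downward sweep may clarify where the blocky artifacts come from: each child inherits its parent's estimate (one value spread over a 2x2 block) and is only corrected where data exist at its own scale. This omits the upward filtering pass of a full MKS and uses scalar variances, so it is a caricature of the structure, not the paper's algorithm:

```python
import numpy as np

def mks_downward(prior_mean, prior_var, obs, obs_var, q, levels):
    """Toy downward sweep on a quad-tree: children inherit the parent
    estimate plus process noise q, then apply a Kalman update where a
    measurement exists. obs maps level index -> array with NaN gaps."""
    mean = np.full((1, 1), float(prior_mean))
    var = np.full((1, 1), float(prior_var))
    for lvl in range(levels):
        mean = mean.repeat(2, axis=0).repeat(2, axis=1)
        var = var.repeat(2, axis=0).repeat(2, axis=1) + q
        y = obs.get(lvl)
        if y is not None:
            gain = var / (var + obs_var)
            valid = ~np.isnan(y)
            mean = np.where(valid, mean + gain * (y - mean), mean)
            var = np.where(valid, (1.0 - gain) * var, var)
    return mean, var
```

Where the finest-scale measurement is missing, the output is just the replicated parent estimate, which is exactly the stair-like block the SR-generated measurement is meant to fill in.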

빠른 IHS 기법을 이용한 IKONOS 영상융합 (IKONOS Image Fusion Using a Fast Intensity-Hue-Saturation Fusion Technique)

  • 윤공현
    • 대한공간정보학회지 / Vol. 14, No. 1 / pp.21-27 / 2006
  • Among the many image fusion methods, the IHS technique has the advantage of fusing large volumes of data quickly. For IKONOS imagery, the IHS technique yields improved spatial resolution but introduces spectral distortion: comparing the fused multispectral image with the original multispectral image reveals distortion of the spectral information. To address this problem, this study proposes a fast IHS technique with spectral adjustment. Experimental results show that the proposed method outperforms the classical IHS fusion technique in terms of both speed and image quality.
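The speed of fast IHS fusion comes from skipping the explicit RGB-to-IHS round trip: since fusion only replaces the intensity component, each band can be updated directly with the pan-minus-intensity difference. A minimal sketch of the plain fast IHS step follows (nearest-neighbour resampling stands in for a proper interpolator, and the paper's spectral adjustment is not included):

```python
import numpy as np

def upsample_to(ms, shape):
    """Nearest-neighbour resampling of HxWxB multispectral bands to the
    panchromatic grid (real pipelines would use bicubic)."""
    ry, rx = shape[0] // ms.shape[0], shape[1] // ms.shape[1]
    return ms.repeat(ry, axis=0).repeat(rx, axis=1)

def fast_ihs_fuse(ms, pan):
    """Fast IHS fusion: F_k = M_k + (Pan - I), with I the band mean.
    Equivalent to substituting Pan for I in IHS space."""
    ms_up = upsample_to(ms, pan.shape)
    intensity = ms_up.mean(axis=2)
    return ms_up + (pan - intensity)[..., None]
```

By construction the fused bands' mean equals the pan band, which is why the spatial detail is injected; the spectral distortion mentioned above arises because the sensor's pan response is not actually the simple band mean, motivating the spectral adjustment.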


Dual Exposure Fusion with Entropy-based Residual Filtering

  • Heo, Yong Seok; Lee, Soochahn; Jung, Ho Yub
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 5 / pp.2555-2575 / 2017
  • This paper presents a dual-exposure fusion method for image enhancement. Images taken with a short exposure time usually contain sharp structure, but they are dark and prone to noise contamination. In contrast, long-exposure images are bright and noise-free but usually suffer from blurring artifacts. Thus, we fuse the two exposures to generate an enhanced image that is well exposed, noise-free, and blur-free. To this end, we present a new scale-space patch-match method that finds correspondences between the short and long exposures so that the proper color components can be combined within the proposed dual non-local (DNL) means framework. We also present a residual filtering method that eliminates the structure component from the estimated noise image in order to obtain a sharper, further enhanced image; here, the entropy is used to determine the proper size of the filtering window. Experimental results show that our method generates ghost-free, noise-free, and blur-free enhanced images from short- and long-exposure pairs for various dynamic scenes.
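The entropy measure used to size the filtering window can be illustrated per patch: a residual patch that is pure noise has a spread-out intensity histogram (high entropy), while a flat or structured residual concentrates its histogram. The window-selection policy itself is the paper's; this only shows the measure:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of a patch's intensity histogram,
    for intensities assumed to lie in [0, 1)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A constant patch scores 0 bits; a patch spanning the full intensity range approaches log2(bins) bits, so thresholding this value separates structure-bearing residual windows from flat ones.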