• Title/Summary/Keyword: Turbulence image restoration


Turbulent-image Restoration Based on a Compound Multibranch Feature Fusion Network

  • Banglian Xu;Yao Fang;Leihong Zhang;Dawei Zhang;Lulu Zheng
    • Current Optics and Photonics / v.7 no.3 / pp.237-247 / 2023
  • In middle- and long-distance imaging systems, atmospheric turbulence caused by temperature, wind speed, humidity, and other factors distorts light waves propagating through the air, degrading image quality through geometric deformation and blurring. In remote sensing, astronomical observation, and traffic monitoring, the information lost to this degradation is costly, so effective restoration of degraded images is very important. To restore images degraded by atmospheric turbulence, an image-restoration method based on an improved compound multibranch feature-fusion network (CMFNetPro) was proposed. Building on the CMFNet network, an efficient channel-attention mechanism replaces the original channel-attention mechanism to improve image quality and network efficiency. In the experiments, two-dimensional random distortion vector fields were used to construct two turbulent datasets with different degrees of distortion from the Google Landmarks Dataset v2 (see the sketch below). The results showed that, compared to the CMFNet, DeblurGAN-v2, and MIMO-UNet models, the proposed CMFNetPro network achieves better restoration quality at lower training cost. Under mixed training, CMFNetPro exceeded CMFNet by 1.2391 dB (weak turbulence) and 0.8602 dB (strong turbulence) in peak signal-to-noise ratio, and by 0.0015 (weak turbulence) and 0.0136 (strong turbulence) in structural similarity, while training 14.4 hours faster. This provides a feasible scheme for deep-learning-based turbulent-image restoration.
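The dataset construction described above warps clean images with two-dimensional random distortion vector fields. A minimal sketch of that idea (not the authors' exact pipeline) is shown below; the `strength`, `smoothness`, and `blur_sigma` parameters are illustrative assumptions.

```python
# Hedged sketch: simulating turbulence-like degradation with a smooth 2-D
# random distortion vector field, roughly following the dataset-construction
# idea in the abstract. All parameter values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def turbulence_degrade(img, strength=4.0, smoothness=8.0, blur_sigma=1.0, seed=0):
    """Warp a grayscale image with a smooth random vector field, then blur it."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Smooth random displacement fields (in pixels), one per axis.
    dy = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    dx = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    dy *= strength / (np.abs(dy).max() + 1e-8)
    dx *= strength / (np.abs(dx).max() + 1e-8)
    # Sample the image at displaced coordinates (geometric deformation).
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    warped = map_coordinates(img, [yy + dy, xx + dx], order=1, mode="reflect")
    # Mild blur to mimic the turbulence-induced loss of sharpness.
    return gaussian_filter(warped, blur_sigma)

# Example: build "weak" and "strong" degraded versions of a synthetic image.
clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
weak = turbulence_degrade(clean, strength=2.0)    # weak-turbulence setting
strong = turbulence_degrade(clean, strength=6.0)  # strong-turbulence setting
```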

Restoration of Ghost Imaging in Atmospheric Turbulence Based on Deep Learning

  • Chenzhe Jiang;Banglian Xu;Leihong Zhang;Dawei Zhang
    • Current Optics and Photonics / v.7 no.6 / pp.655-664 / 2023
  • Ghost imaging (GI) technology is developing rapidly, but it inevitably faces limitations such as the influence of atmospheric turbulence. In this paper, we study a ghost-imaging system in atmospheric turbulence and use a gamma-gamma (GG) model to simulate turbulence in the moderate-to-strong regime. With a compressed-sensing (CS) algorithm and a generative adversarial network (GAN), the image can be restored well. We analyze the performance of correlation imaging, the influence of atmospheric turbulence, and the effect of the restoration algorithm (a minimal correlation-GI simulation appears below). The restored image's peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) increase to 21.9 dB and 0.67, respectively. This shows that deep-learning (DL) methods can restore distorted images well, which is significant for computational imaging in noisy and blurred environments.
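For reference, a minimal second-order-correlation ghost-imaging simulation with gamma-gamma fading applied to the bucket signal is sketched below. It is a simplified stand-in for the paper's CS/GAN pipeline; the alpha/beta values, pattern count, and object are assumptions.

```python
# Hedged sketch: correlation ghost-imaging reconstruction under gamma-gamma
# (GG) turbulence fading of the bucket signal. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
H = W = 32
obj = np.zeros((H, W))
obj[8:24, 12:20] = 1.0                            # simple binary object

n_patterns = 4000
patterns = rng.random((n_patterns, H, W))         # random speckle patterns

# GG fading: product of two unit-mean gamma variates (large/small-scale eddies).
alpha, beta = 4.0, 2.0                            # assumed turbulence parameters
fading = rng.gamma(alpha, 1 / alpha, n_patterns) * rng.gamma(beta, 1 / beta, n_patterns)

# Bucket detector: total transmitted intensity, scaled by the turbulent fading.
bucket = fading * np.einsum("nij,ij->n", patterns, obj)

# Conventional GI estimate: <B * I(x, y)> - <B> * <I(x, y)>.
gi = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)
gi = (gi - gi.min()) / (gi.max() - gi.min() + 1e-12)   # normalize for display
```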

Restoring Turbulent Images Based on an Adaptive Feature-fusion Multi-input-Multi-output Dense U-shaped Network

  • Haiqiang Qian;Leihong Zhang;Dawei Zhang;Kaimin Wang
    • Current Optics and Photonics / v.8 no.3 / pp.215-224 / 2024
  • In medium- and long-range optical imaging systems, atmospheric turbulence blurs and distorts images, causing loss of image information. An image-restoration method based on an adaptive feature-fusion multi-input-multi-output (MIMO) dense U-shaped network (Unet) is proposed to restore a single image degraded by atmospheric turbulence. The model is built on the MIMO-Unet framework and incorporates patch-embedding shallow-convolution modules, which extract shallow image features and feed the multi-input dense encoding modules that follow, improving the model's ability to analyze and extract features effectively. An asymmetric feature-fusion module combines encoded features at different scales, supporting feature reconstruction in the subsequent multi-output decoding modules that restore the turbulence-degraded image (a sketch of such a multi-scale fusion block appears below). Experimental results show that the adaptive feature-fusion MIMO dense Unet outperforms traditional restoration methods, the CMFNet model, and the standard MIMO-Unet model in restored image quality, effectively reducing geometric deformation and blurring.
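A minimal sketch of an asymmetric feature-fusion block in the spirit of MIMO-Unet is shown below: encoder features from several scales are resized to one target scale and merged with a 1x1 convolution. The channel counts and interpolation mode are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: multi-scale asymmetric feature fusion (AFF-style) block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricFeatureFusion(nn.Module):
    def __init__(self, channels=(32, 64, 128), out_channels=64):
        super().__init__()
        # Merge the concatenated multi-scale features with a 1x1 convolution.
        self.fuse = nn.Conv2d(sum(channels), out_channels, kernel_size=1)

    def forward(self, feats, target_size):
        # Resize every encoder feature map to the decoder scale being fed.
        resized = [F.interpolate(f, size=target_size, mode="bilinear",
                                 align_corners=False) for f in feats]
        return self.fuse(torch.cat(resized, dim=1))

# Example: fuse three encoder scales for a 1/2-resolution decoder stage.
f1 = torch.randn(1, 32, 128, 128)   # full-resolution features
f2 = torch.randn(1, 64, 64, 64)     # 1/2-resolution features
f3 = torch.randn(1, 128, 32, 32)    # 1/4-resolution features
aff = AsymmetricFeatureFusion()
fused = aff([f1, f2, f3], target_size=(64, 64))   # -> shape (1, 64, 64, 64)
```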

Image deblurring via adaptive proximal conjugate gradient method

  • Pan, Han;Jing, Zhongliang;Li, Minzhe;Dong, Peng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.11 / pp.4604-4622 / 2015
  • Reconstructing the geometric characteristics of distorted images captured by imaging devices is not easy. One of the most popular optimization methods is the fast iterative shrinkage/thresholding algorithm (FISTA). In this paper, to deal with its approximation error and the instability of its decrease process, an adaptive proximal conjugate gradient (APCG) framework is proposed. It consists of three stages. In the first stage, a series of adaptive penalty matrices is generated from iterate to iterate. In the second stage, to trade off reconstruction accuracy against the computational complexity of the resulting sub-problem, a practical scheme is presented that solves the variable ellipsoidal-norm sub-problem by exploiting the structure of the problem. In the third stage, a correction step is introduced to improve estimation accuracy. Numerical experiments comparing the proposed algorithm with favorable state-of-the-art methods demonstrate its advantages and potential (a simplified proximal-gradient deblurring sketch appears below).
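As context for the proximal-splitting setup discussed above, the sketch below shows a plain proximal-gradient (ISTA-style) deblurring loop, i.e. the gradient-step plus proximal-step template that FISTA accelerates and the APCG framework refines. The Gaussian blur operator, L1 prior, step size, and regularization weight are assumptions chosen only for illustration, not the paper's formulation.

```python
# Hedged sketch: ISTA-style proximal-gradient deblurring for
# min_x 0.5 * ||A x - b||^2 + lam * ||x||_1, with A = Gaussian blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def ista_deblur(b, sigma=2.0, lam=1e-3, step=1.0, iters=200):
    """Recover x from a blurred, noisy observation b = A(x) + noise."""
    A = lambda x: gaussian_filter(x, sigma)      # blur operator (self-adjoint)
    x = b.copy()
    for _ in range(iters):
        grad = A(A(x) - b)                       # gradient of the data-fidelity term
        z = x - step * grad                      # gradient (forward) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # L1 prox (soft-threshold)
    return x

# Example on a synthetic blurred and noisy observation.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[20:44, 20:44] = 1.0
b = gaussian_filter(clean, 2.0) + 0.01 * rng.standard_normal((64, 64))
restored = ista_deblur(b)
```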