• Title/Abstract/Keyword: Low-light image enhancement

Search results: 49

자연스러운 저조도 영상 개선을 위한 비지도 학습 (Unsupervised Learning with Natural Low-light Image Enhancement)

  • 이헌상;손광훈;민동보
    • 한국멀티미디어학회논문지 / Vol. 23, No. 2 / pp. 135-145 / 2020
  • Recently, deep-learning-based methods for low-light image enhancement have achieved great success through supervised learning. However, they still suffer from a lack of sufficient training data, because large numbers of low-/normal-light image pairs are difficult to obtain in real environments. In this paper, we propose an unsupervised learning approach for single low-light image enhancement using the bright channel prior (BCP), which provides the constraint that the brightest pixel in a small patch is likely to be close to 1. With this prior, a pseudo ground truth is first generated to establish an unsupervised loss function. The proposed enhancement network is then trained using this unsupervised loss function. To the best of our knowledge, this is the first attempt to perform low-light image enhancement through unsupervised learning. In addition, we introduce a self-attention map to preserve image details and naturalness in the enhanced result. We validate the proposed method on various public datasets, demonstrating that it achieves performance competitive with state-of-the-art methods.
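
The bright channel prior above amounts to a max filter over color channels and a local patch. The following is a minimal numpy/scipy sketch of that idea; the 15-pixel patch and the gain-based pseudo ground truth are illustrative assumptions, not the paper's exact loss construction.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(img, patch=15):
    """Bright channel prior: per-pixel max over color channels and a local patch.

    img: float32 RGB image in [0, 1], shape (H, W, 3).
    Returns an (H, W) map that, under the BCP, should be close to 1
    for well-exposed content.
    """
    channel_max = img.max(axis=2)                    # max over R, G, B
    return maximum_filter(channel_max, size=patch)   # max over a local patch

# Illustrative pseudo ground truth: brighten so the bright channel approaches 1.
# (The patch size and this mapping are assumptions, not the paper's exact recipe.)
low = np.random.rand(128, 128, 3).astype(np.float32) * 0.2   # synthetic dark image
bc = bright_channel(low, patch=15)
gain = 1.0 / np.maximum(bc, 1e-3)
pseudo_gt = np.clip(low * gain[..., None], 0.0, 1.0)
```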

Pixel-Wise Polynomial Estimation Model for Low-Light Image Enhancement

  • Muhammad Tahir Rasheed;Daming Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 9 / pp. 2483-2504 / 2023
  • Most existing low-light enhancement algorithms either use a large number of training parameters or lack generalization to real-world scenarios. This paper presents a novel lightweight and robust deep network for low-light image enhancement based on pixel-wise polynomial approximation. To map the low-light image to the enhanced image, pixel-wise higher-order polynomials are employed, and a deep convolutional network estimates their coefficients. The proposed network uses multiple branches to estimate pixel values with different receptive fields: the first branch, with the smallest receptive field, enhances local features, the second and third branches focus on medium-level features, and the last branch enhances global features. The low-light image is downsampled by a factor of 2^(b-1), where b is the branch number, and fed as input to each branch. The final enhanced image is obtained by combining the outputs of the branches. A comprehensive evaluation of the proposed network on six publicly available no-reference test datasets shows that it outperforms state-of-the-art methods on both quantitative and qualitative measures.
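
The core operation here is a per-pixel polynomial whose coefficients the paper predicts with a multi-branch CNN. The sketch below only applies given coefficients; the order-3 polynomial and the hand-set coefficient values are illustrative stand-ins for the network output.

```python
import numpy as np

def apply_pixelwise_polynomial(img, coeffs):
    """Enhance an image with per-pixel polynomials.

    img:    (H, W, 3) float32 in [0, 1].
    coeffs: (H, W, 3, K+1) per-pixel coefficients c_0..c_K, so that
            out = c_0 + c_1*I + c_2*I^2 + ... + c_K*I^K per pixel and channel.
    """
    K = coeffs.shape[-1] - 1
    powers = np.stack([img ** k for k in range(K + 1)], axis=-1)   # (H, W, 3, K+1)
    return np.clip((coeffs * powers).sum(axis=-1), 0.0, 1.0)

# Illustrative coefficients; in the paper they come from a multi-branch CNN whose
# b-th branch sees the input downsampled by 2**(b-1).
H, W, K = 64, 64, 3
low = np.random.rand(H, W, 3).astype(np.float32) * 0.2
coeffs = np.zeros((H, W, 3, K + 1), dtype=np.float32)
coeffs[..., 1] = 2.5      # a crude global gain as the linear term
enhanced = apply_pixelwise_polynomial(low, coeffs)
```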

건설현장 내 객체검출 정확도 향상을 위한 저조도 영상 강화 기법에 관한 연구 (A Study on Low-Light Image Enhancement Technique for Improvement of Object Detection Accuracy in Construction Site)

  • 나종호;공준호;신휴성;윤일동
    • 터널과지하공간 / Vol. 34, No. 3 / pp. 208-217 / 2024
  • As AI-image-based safety monitoring systems are increasingly developed and deployed on construction sites, much research attention has been devoted to deep learning models that detect hazardous objects under varying environmental conditions. Among these environmental factors, low-light conditions markedly reduce the accuracy of object detection models, and consistent detection accuracy cannot be guaranteed even when training accounts for low-light environments. This raises the need for image preprocessing techniques that enhance low-light images. In this paper, we therefore train several deep-learning-based low-light image enhancement models (GLADNet, KinD, LLFlow, Zero-DCE) on image data acquired from construction sites and conduct comparative experiments to verify the enhancement performance of each model. The enhanced images are inspected visually and analyzed quantitatively using image quality metrics (PSNR, SSIM, Delta-E). The experimental results show that GLADNet performs best in both the quantitative and qualitative evaluations and is therefore well suited as a low-light image enhancement model. If low-light image enhancement is applied as a preprocessing step for deep-learning-based object detection models, consistent object detection performance in low-light environments can be expected.
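
For reference, the sketch below computes the three quality metrics used in this comparison (PSNR, SSIM, Delta-E) with scikit-image. The CIE76 Delta-E variant and the channel_axis argument (scikit-image >= 0.19) are assumptions on my part, not the paper's exact settings.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.color import rgb2lab, deltaE_cie76

def evaluate_enhancement(reference, enhanced):
    """Compute PSNR, SSIM, and mean Delta-E between a reference and an enhanced image.

    Both inputs are float RGB images in [0, 1] with shape (H, W, 3).
    """
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, data_range=1.0, channel_axis=-1)
    delta_e = deltaE_cie76(rgb2lab(reference), rgb2lab(enhanced)).mean()
    return psnr, ssim, delta_e

# Tiny demo on synthetic data.
ref = np.random.rand(64, 64, 3)
out = np.clip(ref * 0.8 + 0.1, 0.0, 1.0)
print(evaluate_enhancement(ref, out))
```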

Preprocessing for High Quality Real-time Imaging Systems by Low-light Stretch Algorithm

  • Ngo, Dat;Kang, Bongsoon
    • 전기전자학회논문지 / Vol. 22, No. 3 / pp. 585-589 / 2018
  • Consumer demand for high-quality image/video services has led to growing interest in image quality enhancement, and recent years have seen substantial progress in this research field. Through careful observation of image quality after processing by enhancement algorithms, we found that dark regions in the image usually suffer a loss of contrast to a certain extent. In this paper, a low-light stretch preprocessing algorithm is therefore proposed to resolve this issue. The proposed approach is evaluated qualitatively and quantitatively against the well-known histogram equalization and Photoshop curve adjustment, and the results validate the efficiency and superiority of the low-light stretch over the benchmark methods. In addition, we propose a 255 MHz-capable hardware implementation to ease the incorporation of the low-light stretch into real-time imaging systems, such as aerial surveillance and monitoring with drones and driver assistance systems.
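
The abstract does not give the exact stretch curve, so the sketch below uses a simple piecewise-linear stretch of the dark range as a stand-in; the knee and target values are illustrative parameters, not those of the referenced hardware design.

```python
import numpy as np

def low_light_stretch(img, knee=0.3, target=0.5):
    """Piecewise-linear stretch that expands the dark range [0, knee] to [0, target]
    and linearly compresses the remaining range, keeping the mapping monotonic.

    img: float image in [0, 1]. knee/target are illustrative, not the paper's values.
    """
    img = np.clip(img, 0.0, 1.0)
    dark = img <= knee
    out = np.empty_like(img)
    out[dark] = img[dark] * (target / knee)
    out[~dark] = target + (img[~dark] - knee) * ((1.0 - target) / (1.0 - knee))
    return out

# Tiny demo: dark pixels gain contrast, bright pixels are gently compressed.
frame = np.random.rand(64, 64).astype(np.float32) * 0.4
stretched = low_light_stretch(frame)
```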

GAN-Based Local Lightness-Aware Enhancement Network for Underexposed Images

  • Chen, Yong;Huang, Meiyong;Liu, Huanlin;Zhang, Jinliang;Shao, Kaixin
    • Journal of Information Processing Systems / Vol. 18, No. 4 / pp. 575-586 / 2022
  • Uneven lighting in real-world scenes causes visual degradation in underexposed regions. If these regions are not adequately handled during enhancement, the result suffers from over-/under-exposure, loss of details, and color distortion. To address these challenges, this paper proposes an unsupervised low-light image enhancement network guided by unpaired low-/normal-light images. The key components of the network are a super-resolution module (SRM), a GAN-based low-light image enhancement network (LLIEN), and a denoising-scaling module (DSM). The SRM increases the resolution of the low-light input images before illumination enhancement; operating in the high-resolution space improves the preservation of texture details. A local lightness attention module in the LLIEN then effectively distinguishes unevenly illuminated areas and emphasizes low-light regions, ensuring spatially consistent illumination for locally underexposed images. Multiple discriminators, namely a global discriminator, a local-region discriminator, and a color discriminator, assess the result from different perspectives to avoid over-/under-exposure and color distortion, guiding the network to generate images in line with human aesthetic perception. Finally, the DSM removes noise and produces high-quality enhanced images. Both qualitative and quantitative experiments demonstrate that the approach achieves favorable results, indicating its strong capacity for restoring illumination and texture details.
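
A local lightness attention map of the kind described can be approximated by inverting a per-pixel lightness estimate so that darker regions receive larger weights. The sketch below is such a stand-in; the '1 - lightness' form and the blending step are assumptions, not the paper's learned module.

```python
import numpy as np

def lightness_attention(img_rgb):
    """Attention map that is large where the image is dark.

    img_rgb: (H, W, 3) float RGB in [0, 1].
    Returns an (H, W, 1) map in [0, 1]; 1 = fully underexposed, 0 = well lit.
    The simple inverted-lightness form is an illustrative stand-in for the
    paper's local lightness attention module.
    """
    lightness = img_rgb.max(axis=2, keepdims=True)   # crude per-pixel lightness proxy
    return 1.0 - lightness

# Typical use inside a generator: apply stronger enhancement where attention is high.
low = np.random.rand(64, 64, 3).astype(np.float32) * 0.3
attn = lightness_attention(low)
rough_enhanced = np.clip(low * 3.0, 0.0, 1.0)        # placeholder enhancement
output = attn * rough_enhanced + (1.0 - attn) * low
```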

Single Low-Light Ghost-Free Image Enhancement via Deep Retinex Model

  • Liu, Yan;Lv, Bingxue;Wang, Jingwen;Huang, Wei;Qiu, Tiantian;Chen, Yunzhong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 5 / pp. 1814-1828 / 2021
  • Low-light image enhancement is a key technique for overcoming the quality degradation of photos taken under scotopic illumination conditions. The degradation includes low brightness, low contrast, and prominent noise, which seriously impair human visual recognition and subsequent image processing. In this paper, we propose an approach based on deep learning and Retinex theory to enhance low-light images, comprising image decomposition, illumination prediction, image reconstruction, and image optimization. The first three stages reconstruct an enhanced image that still suffers from low resolution. To reduce noise and improve image quality, a super-resolution algorithm based on a Laplacian pyramid network is introduced to optimize the image; the Laplacian pyramid network improves the resolution of the enhanced image through multiple feature extraction and deconvolution operations. Furthermore, a combined loss function is explored during network training to improve the efficiency of the algorithm. Extensive experiments and comprehensive evaluations demonstrate the strength of the proposed method: the result is closer to the real-world scene in lightness, color, and details. Experiments also show that, from a single low-light image, the proposed method achieves the same effect as multi-exposure image fusion algorithms without introducing ghosting.
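
The method builds on the Retinex model I ≈ R ∘ L. The sketch below shows a classical single-scale decomposition with a Gaussian-blurred illumination estimate; the paper instead learns the decomposition and illumination prediction with networks, so this is only an illustration of the underlying model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=30.0, eps=1e-4):
    """Classical single-scale Retinex split into reflectance and illumination.

    img: (H, W, 3) float RGB in [0, 1].
    Returns (reflectance, illumination) with img ≈ reflectance * illumination.
    The Gaussian-blur illumination estimate is an illustrative stand-in for the
    learned decomposition network in the paper.
    """
    luminance = img.mean(axis=2)
    illumination = gaussian_filter(luminance, sigma=sigma)[..., None]
    reflectance = img / np.maximum(illumination, eps)
    return np.clip(reflectance, 0.0, 1.0), illumination

# Enhancement then amounts to brightening the illumination and recomposing.
low = np.random.rand(96, 96, 3).astype(np.float32) * 0.2
R, L = retinex_decompose(low)
enhanced = np.clip(R * np.power(L, 0.5), 0.0, 1.0)   # gamma-lift the illumination (illustrative)
```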

Zero Deep Curve 추정방식을 이용한 저조도에 강인한 비디오 개선 방법 (Low-Light Invariant Video Enhancement Scheme Using Zero Reference Deep Curve Estimation)

  • 최형석;양윤기
    • 한국멀티미디어학회논문지 / Vol. 25, No. 8 / pp. 991-998 / 2022
  • Recently, object recognition from image/video signals has spread rapidly to autonomous driving and mobile phones. However, real input images and videos are often captured under poor illumination. Recent research on illumination improvement makes it possible to estimate and compensate for illumination parameters. In this study, we propose VE-DCE (video enhancement with zero-reference deep curve estimation) to improve the illumination of low-light images. The proposed VE-DCE uses an unsupervised zero-reference deep curve, one of the latest learning-based estimation techniques. Experimental results show that the proposed method matches the quality of the previous method on low-light video as well as images, while reducing the computational complexity with respect to the existing method.
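
Zero-DCE, which this work builds on, enhances an image by iteratively applying the quadratic curve LE(x) = x + α·x·(1 − x) with per-pixel α maps predicted by a small network. The sketch below only applies given α maps; the constant α is illustrative, the eight iterations follow the original Zero-DCE formulation, and the curve-estimation network itself is omitted.

```python
import numpy as np

def apply_zero_dce_curves(img, alpha_maps):
    """Iteratively apply the Zero-DCE light-enhancement curve
    LE(x) = x + alpha * x * (1 - x), once per alpha map.

    img:        (H, W, 3) float RGB in [0, 1].
    alpha_maps: list of (H, W, 3) per-pixel curve parameters in [-1, 1],
                normally predicted by the curve-estimation network (omitted here).
    """
    x = img
    for alpha in alpha_maps:
        x = x + alpha * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

# Illustrative call with a constant alpha over 8 iterations.
low = np.random.rand(64, 64, 3).astype(np.float32) * 0.2
alphas = [np.full_like(low, 0.6)] * 8
bright = apply_zero_dce_curves(low, alphas)
```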

EDMFEN: Edge detection-based multi-scale feature enhancement Network for low-light image enhancement

  • Canlin Li;Shun Song;Pengcheng Gao;Wei Huang;Lihua Bi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 4 / pp. 980-997 / 2024
  • The main objective of low-light image enhancement (LLIE) is to improve the brightness of images and reveal hidden information in dark areas. LLIE methods based on deep learning show good performance, but they have limitations: complex network models require highly configured environments, deficient enhancement of edge details blurs the target content, and single-scale feature extraction recovers the hidden content of the enhanced images insufficiently. This paper proposes an edge-detection-based multi-scale feature enhancement network for LLIE (EDMFEN). To reduce the loss of edge details in the enhanced images, an edge extraction module built on the Sobel operator obtains edge information by computing image gradients. In addition, a multi-scale feature enhancement module (MSFEM), consisting of multi-scale feature extraction blocks (MSFEBs) and a spatial attention mechanism, is proposed to thoroughly recover the hidden content of the enhanced images and obtain richer features. The MSFEBs capture image features with different receptive fields, and, because the fused features may contain some useless information, a spatial attention module retains the key features after fusing the multi-scale features and improves model performance. Experimental results on two datasets and five baseline datasets show that EDMFEN performs well compared with state-of-the-art LLIE methods.
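
The edge extraction module is described as a Sobel operator computing image gradients. The sketch below shows that step in isolation with OpenCV; the surrounding network and feature fusion are omitted, and the kernel size and normalization are illustrative choices.

```python
import cv2
import numpy as np

def sobel_edge_map(img_gray):
    """Gradient-magnitude edge map from Sobel derivatives.

    img_gray: (H, W) float32 grayscale image in [0, 1].
    Returns an edge map normalized to [0, 1]; in EDMFEN this kind of edge
    information guides the network toward preserving edge details.
    """
    gx = cv2.Sobel(img_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img_gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return magnitude / (magnitude.max() + 1e-8)

# Tiny demo on a synthetic dark frame.
gray = np.random.rand(64, 64).astype(np.float32) * 0.2
edges = sobel_edge_map(gray)
```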

저조도 환경에서 Visual SLAM을 위한 이미지 개선 방법 (Image Enhancement for Visual SLAM in Low Illumination)

  • 유동길;정지훈;전형준;한창완;박일우;오정현
    • 로봇학회논문지 / Vol. 18, No. 1 / pp. 66-71 / 2023
  • As cameras have become primary sensors for mobile robots, vision-based simultaneous localization and mapping (SLAM) has achieved impressive results with the recent development of computer vision and deep learning. However, visual information has the disadvantage that much of it disappears in low-light environments. To overcome this problem, we propose an image enhancement method for performing visual SLAM in low-light environments. Using deep generative adversarial models and a modified gamma correction, the quality of low-light images is improved. The proposed method produces less sharp results than the existing method, but it can be applied to ORB-SLAM in real time because it dramatically reduces the amount of computation. Experiments on the public TUM and VIVID++ datasets demonstrate the validity of the proposed method.
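
The abstract mentions a modified gamma correction alongside the generative models. The sketch below is plain gamma correction with an image-adaptive exponent; the adaptation rule and the target mean are assumptions standing in for the paper's specific modification.

```python
import numpy as np

def adaptive_gamma_correction(img, target_mean=0.45):
    """Brighten a low-light image with gamma correction, out = img ** gamma.

    The exponent is chosen so the mean intensity moves toward target_mean;
    this adaptive rule is illustrative, not the paper's exact
    'modified gamma correction'.
    """
    img = np.clip(img, 1e-6, 1.0)
    gamma = np.log(target_mean) / np.log(img.mean())   # gamma < 1 brightens dark images
    return np.power(img, gamma)

# Tiny demo on a synthetic dark frame.
frame = np.random.rand(64, 64, 3).astype(np.float32) * 0.15
brightened = adaptive_gamma_correction(frame)
```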

Single Image-based Enhancement Techniques for Underwater Optical Imaging

  • Kim, Do Gyun;Kim, Soo Mee
    • 한국해양공학회지 / Vol. 34, No. 6 / pp. 442-453 / 2020
  • Underwater color images suffer from low visibility and color cast caused by light attenuation in water and by floating particles. This study applied single-image enhancement techniques to improve the quality of underwater images and compared their performance on real underwater images taken in Korean waters. The dark channel prior (DCP), gradient transform, image fusion, and generative adversarial networks (GANs) such as CycleGAN and underwater GAN (UGAN) were considered for single-image enhancement. Their performance was evaluated in terms of the underwater image quality measure, underwater color image quality evaluation, the gray-world assumption, and a blur metric. The DCP saturated the underwater images toward a specific greenish or bluish color tone and reduced the brightness of the background signal. The gradient transform method with two transmission maps was sensitive to the light source and highlighted regions exposed to light. Although image fusion enabled reasonable color correction, object details were lost in the last fusion step. CycleGAN corrected the overall color tone relatively well but generated artifacts in the background. UGAN showed good visual quality and obtained the highest scores on all figures of merit (FOMs) by compensating for color and visibility better than the other single-image enhancement methods.
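
Two of the classical pieces mentioned above can be stated compactly: the dark channel behind DCP and the gray-world assumption used as a figure of merit. The sketch below shows both; the patch size is illustrative, and the transmission and atmospheric-light recovery needed for full DCP enhancement is omitted.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel min over color channels and a local patch.

    img: (H, W, 3) float RGB in [0, 1]. In DCP-based dehazing/underwater
    enhancement this map drives the transmission estimate; the remaining
    steps (atmospheric light, transmission refinement) are omitted here.
    """
    channel_min = img.min(axis=2)
    return minimum_filter(channel_min, size=patch)

def gray_world_balance(img):
    """Gray-world color correction: scale each channel so its mean matches the
    global mean (the 'gray-world assumption' used as one of the metrics above)."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / np.maximum(means, 1e-6)), 0.0, 1.0)

# Tiny demo on a synthetic bluish underwater-like frame.
frame = np.random.rand(64, 64, 3).astype(np.float32) * np.array([0.3, 0.6, 0.9])
dc = dark_channel(frame)
balanced = gray_world_balance(frame)
```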