• Title/Summary/Keyword: Restoration Image Model


An Enhancement Method of Document Restoration Capability using Encryption and DnCNN (암호화와 DnCNN을 활용한 문서 복원능력 향상에 관한 연구)

  • Jang, Hyun-Hee;Ha, Sung-Jae;Cho, Gi-Hwan
    • Journal of Internet of Things and Convergence / v.8 no.2 / pp.79-84 / 2022
  • This paper presents a method for enhancing document restoration capability that is robust to security threats, loss, and contamination. It is based on two techniques: encryption and DnCNN (Denoising Convolutional Neural Network). To implement the encryption, a mathematical model is applied as a spatial frequency transfer function used in the optics of 2D image information. A method is then proposed that encrypts documents as optical interference patterns generated by spatial frequency transfer functions, with the mathematical variables of those transfer functions serving as the cipher. In addition, by applying DnCNN, a deep-learning-based denoiser, restoration capability is enhanced through noise removal. In an experimental evaluation with 65% information loss, applying a pre-trained DnCNN yields a peak signal-to-noise ratio (PSNR) at least 11% higher than that of the spatial frequency transfer function alone, and the correlation coefficient (CC) is improved by 16% or more. A minimal DnCNN-style denoiser sketch follows this entry.
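
The abstract pairs optical-interference encryption with DnCNN denoising but gives no network details, so the block below is only a minimal sketch of a DnCNN-style residual denoiser in PyTorch; the depth, channel counts, and the random `noisy` input are illustrative assumptions, not the paper's configuration.

```python
# Minimal DnCNN-style residual denoiser: the network predicts the noise and
# subtracts it from its input (residual learning).
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    def __init__(self, channels=1, depth=8, features=64):  # sizes are illustrative
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)  # subtract the predicted noise map

# Usage sketch: denoise a degraded document image (random stand-in here).
noisy = torch.rand(1, 1, 128, 128)
restored = DnCNN()(noisy)
```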

A Study on the Restoration of Korean Traditional Palace Image by Adjusting the Receptive Field of Pix2Pix (Pix2Pix의 수용 영역 조절을 통한 전통 고궁 이미지 복원 연구)

  • Hwang, Won-Yong;Kim, Hyo-Kwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.5 / pp.360-366 / 2022
  • This paper presents an AI model structure for restoring Korean traditional palace photographs, which survive only as black-and-white images, to color photographs using Pix2Pix, one of the generative adversarial network techniques. Pix2Pix combines a generator model that synthesizes images with a discriminator model that judges whether a synthesized image is real or fake. This paper adjusts the receptive field of the discriminator and analyzes the results in light of the characteristics of the old palace photographs. The receptive field of Pix2Pix used for restoring black-and-white photographs is commonly fixed in size, but a fixed receptive field is not suitable for photographs containing widely varying content. This paper therefore observes the results of changing the size of the fixed receptive field to identify the discriminator size that best reflects the characteristics of the old palaces. In the experiments, the receptive field of the discriminator was adjusted on the prepared palace photographs, the model loss was measured for each receptive-field setting, and the photographs restored by the trained models were examined. A PatchGAN-style discriminator sketch, whose receptive field grows with depth, follows this entry.
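
The abstract does not list the receptive-field sizes that were tried, so the following is only a sketch of a PatchGAN-style Pix2Pix discriminator in which the receptive field grows with the number of strided layers; `n_layers`, the channel widths, and the 256×256 input are assumptions used for illustration.

```python
# PatchGAN-style discriminator: each additional strided convolution enlarges the
# receptive field behind every output "patch" decision.
import torch
import torch.nn as nn

def patchgan_discriminator(in_ch=6, base=64, n_layers=3):
    # in_ch=6: input and candidate color image stacked along channels (assumed 3+3).
    layers = [nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2)]
    ch = base
    for _ in range(n_layers - 1):
        nxt = min(ch * 2, 512)
        layers += [nn.Conv2d(ch, nxt, 4, stride=2, padding=1),
                   nn.BatchNorm2d(nxt),
                   nn.LeakyReLU(0.2)]
        ch = nxt
    layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # per-patch real/fake map
    return nn.Sequential(*layers)

# Usage sketch: a deeper discriminator sees larger patches of the image.
pair = torch.rand(1, 6, 256, 256)
print(patchgan_discriminator(n_layers=3)(pair).shape)  # smaller receptive field
print(patchgan_discriminator(n_layers=4)(pair).shape)  # larger receptive field
```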

A Study on Super Resolution Image Reconstruction for Effective Spatial Identification

  • Park Jae-Min;Jung Jae-Seung;Kim Byung-Guk
    • Spatial Information Research / v.13 no.4 s.35 / pp.345-354 / 2005
  • Super-resolution image reconstruction refers to image processing algorithms that produce a high-resolution (HR) image from several observed low-resolution (LR) images of the same scene. The method has proven useful in many practical cases where multiple frames of the same scene can be obtained, such as satellite imaging, video surveillance, video enhancement and restoration, digital mosaicking, and medical imaging. In this paper, we apply spatial-domain super-resolution reconstruction to video sequences. The test images are adjacently sampled frames from continuous video sequences with a high overlap rate. We construct the observation model between the HR image and the LR images and apply Maximum A Posteriori (MAP) reconstruction, one of the major approaches to super-resolution reconstruction. Based on the MAP method, we reconstruct high-resolution images from the low-resolution images and compare the results with those of other known interpolation methods. A gradient-based MAP sketch follows this entry.

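The abstract does not reproduce the observation model or prior, so the block below is only a generic sketch of MAP super-resolution: minimize a data term over several LR frames plus a smoothness prior by gradient descent. The box blur, decimation factor, regularization weight, and step size are all assumptions.

```python
# MAP-style super-resolution sketch:
#   x_hat = argmin_x  sum_k ||D B x - y_k||^2 + lam * ||L x||^2
# with B a box blur, D decimation, and L a Laplacian smoothness prior.
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def forward(x, scale=2):
    return uniform_filter(x, size=scale)[::scale, ::scale]    # blur, then decimate

def forward_T(err, scale, shape):
    up = np.zeros(shape)
    up[::scale, ::scale] = err                                 # transpose of decimation
    return uniform_filter(up, size=scale)                      # blur treated as (roughly) self-adjoint

def map_sr(lr_frames, shape, scale=2, lam=0.05, step=0.1, iters=150):
    x = np.zeros(shape)
    for _ in range(iters):
        grad = lam * laplace(laplace(x))                       # gradient of the prior term
        for y in lr_frames:
            grad += forward_T(forward(x, scale) - y, scale, shape)
        x -= step * grad
    return x

# Usage sketch with synthetic, heavily overlapping LR frames of a random scene.
hr = np.random.rand(64, 64)
lrs = [forward(hr) + 0.01 * np.random.randn(32, 32) for _ in range(4)]
estimate = map_sr(lrs, hr.shape)
```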

Algorithm for Improving Visibility under Ambient Lighting Using Deep Learning (딥러닝을 이용한 외부 조도 아래에서의 시인성 향상 알고리즘)

  • Lee, Hee Jin;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.27 no.5 / pp.808-811 / 2022
  • A display under strong ambient lighting is perceived as darker than it really is. Existing software approaches to this problem are limited in that image enhancement is applied regardless of the ambient lighting, or chrominance is not improved as much as luminance. This paper therefore proposes a visibility enhancement algorithm that uses deep learning to adapt to the ambient lighting value, together with an equation that restores the chrominance best matched to the luminance. The algorithm receives an ambient lighting value along with the input image, then applies the deep learning model and the chrominance restoration equation to generate an image that minimizes the difference between the degradation model of the enhanced image and the input image. Qualitative evaluation, comparing images passed through the degradation model, shows that the algorithm improves visibility well under strong ambient lighting. A sketch of this training objective follows this entry.
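
Neither the degradation model nor the chrominance equation appears in the abstract, so the snippet below only sketches the stated objective: the enhanced image, after being passed through an ambient-light degradation model, should match the original input. The small network and the additive degradation model are stand-in assumptions.

```python
# Objective sketch: degrade(enhance(x, ambient)) should reproduce x.
import torch
import torch.nn as nn

enhancer = nn.Sequential(                      # stand-in for the paper's deep model
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

def degrade(img, ambient):
    # Assumed model: ambient light lifts the black level and compresses contrast.
    return img * (1.0 - 0.5 * ambient) + 0.5 * ambient

x = torch.rand(1, 3, 64, 64)                         # input image
ambient = torch.full((1, 1, 64, 64), 0.6)            # normalized ambient lighting value
enhanced = enhancer(torch.cat([x, ambient], dim=1))  # model sees image + lighting value
loss = nn.functional.l1_loss(degrade(enhanced, ambient), x)
loss.backward()                                      # gradients for one training step
```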

Deep learning-based monitoring for conservation and management of coastal dune vegetation (해안사구 식생의 보전 및 관리를 위한 딥러닝 기반 모니터링)

  • Kim, Dong-woo;Gu, Ja-woon;Hong, Ye-ji;Kim, Se-Min;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.25 no.6 / pp.25-33 / 2022
  • In this study, a monitoring method using high-resolution images acquired by unmanned aerial vehicles (UAVs) and deep learning algorithms is proposed for the management of the Sinduri coastal sand dunes. Classification was performed with U-Net, a semantic segmentation method. Three types of dune vegetation were classified into four classes, and the model was trained and tested with a total of 320 training images and 48 test images. An ignore label was applied to improve model performance, and the model was then evaluated with two loss functions, CE loss and BCE loss. In the evaluation, CE loss gave the highest per-class mIoU, but BCE loss can be judged to perform better when training time is taken into account. The study is meaningful as a pilot application of UAVs and deep learning to the monitoring and management of dune vegetation. The feasibility of deep learning image analysis for monitoring dune vegetation has been confirmed, and the proposed method is expected to be applicable not only to dune vegetation but also to fields such as forests and grasslands. A loss-setup sketch for the two compared losses follows this entry.
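
The abstract compares CE and BCE losses with an ignored label but does not give the exact setup, so the snippet below only sketches how the two loss settings look in PyTorch; the class count, ignore index, and tensor shapes are illustrative.

```python
# Two loss settings as in the comparison: multi-class CE with an ignore label,
# and per-class BCE with the ignored pixels masked out explicitly.
import torch
import torch.nn as nn

NUM_CLASSES, IGNORE = 4, 255                           # 4 classes; 255 = ignored label (assumed)
logits = torch.randn(2, NUM_CLASSES, 128, 128)         # U-Net output (stand-in)
target = torch.randint(0, NUM_CLASSES, (2, 128, 128))  # per-pixel class labels
target[:, :8, :8] = IGNORE                             # some pixels marked "ignore"

# (a) CE loss: ignore_index drops the ignored pixels automatically.
ce = nn.CrossEntropyLoss(ignore_index=IGNORE)(logits, target)

# (b) BCE loss on one-hot targets: mask the ignored pixels by hand.
valid = (target != IGNORE).unsqueeze(1).float()
one_hot = nn.functional.one_hot(target.clamp(max=NUM_CLASSES - 1), NUM_CLASSES)
one_hot = one_hot.permute(0, 3, 1, 2).float()
bce_map = nn.functional.binary_cross_entropy_with_logits(logits, one_hot, reduction="none")
bce = (bce_map * valid).sum() / (valid.sum() * NUM_CLASSES)
print(ce.item(), bce.item())
```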

An Analysis on the Properties of Features against Various Distortions in Deep Neural Networks

  • Kang, Jung Heum;Jeong, Hye Won;Choi, Chang Kyun;Ali, Muhammad Salman;Bae, Sung-Ho;Kim, Hui Yong
    • Journal of Broadcast Engineering / v.26 no.7 / pp.868-876 / 2021
  • Deep neural network models achieve remarkable performance in object detection and instance segmentation. To train these models, features are first extracted from the input image by a backbone network, and the extracted features can be reused by various tasks. Research on serving multiple tasks from these learned features is active, and standardization discussions on how to encode, decode, and transmit the features are proceeding. In this scenario, it is necessary to analyze how features respond to the various distortions that can occur during data transmission or compression. In this paper, experiments inject various distortions into the features of an object recognition task and analyze the mAP (mean Average Precision) between the network predictions and the ground truth as the distortion intensity is increased. The experiments show that features are more robust to distortion than images, which suggests that transmitting features can reduce information loss under the distortions introduced during transmission and compression. A sketch of injecting noise into intermediate features follows this entry.
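
The distortion types and the detection model are not specified in the abstract, so the sketch below only illustrates the experimental pattern: inject a distortion (here Gaussian noise of increasing strength, an assumption) into intermediate backbone features with a forward hook, then run detection; the mAP computation against ground truth is omitted.

```python
# Sketch: perturb intermediate backbone features with increasing noise intensity
# via a forward hook, then run detection; mAP vs. ground truth would be computed
# per intensity.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None).eval()           # untrained stand-in detector

def make_noise_hook(sigma):
    def hook(module, inputs, output):
        return output + sigma * torch.randn_like(output)   # distort the feature map
    return hook

image = [torch.rand(3, 320, 320)]
for sigma in [0.0, 0.1, 0.5, 1.0]:                         # increasing distortion intensity
    handle = model.backbone.body.layer2.register_forward_hook(make_noise_hook(sigma))
    with torch.no_grad():
        detections = model(image)
    handle.remove()
    print(sigma, detections[0]["boxes"].shape)              # feed these into an mAP metric
```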

Modeling Quantization Error using Laplacian Probability Density function (Laplacian 분포 함수를 이용한 양자화 잡음 모델링)

  • 최지은;이병욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.11A / pp.1957-1962 / 2001
  • Image and video compression requires a quantization error model of the DCT coefficients for post-processing, restoration, or transcoding. Once the DCT coefficients are quantized, the original distribution cannot be recovered exactly. We assume that the original probability density function (pdf) is Laplacian, calculate the variance of the quantized variable, and from it estimate the variance of the DCT coefficients. We confirm that the proposed method improves the accuracy of the quantization error estimate. A numerical sketch of the quantized-Laplacian variance follows this entry.

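The paper's formulas are not reproduced in the abstract, so the block below is only a numerical sketch of the idea: assume a zero-mean Laplacian pdf for a DCT coefficient, compute the variance of its uniformly quantized version from the bin probabilities, and compare it with the coefficient's own variance. The scale parameter and quantizer step are illustrative.

```python
# Variance of a uniformly quantized Laplacian variable, computed from bin
# probabilities, vs. the variance of the underlying coefficient (2 * b**2).
import numpy as np

def laplace_cdf(x, b):
    return np.where(x < 0, 0.5 * np.exp(x / b), 1.0 - 0.5 * np.exp(-x / b))

def quantized_variance(b, step, kmax=200):
    k = np.arange(-kmax, kmax + 1)
    lo, hi = (k - 0.5) * step, (k + 0.5) * step     # mid-tread quantizer bin edges
    p = laplace_cdf(hi, b) - laplace_cdf(lo, b)     # probability mass in each bin
    levels = k * step                                # reconstruction levels
    return np.sum(p * levels**2)                     # zero mean, so E[q^2] is the variance

b, step = 4.0, 8.0                                   # illustrative scale and step size
print("coefficient variance:", 2 * b**2)
print("quantized variance  :", quantized_variance(b, step))
```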

PSF Deconvolution on the Integral Field Unit Spectroscopy Data

  • Chung, Haeun;Park, Changbom
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.58.4-58.4 / 2019
  • We present the application of point spread function (PSF) deconvolution to astronomical Integral Field Unit (IFU) spectroscopy data, focusing on the restoration of galaxy kinematics. We apply the Lucy-Richardson deconvolution algorithm to the 2D image at each wavelength slice. We build a set of mock IFU data that resemble IFU observations of model galaxies with diverse combinations of surface brightness profile, S/N, line-of-sight geometry, and line-of-sight velocity distribution (LOSVD). Using the mock IFU data, we demonstrate that the algorithm can effectively recover the stellar kinematics of a galaxy, and that lambda_R_e, a proxy for the spin parameter, can be measured correctly from the deconvolved IFU data. Applying the algorithm to actual SDSS-IV MaNGA IFU survey data reveals noticeable differences in the 2D LOSVD, geometry, and lambda_R_e. The algorithm can be applied to any other regular-grid IFS data to extract PSF-deconvolved spatial information. A slice-by-slice deconvolution sketch follows this entry.

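The MaNGA-specific processing is not described in the abstract, so the block below only sketches the slice-by-slice idea: run Richardson-Lucy (Lucy-Richardson) deconvolution from scikit-image on each wavelength plane of a mock IFU cube with a Gaussian PSF. The cube size, PSF width, and iteration count are assumptions.

```python
# Sketch: apply Richardson-Lucy deconvolution independently to the 2D image at
# each wavelength slice of an IFU data cube.
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

cube = np.random.rand(50, 32, 32)        # mock IFU cube: (wavelength, y, x)
psf = gaussian_psf()
deconvolved = np.stack([
    richardson_lucy(plane, psf, 30)      # 30 iterations per wavelength slice
    for plane in cube
])
print(deconvolved.shape)
```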

Postprocessing of Inter-Frame Coded Images Based on Convex Projection and Regularization (POCS와 정규화를 기반으로 한 프레임간 압축 영상의 후처리)

  • Kim, Seong-Jin;Jeong, Si-Chang;Hwang, In-Gyeong;Baek, Jun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.58-65 / 2002
  • In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that processes the differential images directly, before reconstruction. We note that blocking artifacts in inter-frame coded images are caused both by the 8×8 DCT and by 16×16 macroblock-based motion compensation, whereas those in intra-coded images are caused by the 8×8 DCT only. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that uses additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering, taking edge directions into account by using part of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT), so blocking artifacts occur both on block boundaries and in block interiors. For more complete removal of both kinds of artifacts, the restored differential image must satisfy two constraints, namely directional discontinuities on block boundaries and inside blocks; these constraints define the convex sets used to restore the differential images. A POCS-style iteration sketch follows this entry.
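
The paper's specific convex sets and adaptive filter are not reproduced in the abstract, so the following is only a generic POCS-style sketch: alternate a smoothing (regularization) step with a projection onto a known convex constraint set, here a simple amplitude bound on the differential image. The Gaussian filter and the bound stand in for the paper's directional-discontinuity constraints.

```python
# Generic POCS-style iteration: regularize (lowpass) the differential image,
# then project it back onto a convex constraint set, and repeat.
import numpy as np
from scipy.ndimage import gaussian_filter

def project_amplitude(d, bound):
    return np.clip(d, -bound, bound)            # stand-in convex set: |d| <= bound

def pocs_postprocess(diff_img, bound=0.3, sigma=0.8, iters=10):
    d = diff_img.copy()
    for _ in range(iters):
        d = gaussian_filter(d, sigma)           # regularization / lowpass step
        d = project_amplitude(d, bound)         # projection onto the constraint set
    return d

diff_img = np.random.randn(64, 64) * 0.2        # stand-in differential (residual) image
restored = pocs_postprocess(diff_img)
```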

Face Relighting Based on Virtual Irradiance Sphere and Reflection Coefficients (가상 복사조도 반구와 반사계수에 근거한 얼굴 재조명)

  • Han, Hee-Chul;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.13 no.3 / pp.339-349 / 2008
  • We present a novel method to estimate the light source direction and relight the face texture image of a single 3D model under arbitrary, unknown illumination conditions. We create a virtual irradiance sphere to detect the light source direction from a given illuminated texture image, using both normal vector mapping and weighted bilinear interpolation. We then derive a relighting equation with estimated ambient and diffuse coefficients. We report a series of experiments on light source estimation, relighting, and face recognition that show the efficiency and accuracy of the proposed method in restoring the shading and shadow areas of a face texture image. Our approach to face relighting can be used not only for illumination-invariant face recognition but also for reducing visual load and improving visual performance in tasks using 3D displays. A Lambertian relighting sketch follows this entry.
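
The virtual irradiance sphere construction is not detailed in the abstract, so the block below only sketches the final relighting step as a Lambertian model with ambient and diffuse coefficients, given per-pixel normals and an estimated light direction; all inputs and coefficient values are stand-ins.

```python
# Relighting sketch: new_texture = albedo * (ambient + diffuse * max(0, n . l)),
# with an estimated light direction and ambient/diffuse coefficients.
import numpy as np

def relight(albedo, normals, light_dir, ambient=0.3, diffuse=0.7):
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(normals @ l, 0.0, None)     # per-pixel n . l, clamped at zero
    return albedo * (ambient + diffuse * ndotl[..., None])

# Stand-in face texture, per-pixel unit normals, and an estimated light direction.
albedo = np.random.rand(128, 128, 3)
normals = np.zeros((128, 128, 3)); normals[..., 2] = 1.0   # flat surface facing the viewer
relit = relight(albedo, normals, light_dir=[0.4, 0.2, 0.9])
print(relit.shape)
```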