• Title/Summary/Keyword: Dilated Residual Network


Detection and Localization of Image Tampering using Deep Residual UNET with Stacked Dilated Convolution

  • Aminu, Ali Ahmad; Agwu, Nwojo Nnanna; Steve, Adeshina
    • International Journal of Computer Science & Network Security, v.21 no.9, pp.203-211, 2021
  • Image tampering detection and localization have become an active area of research in digital image forensics in recent times, owing to the widespread occurrence of malicious image tampering. This study presents a new method for image tampering detection and localization that combines the advantages of dilated convolution, residual networks, and the UNET architecture. Using the UNET architecture as a backbone, we build the proposed network from two kinds of residual units, one for the encoder path and one for the decoder path. The residual units speed up training and facilitate information propagation between the lower layers and the higher layers, which are often difficult to train. To capture global image tampering artifacts and reduce the computational burden of the proposed method, we enlarge the receptive field of the convolutional kernels by adopting dilated convolutions in the residual units used to build the network (a minimal sketch of such a unit follows this entry). In contrast to existing deep learning methods, which have many layers and network parameters and are often difficult to train, the proposed method achieves excellent performance with fewer parameters and lower computational cost. We evaluate the proposed method on four benchmark image forensics datasets. Experimental results show that it outperforms existing methods and could potentially be used to enhance image tampering detection and localization.
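
The sketch below illustrates the kind of residual unit with stacked dilated convolutions that the abstract describes. It assumes PyTorch; the channel counts and dilation rates are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a residual unit with stacked dilated convolutions,
# as could be used in the encoder or decoder path of a UNET-style network.
# Channel counts and dilation rates are assumptions for illustration only.
import torch
import torch.nn as nn

class DilatedResidualUnit(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        layers, ch = [], in_ch
        for d in dilations:
            # padding = dilation keeps the spatial size unchanged for 3x3 kernels
            layers += [
                nn.Conv2d(ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            ch = out_ch
        self.body = nn.Sequential(*layers)
        # 1x1 projection so the skip connection matches the output width
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, kernel_size=1))

    def forward(self, x):
        return self.body(x) + self.skip(x)

if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)
    print(DilatedResidualUnit(3, 64)(x).shape)  # torch.Size([1, 64, 256, 256])
```

Stacking convolutions with increasing dilation rates enlarges the receptive field without adding parameters, which is how the unit can capture global tampering artifacts at low computational cost.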

A Pansharpening Algorithm of KOMPSAT-3A Satellite Imagery by Using Dilated Residual Convolutional Neural Network (팽창된 잔차 합성곱신경망을 이용한 KOMPSAT-3A 위성영상의 융합 기법)

  • Choi, Hoseong; Seo, Doochun; Choi, Jaewan
    • Korean Journal of Remote Sensing, v.36 no.5_2, pp.961-973, 2020
  • In this manuscript, a new pansharpening model based on a Convolutional Neural Network (CNN) was developed. Dilated convolution, one of the representative convolution techniques in CNNs, was applied to make the model deeper and more expressive and thereby improve the performance of the deep learning architecture. On top of the dilated convolutions, residual connections are used to make the training process more efficient. In addition, the loss function combines the traditional L1 norm with a spatial correlation coefficient term (a sketch of such a combined loss is given after this entry). We experimented with Dilated Residual Networks (DRNet) applied both to a structure that uses only a panchromatic (PAN) image and to one that uses both PAN and multispectral (MS) images. In experiments on KOMPSAT-3A imagery, the DRNet that used both PAN and MS images tended to overfit the spectral characteristics, whereas the DRNet that used only the PAN image showed a spatial resolution improvement over existing CNN-based models.
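
The following is an illustrative sketch of a pansharpening loss that adds a spatial correlation term to the traditional L1 norm, along the lines described in the abstract. The Laplacian high-pass filter and the weight `lam` are assumptions for demonstration, not the paper's exact formulation.

```python
# Sketch of an L1 + spatial-correlation pansharpening loss (assumed form).
import torch
import torch.nn.functional as F

def spatial_correlation(pred, ref, eps=1e-8):
    """Pearson correlation between high-frequency components of two images."""
    # simple Laplacian high-pass filter, applied per channel
    k = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]],
                     device=pred.device).view(1, 1, 3, 3)
    c = pred.shape[1]
    hp = lambda x: F.conv2d(x, k.expand(c, 1, 3, 3), padding=1, groups=c)
    p, r = hp(pred), hp(ref)
    p = p - p.mean(dim=(2, 3), keepdim=True)
    r = r - r.mean(dim=(2, 3), keepdim=True)
    return (p * r).sum(dim=(2, 3)) / (
        p.pow(2).sum(dim=(2, 3)).sqrt() * r.pow(2).sum(dim=(2, 3)).sqrt() + eps)

def pansharpening_loss(pred, ref, lam=0.1):
    l1 = F.l1_loss(pred, ref)
    scc = spatial_correlation(pred, ref).mean()
    # maximizing spatial correlation corresponds to minimizing (1 - scc)
    return l1 + lam * (1.0 - scc)
```

The intuition is that the L1 term keeps the spectral values close to the reference while the correlation term pushes the high-frequency (spatial detail) structure of the fused image toward that of the reference.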

Light Field Angular Super-Resolution Algorithm Using Dilated Convolutional Neural Network with Residual Network (잔차 신경망과 팽창 합성곱 신경망을 이용한 라이트 필드 각 초해상도 기법)

  • Kim, Dong-Myung; Suh, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.12, pp.1604-1611, 2020
  • A light field image captured by a microlens array-based camera has many limitations in practical use due to its low spatial and angular resolution. High spatial resolution images can be obtained relatively easily with single-image super-resolution techniques, which have been studied extensively in recent years. High angular resolution images, however, are distorted in the process of exploiting the disparity information inherent among the views, so it is difficult to obtain a high-quality angular super-resolved image. In this paper, we propose a light field angular super-resolution method that extracts an initial feature map with a dilated convolutional neural network, in order to effectively capture the view-difference information among the images, and then generates the target view with a residual neural network (a rough sketch of this two-stage idea follows this entry). The proposed network showed superior performance in PSNR and subjective image quality compared to existing angular super-resolution networks.
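
The sketch below illustrates the two-stage idea in the abstract: a dilated CNN extracts an initial feature map that captures view-difference (disparity) information, and a residual network synthesizes the target view. The layer counts, channel widths, and the four-view input are illustrative assumptions, not the authors' exact architecture.

```python
# Rough sketch: dilated feature extraction followed by residual view synthesis.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

class DilatedFeatureExtractor(nn.Module):
    def __init__(self, in_views=4, feats=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers, ch = [], in_views
        for d in dilations:
            layers += [nn.Conv2d(ch, feats, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            ch = feats
        self.net = nn.Sequential(*layers)

    def forward(self, views):              # views: (B, in_views, H, W)
        return self.net(views)

class ResidualSynthesizer(nn.Module):
    def __init__(self, feats=64, blocks=4):
        super().__init__()
        def block():
            return nn.Sequential(nn.Conv2d(feats, feats, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(feats, feats, 3, padding=1))
        self.blocks = nn.ModuleList([block() for _ in range(blocks)])
        self.out = nn.Conv2d(feats, 1, 3, padding=1)  # one synthesized view

    def forward(self, f):
        for b in self.blocks:
            f = f + b(f)                    # residual connection
        return self.out(f)

if __name__ == "__main__":
    views = torch.randn(1, 4, 128, 128)     # e.g. four corner sub-aperture views
    feat = DilatedFeatureExtractor()(views)
    novel = ResidualSynthesizer()(feat)
    print(novel.shape)                      # torch.Size([1, 1, 128, 128])
```

The dilated stage widens the receptive field so the network can relate pixels that are displaced by disparity across views, while the residual stage refines the feature map into the novel view without having to relearn the low-frequency content.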