http://dx.doi.org/10.23087/jkicsp.2022.23.3.007

Performance Improvement of SRGAN's Discriminator via Mutual Distillation  

Yeojin Lee (Division of Electronics and Communications Engineering, Pukyong National University)
Hanhoon Park (Division of Electronics and Communications Engineering, Pukyong National University)
Publication Information
Journal of the Institute of Convergence Signal Processing, vol. 23, no. 3, pp. 160-165, 2022
Abstract
Mutual distillation is a knowledge distillation method that guides a cohort of neural networks to learn cooperatively by transferring knowledge among them, without the help of a teacher network. This paper aims to confirm whether mutual distillation is also applicable to super-resolution networks. To this end, we conduct experiments that apply mutual distillation to the discriminators of SRGANs and analyze its effect on SRGAN's performance. The experiments confirmed that SRGANs whose discriminators share knowledge through mutual distillation produce super-resolution images that are improved in both quantitative and qualitative quality.
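For illustration, below is a minimal PyTorch sketch of one training step in which two SRGAN discriminators exchange knowledge via mutual distillation, in the spirit of deep mutual learning [3]: each discriminator minimizes its usual adversarial loss plus a KL-divergence term that pulls its real/fake probability distribution toward its peer's. The function name mutual_distillation_step, the loss weight alpha, and all training details are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn.functional as F

def mutual_distillation_step(d1, d2, opt1, opt2, hr, sr, alpha=1.0):
    # One update of two discriminators on a batch of high-resolution images
    # `hr` and super-resolved images `sr` from the generator(s).
    # NOTE: the architectures, optimizers, and `alpha` are assumptions.
    real = torch.ones(hr.size(0), 1, device=hr.device)
    fake = torch.zeros(sr.size(0), 1, device=sr.device)
    for d, opt, peer in ((d1, opt1, d2), (d2, opt2, d1)):
        logits = torch.cat([d(hr), d(sr.detach())])
        # Standard SRGAN adversarial loss for the discriminator.
        adv = F.binary_cross_entropy_with_logits(logits, torch.cat([real, fake]))
        # Mutual distillation term: KL divergence between this
        # discriminator's real/fake distribution and its peer's
        # (the peer acts as a fixed target within this step).
        with torch.no_grad():
            peer_logits = torch.cat([peer(hr), peer(sr.detach())])
        log_p = torch.stack([F.logsigmoid(logits), F.logsigmoid(-logits)], dim=-1)
        q = torch.stack([torch.sigmoid(peer_logits), torch.sigmoid(-peer_logits)], dim=-1)
        kl = F.kl_div(log_p, q, reduction='batchmean')
        loss = adv + alpha * kl
        opt.zero_grad()
        loss.backward()
        opt.step()

The generator side is unchanged: with two generator-discriminator pairs trained in parallel this way, each generator is trained adversarially against its own, mutually distilled discriminator.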
Keywords
Super-resolution; Deep learning; Knowledge distillation; Mutual distillation; SRGAN; Discriminator;
References
1 C. Ledig, et al., "Photo-realistic single image super-resolution using a generative adversarial network," Proc. of CVPR, pp. 105-114, 2017.
2 G. Hinton, et al., "Distilling the knowledge in a neural network," Proc. of NIPS Deep Learning Workshop, 2014.
3 Y. Zhang, et al., "Deep mutual learning," Proc. of CVPR, pp. 4320-4328, 2018.
4 A. Romero, et al., "FitNets: hints for thin deep nets," Proc. of ICLR, 2015.
5 N. Komodakis and S. Zagoruyko, "Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer," Proc. of ICLR, 2017.
6 L. Zhang, J. Song, A. Gao, J. Chen, C. Bao, and K. Ma, "Be your own teacher: improve the performance of convolutional neural networks via self distillation," Proc. of ICCV, pp. 3713-3722, 2019.
7 Z. He, et al., "FAKD: feature-affinity based knowledge distillation for efficient image super-resolution," Proc. of ICIP, pp. 518-522, 2020.
8 C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," Proc. of ECCV, pp. 184-199, 2014.
9 I. J. Goodfellow, et al., "Generative adversarial networks," arXiv preprint arXiv:1406.2661, 2014.
10 K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
11 X. Wang, et al., "ESRGAN: enhanced super-resolution generative adversarial networks," Proc. of ECCVW, pp. 63-79, 2018.
12 Y. Choi and H. Park, "Improving ESRGAN with an additional image quality loss," Multimedia Tools and Applications, 2022.
13 E. Agustsson and R. Timofte, "NTIRE 2017 challenge on single image super-resolution: dataset and study," Proc. of CVPRW, pp. 126-135, 2017.
14 M. Bevilacqua, et al., "Low-complexity single-image super-resolution based on nonnegative neighbor embedding," Proc. of BMVC, 2012.
15 R. Zeyde, et al., "On single image scale-up using sparse-representations," Proc. of Int. Conf. on Curves and Surfaces, pp. 711-730, 2010.
16 D. Martin, et al., "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," Proc. of ICCV, vol. 2, pp. 416-423, 2001.
17 A. Mittal, et al., "Making a completely blind image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, 2013.
18 J.-B. Huang, et al., "Single image super-resolution from transformed self-exemplars," Proc. of CVPR, pp. 5197-5206, 2015.
19 C. Ma, et al., "Learning a no-reference quality metric for single-image super-resolution," Computer Vision and Image Understanding, vol. 158, pp. 1-16, 2017.