http://dx.doi.org/10.7236/JIIBC.2020.20.1.239

Technique Proposal to Stabilize Lipschitz Continuity of WGAN Based on Regularization Terms  

Hahn, Hee-Il (Dept. Information and Communications Eng., College of Engineering, Hankuk University of Foreign Studies)
Publication Information
The Journal of the Institute of Internet, Broadcasting and Communication, Vol. 20, No. 1, 2020, pp. 239-246
Abstract
The recently proposed Wasserstein generative adversarial network (WGAN) has improved some of the tricky and unstable training behavior that is a chronic problem of the generative adversarial network (GAN), but there are still cases where it generates poor samples or fails to converge. To address these problems, this paper proposes algorithms that improve the sampling process so that the discriminator can more accurately estimate the data probability distribution to be modeled, and that stably maintain the Lipschitz continuity the discriminator is required to satisfy. Through various experiments, we analyze the characteristics of the proposed techniques and verify their performance.
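The abstract does not spell out the paper's specific regularization terms, but the general idea they build on is enforcing the critic's 1-Lipschitz constraint through a penalty rather than weight clipping. A minimal NumPy sketch of such a term, in the spirit of the gradient penalty of the WGAN-GP baseline (ref. 8), where the critic's gradient norm is pushed toward 1 at points interpolated between real and generated samples (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(critic_grad_fn, real, fake, lam=10.0):
    """WGAN-GP-style regularization term (illustrative sketch):
    penalize the deviation of the critic's gradient norm from 1
    at random interpolates between real and fake samples."""
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake   # random interpolates
    grads = critic_grad_fn(x_hat)             # d f(x_hat) / d x_hat
    norms = np.linalg.norm(grads, axis=1)     # per-sample gradient norm
    return lam * np.mean((norms - 1.0) ** 2)  # two-sided penalty

# Toy linear critic f(x) = x @ w: its gradient is w everywhere,
# so with ||w|| = 1 the critic is exactly 1-Lipschitz and the penalty vanishes.
w = np.array([0.6, 0.8])                      # ||w|| = 1
real = rng.normal(size=(4, 2))
fake = rng.normal(size=(4, 2))
penalty = gradient_penalty(lambda x: np.tile(w, (x.shape[0], 1)), real, fake)
```

In a real setup the gradient would come from automatic differentiation of the critic network; this closed-form linear critic only serves to show the shape of the penalty term the paper's regularization approach refines.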
Keywords
Generative Model; Lipschitz Continuity; Training Stability; Wasserstein Distance; Wasserstein GAN;
Citations & Related Records
Times Cited By KSCI: 6
1 B.S. Kim and I.H. Lee, "Retinal blood vessel segmentation using deep learning," Journal of KIIT, Vol. 17, No. 5, pp. 77-82, 2019. DOI: http://dx.doi.org/10.14801/jkiit.2019.17.5.77
2 C.I. Woo and E.H. Goo, "A study on integrity verification and tamper detection of digital image," Journal of the Korea Academia-Industrial Cooperation Society, Vol. 20, No. 10, pp. 203-208, 2019. DOI: https://doi.org/10.5762/KAIS.2019.20.10.203
3 I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," In Advances in Neural Information Processing Systems, 2014.
4 M. Arjovsky and L. Bottou, "Towards principled methods for training generative adversarial networks," arXiv:1701.04862v1 [stat.ML], Jan. 2017.
5 D.P. Kingma and M. Welling, "Auto-encoding variational bayes," In Proceedings of the International Conference on Learning Representations(ICLR), 2014.
6 A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv:1511.06434v2 [cs.LG], Jan. 2016.
7 M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," arXiv:1701.07875v3 [stat.ML], Dec. 2017.
8 I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, "Improved training of Wasserstein GANs," arXiv:1704.00028v3 [cs.LG], Dec. 2017.
9 X. Wei, B. Gong, Z. Liu, W. Lu, and L. Wang, "Improving the improved training of Wasserstein GANs," arXiv:1803.01541v1 [cs.CV], Mar. 2018.
10 C. Villani, Optimal Transport: Old and New, Vol. 338, Springer Science & Business Media, 2008.
11 S. Lee, H. Kim, H. Seok, and J. Nang, "Comparison of fine-tuned convolutional neural networks for clipart style classification," International Journal of Internet, Broadcasting and Communication, Vol. 9, No. 4, pp. 1-7, 2017. DOI: https://doi.org/10.7236/IJIBC.2017.9.4.1
12 Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, Vol. 86, No. 11, pp. 2278-2324, 1998. DOI: http://dx.doi.org/10.1109/5.726791
13 A. Krizhevsky, "Learning multiple layers of features from tiny images," Technical Report TR-2009, University of Toronto, 2009.
14 T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training GANs," In Advances in Neural Information Processing Systems, pp. 2226-2234, 2016.