http://dx.doi.org/10.23087/jkicsp.2022.23.4.002

Comparison of Adversarial Example Restoration Performance of VQ-VAE Model with or without Image Segmentation  

Tae-Wook Kim (Division of Software, Yonsei University)
Seung-Min Hyun (Department of Computer & Telecommunications Engineering, Yonsei University)
Ellen J. Hong (Division of Software, Yonsei University)
Publication Information
Journal of the Institute of Convergence Signal Processing, vol. 23, no. 4, 2022, pp. 194-199
Abstract
Industries that rely on large and complex image data require high-quality preprocessing to achieve high accuracy and usability. When adversarial examples, that is, existing image or video data contaminated with crafted noise, are introduced, they pose a serious risk, and the damaged data must be restored to preserve reliability, security, and correct results. Defense-GAN has previously been used for such restoration, but it suffers from long training times and low restoration quality. To improve on this, this paper proposes a restoration method that applies the VQ-VAE model to adversarial examples generated with FGSM, both with and without image segmentation. First, the generated adversarial examples are classified with a standard classifier. Next, the unsegmented data are passed through a pre-trained VQ-VAE, restored, and classified again. Then, the data divided into quadrants are passed through a 4-split-VQ-VAE model, the reconstructed fragments are reassembled, and the result is classified. Finally, the restored images and classification accuracies are compared, and the performance of the two configurations is analyzed according to whether or not the input is segmented.
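A minimal sketch of the evaluation pipeline described in the abstract, written in PyTorch. The `vqvae` and `classifier` callables are hypothetical stand-ins for the paper's pre-trained models (here `vqvae` is assumed to return the reconstructed image directly); only the FGSM perturbation and the quadrant split/reassembly follow their standard definitions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """Generate FGSM adversarial examples: x_adv = x + eps * sign(dJ/dx)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def split_quadrants(x):
    """Split an (N, C, H, W) batch into four (N, C, H/2, W/2) quadrants."""
    h, w = x.shape[-2] // 2, x.shape[-1] // 2
    return [x[..., :h, :w], x[..., :h, w:], x[..., h:, :w], x[..., h:, w:]]

def merge_quadrants(q):
    """Reassemble four quadrants back into a full image batch."""
    top = torch.cat([q[0], q[1]], dim=-1)
    bottom = torch.cat([q[2], q[3]], dim=-1)
    return torch.cat([top, bottom], dim=-2)

def restore_and_classify(x_adv, vqvae, classifier, split=False):
    """Restore adversarial inputs with a (4-split-)VQ-VAE, then classify."""
    if split:
        restored = merge_quadrants([vqvae(q) for q in split_quadrants(x_adv)])
    else:
        restored = vqvae(x_adv)
    return classifier(restored).argmax(dim=1)
```

Comparing `restore_and_classify(..., split=False)` against `split=True` on the same FGSM examples corresponds to the accuracy comparison between the unsegmented VQ-VAE and the 4-split-VQ-VAE reported in the paper.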
Keywords
VAE; VQ-VAE; FGSM; Adversarial Attack; Adversarial Example; Defense-VAE
References
1. Pouya Samangouei, Maya Kabkab, Rama Chellappa. (2018). Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models. ICLR 2018, arXiv:1805.06605
2. Xiang Li, Shihao Ji. (2019). Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks. MLCS 2019, arXiv:1812.06570
3. Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu. (2017). Neural Discrete Representation Learning. NIPS 2017, arXiv:1711.00937
4. Li Deng. (2012). The MNIST Database of Handwritten Digit Images for Machine Learning Research. IEEE Signal Processing Magazine, 29(6), 141-142.
5. Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. (2015). Explaining and Harnessing Adversarial Examples. ICLR 2015, arXiv:1412.6572