Learning Domain Invariant Representation via Self-Regularization

  • Hyun, Jaeguk (The 1st R&D Institute, Agency for Defense Development)
  • Lee, ChanYong (The 1st R&D Institute, Agency for Defense Development)
  • Kim, Hoseong (The 1st R&D Institute, Agency for Defense Development)
  • Yoo, Hyunjung (The 1st R&D Institute, Agency for Defense Development)
  • Koh, Eunjin (The 1st R&D Institute, Agency for Defense Development)
  • Received : 2021.03.04
  • Accepted : 2021.06.04
  • Published : 2021.08.05

Abstract

Unsupervised domain adaptation often provides impressive solutions for handling domain shift. Most current approaches assume that abundant unlabeled target data is available for training, but this assumption does not always hold in practice. To tackle this issue, we propose a general solution to the domain gap minimization problem that requires no target data. Our method consists of two regularization steps. The first is pixel regularization by arbitrary style transfer. Recently, several methods have brought style transfer algorithms into domain adaptation and domain generalization, using them to remove the texture bias of source domain data. We also use style transfer to remove texture bias, but our method depends on neither the domain adaptation nor the domain generalization paradigm. The second is feature regularization by feature alignment: by adding a feature alignment loss term to the model loss, the model learns a domain invariant representation more efficiently. We evaluate our regularization methods in several experiments on both small and large datasets, and show that our model learns domain invariant representations as well as unsupervised domain adaptation methods do.
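As a concrete illustration of these two steps, the following is a minimal PyTorch sketch. The adain function implements the Adaptive Instance Normalization used for arbitrary style transfer in [15]; the toy Classifier network, the L2 feature-alignment term, and the weighting factor lam are assumptions made for illustration, not the authors' exact architecture or loss. In practice, the stylized view x_stylized would be produced by a pretrained style-transfer network, with style images drawn from an external set such as Painter by Numbers [27].

```python
# Minimal sketch of the two regularization steps described above (PyTorch).
# The AdaIN operation follows Huang & Belongie [15]; the toy classifier, the
# L2 feature-alignment loss, and the weight `lam` are illustrative assumptions,
# not the authors' exact architecture or loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: shift/scale the content feature map to
    the channel-wise mean and std of the style feature map."""
    b, c = content.shape[:2]
    c_flat, s_flat = content.view(b, c, -1), style.view(b, c, -1)
    c_mean, c_std = c_flat.mean(-1, keepdim=True), c_flat.std(-1, keepdim=True) + eps
    s_mean, s_std = s_flat.mean(-1, keepdim=True), s_flat.std(-1, keepdim=True) + eps
    return (s_std * (c_flat - c_mean) / c_std + s_mean).view_as(content)


class Classifier(nn.Module):
    """Toy backbone + head standing in for the ResNet/VGG models of the paper."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        feat = self.backbone(x)        # pooled feature used for alignment
        return feat, self.head(feat)


def training_step(model, x_source, x_stylized, y, lam=0.1):
    """Pixel regularization: x_stylized is a style-transferred view of x_source
    (e.g., produced by an AdaIN-style encoder-decoder built around `adain`).
    Feature regularization: align the features of the two views."""
    feat_src, logits_src = model(x_source)
    feat_sty, logits_sty = model(x_stylized)
    cls_loss = F.cross_entropy(logits_src, y) + F.cross_entropy(logits_sty, y)
    align_loss = F.mse_loss(feat_src, feat_sty)    # assumed L2 alignment term
    return cls_loss + lam * align_loss


if __name__ == "__main__":
    model = Classifier()
    x = torch.randn(4, 3, 64, 64)        # source-domain images
    x_sty = torch.randn(4, 3, 64, 64)    # stylized views (placeholder)
    y = torch.randint(0, 10, (4,))
    loss = training_step(model, x, x_sty, y)
    loss.backward()
```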

References

  1. Y. Ganin, V. Lempitsky, "Unsupervised Domain Adaptation by Backpropagation," International Conference on Machine Learning, pp. 1180-1189, 2015.
  2. J. Deng, W. Dong, R. Socher, L. Li, K. Li, L. Fei-Fei, "Imagenet: A Large-Scale Hierarchical Image Database," IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009.
  3. R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, W. Brendel, "ImageNet-Trained CNNs are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness," arXiv Preprint arXiv:1811.12231, 2018.
  4. X. Peng, B. Usman, N. Kaushik, D. Wang, J. Hoffman, K. Saenko, "Visda: A Synthetic-to-Real Benchmark for Visual Domain Adaptation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 2021-2026, 2018.
  5. E. Tzeng, J. Hoffman, K. Saenko, T. Darrell, "Adversarial Discriminative Domain Adaptation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167-7176, 2017.
  6. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, "Generative Adversarial Nets," Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
  7. K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, D. Krishnan, "Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3722-3731, 2017.
  8. J. Hoffman, E. Tzeng, T. Park, J. Y. Zhu, P. Isola, K. Saenko, A. A. Efros, T. Darrell, "Cycada: Cycle-Consistent Adversarial Domain Adaptation," International Conference on Machine Learning, pp. 1989-1998, 2018.
  9. S. Motiian, Q. Jones, S. Iranmanesh, G. Doretto, "Few-Shot Adversarial Domain Adaptation," Advances in Neural Information Processing Systems, pp. 6670-6680, 2017.
  10. X. Xu, X. Zhou, R. Venkatesan, G. Swaminathan, O. Majumder, "d-SNE: Domain Adaptation Using Stochastic Neighborhood Embedding," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2497-2506, 2019.
  11. D. Li, Y. Yang, Y.-Z. Song, T. M. Hospedales, "Deeper, Broader and Artier Domain Generalization," Proceedings of the IEEE International Conference on Computer Vision, pp. 5542-5550, 2017.
  12. Y. Balaji, S. Sankaranarayanan, R. Chellappa, "MetaReg: Towards Domain Generalization Using Meta-Regularization," Advances in Neural Information Processing Systems, pp. 998-1008, 2018.
  13. L. A. Gatys, A. S. Ecker, M. Bethge, "Image Style Transfer Using Convolutional Neural Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423, 2016.
  14. J. Johnson, A. Alahi, L. Fei-Fei, "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," European Conference on Computer Vision, pp. 694-711, 2016.
  15. X. Huang, S. Belongie, "Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization," Proceedings of the IEEE International Conference on Computer Vision, pp. 1501-1510, 2017.
  16. Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, M. H. Yang, "Universal Style Transfer via Feature Transforms," Advances in Neural Information Processing Systems, pp. 386-396, 2017.
  17. M. D. Zeiler, R. Fergus, "Visualizing and Understanding Convolutional Networks," European Conference on Computer Vision, pp. 818-833, 2014.
  18. T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, P. Dollar, C. L. Zitnick, "Microsoft COCO: Common Objects in Context," European Conference on Computer Vision, pp. 740-755, 2014.
  19. K. Simonyan, A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv preprint arXiv:1409.1556, 2014.
  20. P. Arbelaez, M. Maire, C. Fowlkes, J. Malik, "Contour Detection and Hierarchical Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 5, pp. 898-916, 2010. https://doi.org/10.1109/TPAMI.2010.161
  21. K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
  22. B. Sun, K. Saenko, "Deep CORAL: Correlation Alignment for Deep Domain Adaptation," European Conference on Computer Vision, pp. 443-450, 2016.
  23. M. Long, Y. Cao, J. Wang, M. Jordan, "Learning Transferable Features with Deep Adaptation Networks," International Conference on Machine Learning, pp. 97-105, 2015.
  24. Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, A. Y. Ng, "Reading Digits in Natural Images with Unsupervised Feature Learning," NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
  25. M. Long, H. Zhu, J. Wang, M. I. Jordan, "Unsupervised Domain Adaptation with Residual Transfer Networks," Advances in Neural Information Processing Systems, pp. 136-144, 2016.
  26. S. Lee, D. Kim, S. G. Jeong, "Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation," Proceedings of the IEEE International Conference on Computer Vision, pp. 91-100, 2019.
  27. K. Nichol, "Painter by Numbers, WikiArt," www.kaggle.com/c/painter-by-numbers/data, 2016 (accessed 2016).
  28. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE, Vol. 86, No. 11, pp. 2278-2324, 1998. https://doi.org/10.1109/5.726791
  29. D. P. Kingma, J. L. Ba, "Adam: A Method for Stochastic Optimization," Proceedings of the International Conference on Learning Representations, arXiv:1412.6980, 2015.
  30. R. Shu, H. H. Bui, H. Narui, S. Ermon, "A DIRT-T Approach to Unsupervised Domain Adaptation," Proceedings of the International Conference on Learning Representations, arXiv:1802.08735, 2018.
  31. M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi, W. Li, "Deep Reconstruction-Classification Networks for Unsupervised Domain Adaptation," European Conference on Computer Vision, pp. 597-613, 2016.
  32. L. Van der Maaten, G. Hinton, "Visualizing Data Using t-SNE," Journal of Machine Learning Research, Vol. 9, pp. 2579-2605, 2008.