Acknowledgement
This work was supported by the 2019 Inje University research grant.
References
- Chapelle, O., Schölkopf, B., and Zien, A. (2006). Semi-supervised learning. Cambridge, Massachusetts: MIT Press.
- Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. (2018). AutoAugment: Learning augmentation policies from data. arXiv:1805.09501.
- Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. (2019). RandAugment: Practical automated data augmentation with a reduced search space. arXiv:1909.13719.
- Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55, 119-139. https://doi.org/10.1006/jcss.1997.1504
- Grandvalet, Y. and Bengio, Y. (2004). Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, 529-536.
- Laine, S. and Aila, T. (2016). Temporal ensembling for semi-supervised learning. arXiv:1610.02242.
- Miyato, T., Maeda, S., Ishii, S., and Koyama, M. (2018). Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1979-1993.
- Oliver, A., Odena, A., Raffel, C., Cubuk, E. D., and Goodfellow, I. (2018). Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems, 3235-3246.
- Tarvainen, A. and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems, 1195-1204.
- Xie, Q., Dai, Z., Hovy, E., Luong, M. T., and Le, Q. V. (2019). Unsupervised data augmentation for consistency training. arXiv:1904.12848.
- Zagoruyko, S. and Komodakis, N. (2016). Wide residual networks. arXiv:1605.07146.