References
- R. M. French, "Catastrophic forgetting in connectionist networks," Trends in Cognitive Sciences, vol.3, no.4, pp.128-135, 1999. DOI: 10.1016/S1364-6613(99)01294-2
- G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, "Continual lifelong learning with neural networks: A review," Neural Networks, vol.113, pp.54-71, 2019. DOI: 10.1016/j.neunet.2019.01.012
- F. Zenke, B. Poole, and S. Ganguli, "Continual learning through synaptic intelligence," Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.3987-3995, 2017. DOI: 10.5555/3305890.3306093
- Y. Hsu, Y. Liu, A. Ramasamy, and Z. Kira, "Re-evaluating continual learning scenarios: A categorization and case for strong baselines," arXiv:1810.12488, 2018.
- J. Yoon, E. Yang, J. Lee, and S. J. Hwang, "Lifelong learning with dynamically expandable networks," arXiv:1708.01547, 2017.
- H. Shin, J. K. Lee, J. Kim, and J. Kim, "Continual learning with deep generative replay," arXiv:1705.08690, 2017.
- G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," NIPS Deep Learning Workshop, arXiv:1503.02531, 2015.
- K. McRae and P. A. Hetherington, "Catastrophic interference is eliminated in pretrained networks," Proceedings of the 15th Annual Conference of the Cognitive Science Society, pp.723-728, 1993.
- J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, "Overcoming catastrophic forgetting in neural networks," Proceedings of the National Academy of Sciences, vol.114, no.13, pp.3521-3526, 2017. DOI: 10.1073/pnas.1611835114
- Z. Li and D. Hoiem, "Learning without forgetting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.40, no.12, pp.2935-2947, 2018. DOI: 10.1109/TPAMI.2017.2773081
- S. Zagoruyko and N. Komodakis, "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer," arXiv:1612.03928, 2016.
- B. Heo, M. Lee, S. Yun, and J. Y. Choi, "Knowledge transfer via distillation of activation boundaries formed by hidden neurons," Proceedings of the AAAI Conference on Artificial Intelligence, vol.33, no.1, pp.3779-3787, 2019. DOI: 10.1609/aaai.v33i01.33013779