Efficient Path Selection in Continuous Learning Environment

  • Park, Seong-Hyeon (Dept. of Embedded Systems Engineering, Incheon National University)
  • Kang, Seok-Hoon (Dept. of Embedded Systems Engineering, Incheon National University)
  • Received: 2021.07.29
  • Accepted: 2021.09.14
  • Published: 2021.09.30

Abstract

In this paper, we propose a performance improvement of the LwF (Learning without Forgetting) method through efficient path selection in a continuous learning environment. To this end, we use a method that separates the convolutional layers, and we compare the performance and structure of the resulting model with those of conventional LwF. For the comparison, we run experiments on the MNIST, EMNIST, Fashion MNIST, and CIFAR10 datasets, which differ in complexity. The experiments show up to a 20% improvement in per-task accuracy, mitigating the catastrophic forgetting phenomenon in LwF-based continuous learning environments.
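
To make the abstract's two ingredients concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a network whose convolutional layers are separated into per-task paths selected at run time, plus the knowledge-distillation loss that LwF (Learning without Forgetting, Li & Hoiem) uses to preserve old-task outputs. `MultiPathNet`, `lwf_distillation_loss`, the layer sizes, and the temperature value are all illustrative assumptions; input shapes assume single-channel images such as MNIST.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiPathNet(nn.Module):
    """Illustrative 'separated convolutional layers': one small conv path
    and one classification head per task. The actual degree of parameter
    sharing in the paper is not specified by the abstract."""

    def __init__(self, num_tasks: int, classes_per_task: int):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),          # -> (N, 32, 1, 1)
            )
            for _ in range(num_tasks)
        ])
        self.heads = nn.ModuleList([
            nn.Linear(32, classes_per_task) for _ in range(num_tasks)
        ])

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Path selection: route the input through this task's conv path.
        feats = self.paths[task_id](x).flatten(1)  # (N, 32)
        return self.heads[task_id](feats)


def lwf_distillation_loss(new_logits: torch.Tensor,
                          old_logits: torch.Tensor,
                          temperature: float = 2.0) -> torch.Tensor:
    """LwF knowledge-distillation term: keep the updated model's outputs on
    old-task heads close to those of the frozen pre-update model."""
    log_p_new = F.log_softmax(new_logits / temperature, dim=1)
    p_old = F.softmax(old_logits / temperature, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2


# Conceptual loss when training task t (x, y drawn from the new task only):
#   loss = F.cross_entropy(model(x, t), y) \
#        + sum(lwf_distillation_loss(model(x, k), old_model(x, k))
#              for k in range(t))
```

With fully separate paths as sketched here, earlier tasks' parameters are never overwritten, which is how path selection by itself limits forgetting; the distillation term matters wherever parameters are shared across tasks. How the two are balanced in the paper is beyond what the abstract states.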

Keywords

Acknowledgement

This work was supported by an Incheon National University (International Cooperative) Research Grant in 2021 (2021-0089).
