http://dx.doi.org/10.9717/kmms.2022.25.4.600

Anchor Free Object Detection Continual Learning According to Knowledge Distillation Layer Changes  

Gang, Sumyung (Dept. of Computer Engineering, Graduate School, Keimyung University)
Chung, Daewon (Mathematics Major, Faculty of Basic Sciences, Keimyung University)
Lee, Joon Jae (Faculty of Computer Engineering, Keimyung University)
Abstract
In supervised learning, labels are required for all training data; in object detection in particular, every object in each image that belongs to a class to be learned must be annotated. Because of this burden, continual learning, which accumulates previously learned knowledge while minimizing catastrophic forgetting, has recently attracted attention. In this study, we propose a continual learning model that retains previously learned knowledge while learning to detect new objects. The proposed method is applied to CenterNet, an anchor-free object detection model, using knowledge distillation to enable continual learning. In particular, we posit that all output layers of the model must be distilled for the method to be most effective. Compared with LwF, the proposed method improves mAP by 23.3 percentage points in the 19+1 scenario and by 28.8 percentage points in the 15+5 scenario.
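As an illustration of the distillation scheme described above, the minimal PyTorch sketch below distills all three CenterNet output heads (the class heatmap plus the class-agnostic size and offset regressions) from a frozen teacher trained on the old classes to a student learning new ones. The head names ('hm', 'wh', 'reg') follow the public CenterNet repository [9]; the specific loss functions and weights here are illustrative assumptions, not the paper's reported configuration.

    import torch
    import torch.nn.functional as F

    def centernet_distill_loss(student_out, teacher_out, lam=(1.0, 0.1, 1.0)):
        # student_out / teacher_out: dicts of head outputs as in the
        # CenterNet repository: 'hm' (B x C x H x W class-center heatmap
        # logits), 'wh' (B x 2 x H x W box sizes), 'reg' (B x 2 x H x W
        # sub-pixel center offsets). Loss choices and weights are assumptions.
        lam_hm, lam_wh, lam_reg = lam

        # Distill the heatmap only over the teacher's (old) classes; the
        # student's extra channels for the new classes remain unconstrained.
        c_old = teacher_out['hm'].shape[1]
        loss_hm = F.mse_loss(torch.sigmoid(student_out['hm'][:, :c_old]),
                             torch.sigmoid(teacher_out['hm']))

        # The size and offset heads are class-agnostic, so distill them
        # over the full feature map.
        loss_wh = F.l1_loss(student_out['wh'], teacher_out['wh'])
        loss_reg = F.l1_loss(student_out['reg'], teacher_out['reg'])

        return lam_hm * loss_hm + lam_wh * loss_wh + lam_reg * loss_reg

In an incremental step such as 19+1, teacher_out would come from the frozen 19-class model and student_out from the 20-class model being trained; the distillation term is then added to the standard CenterNet detection loss computed on the new-class annotations.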
Keywords
Continual Learning; Object Detection; Knowledge Distillation; Anchor Free;
References
1 M. Masana, X. Liu, B. Twardowski, M. Menta, A.D. Bagdanov, and J. van de Weijer, "Class-incremental Learning: Survey and Performance Evaluation on Image Classification," arXiv Preprint, arXiv:2010.15277, pp. 1-26, 2020.
2 Z. Li and D. Hoiem, "Learning without Forgetting," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40, No. 12, pp. 2935-2947, 2018.
3 U. Michieli and P. Zanuttigh, "Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1114-1124, 2021.
4 K. Shmelkov, C. Schmid, and K. Alahari, "Incremental Learning of Object Detectors without Catastrophic Forgetting," Proceedings of the IEEE International Conference on Computer Vision, pp. 3420-3429, 2017.
5 X. Zhou, D. Wang, and P. Krahenbuhl, "Objects as Points," arXiv Preprint, arXiv:1904.07850, 2019.
6 Z. Tian, C. Shen, H. Chen, and T. He, "FCOS: A Simple and Strong Anchor-free Object Detector," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44, No. 4, pp. 1922-1933, 2022.
7 U. Michieli and P. Zanuttigh, "Incremental Learning Techniques for Semantic Segmentation," Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pp. 1-8, 2019.
8 U. Michieli and P. Zanuttigh, "Knowledge Distillation for Incremental Learning in Semantic Segmentation," Computer Vision and Image Understanding, Vol. 205, 103167, pp.1-16, 2021.
9 X. Zhou, CenterNet: Object Detection, 3D Detection, and Pose Estimation Using Center Point Detection, GitHub Repository, https://github.com/xingyizhou/CenterNet (accessed Oct. 24, 2021).
10 Z. Chen and B. Liu, "Lifelong Machine Learning," Synthesis Lectures on Artificial Intelligence and Machine Learning, Vol. 12, No. 3, pp. 1-207, 2018.
11 H. Yoon and J. Lee, "PCB Component Classification Algorithm Based on YOLO Network for PCB Inspection," Journal of Korea Multimedia Society, Vol. 24, No. 8, pp. 988-999, 2021.
12 S. Gang, D. Chung, and J.J. Lee, "Knowledge Distillation Based Continual Learning for PCB Part Detection," Journal of Korea Multimedia Society, Vol. 24, No. 7, pp. 868-879, 2021.
13 G. Hinton, O. Vinyals, and J. Dean, "Distilling the Knowledge in a Neural Network," arXiv Preprint, arXiv:1503.02531, 2015.
14 C. Peng, K. Zhao, S. Maksoud, M. Li, and B.C. Lovell, "SID: Incremental Learning for Anchor-free Object Detection via Selective and Inter-related Distillation," Computer Vision and Image Understanding, Vol. 210, 103229, pp. 1-8, 2021.