Acknowledgement
This research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 2019R1D1A3A03103736), and in part by the Project for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Ministry of SMEs and Startups in 20 (Grant No. S3114049).
References
- C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the Computer Vision and Pattern Recognition, pp.1-9, 2015.
- J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu, "CNN-RNN: A unified framework for multi-label image classification," in Proceedings of the Computer Vision and Pattern Recognition, pp.2285-2294, 2016.
- K. Simonyan and A. Zisserman, "Two-stream convolutional networks for action recognition in videos," in Proceedings of the Neural Information Processing Systems, pp.568-576, 2014.
- C. Feichtenhofer, A. Pinz, and A. Zisserman, "Convolutional two-stream network fusion for video action recognition," in Proceedings of the Computer Vision and Pattern Recognition, pp.1933-1941, 2016.
- A. Diba, A. Pazandeh, and L. V. Gool, "Efficient two-stream motion and appearance 3D CNNs for video classification," arXiv:1608.08851, 2016.
- G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," in Neural Information Processing Systems Deep Learning Workshop, 2014.
- S. Kong, T. Guo, S. You, and C. Xu, "Learning student networks with few data," in Proceedings of the AAAI Conference on Artificial Intelligence, Vol.34, No.4, pp.4469-4476, 2020.
- P. Bashivan, M. Tensen, and J. J. DiCarlo, "Teacher guided architecture search," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp.5320-5329, 2019.
- D. Shah, V. Trivedi, V. Sheth, A. Shah, and U. Chauhan, "ResTS: Residual deep interpretable architecture for plant disease detection," Information Processing in Agriculture, 2021. https://doi.org/10.1016/j.inpa.2021.06.001
- C. Zach, T. Pock, and H. Bischof, "A duality based approach for realtime TV-L1 optical flow," in DAGM 2007: Pattern Recognition, Vol.4713, pp.214-223, 2007.
- H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: A large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp.2556-2563, 2011.
- K. Soomro, A. R. Zamir, and M. Shah, "UCF101: A dataset of 101 human actions classes from videos in the wild," arXiv preprint arXiv:1212.0402, 2012.
- S. Sun, Z. Kuang, L. Sheng, W. Ouyang, and W. Zhang, "Optical flow guided feature: A fast and robust motion representation for video action recognition," in Proceedings of the Computer Vision and Pattern Recognition, pp.1-9, 2018.
- Y. Zhu, Z. Lan, S. Newsam, and A. G. Hauptmann, "Hidden two-stream convolutional networks for action recognition," arXiv preprint arXiv:1704.00389, 2017.
- J. Y.-H. Ng, J. Choi, J. Neumann, and L. S. Davis, "ActionFlowNet: Learning motion representation for action recognition," in IEEE Winter Conference on Applications of Computer Vision (WACV), pp.1616-1624, 2018.
- Y. Zhao and H. Lee, "FTSnet: A simple convolutional neural networks for action recognition," in Proceedings of the Annual Conference of KIPS (ACK) 2021, pp.878-879, 2021.
- K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the Computer Vision and Pattern Recognition, pp.770-778, 2016.
- S. Xiao, J. Feng, J. Xing, H. Lai, S. Yan, and A. Kassim, "Robust facial landmark detection via recurrent attentive-refinement networks," in Proceedings of the European Conference on Computer Vision (ECCV), pp.57-72, 2016.
- Z. Wang, Q. She, and A. Smolic, "ACTION-Net: Multipath excitation for action recognition," in Proceedings of the Computer Vision and Pattern Recognition, pp.13214-13223, 2021.
- L. Wang, Z. Tong, B. Ji, and G. Wu, "TDN: Temporal difference networks for efficient action recognition," in Proceedings of the Computer Vision and Pattern Recognition, pp.1895-1904, 2021.
- T. Hui, X. Tang, and C. C. Loy, "A lightweight optical flow CNN-Revisiting data fidelity and regularization," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.43, No.8, pp.2555-2569, 2021. https://doi.org/10.1109/TPAMI.2020.2976928
- K. Luo, C. Wang, S. Liu, H. Fan, J. Wang, and J. Sun, "UPFlow: Upsampling pyramid for unsupervised optical flow learning," in Proceedings of the Computer Vision and Pattern Recognition, pp.1045-1054, 2021.