Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen (Dept. of Information and Communication Engineering, Chosun University) ;
  • Kwon, Goo-Rak (Dept. of Information and Communication Engineering, Chosun University)
  • Received : 2022.08.31
  • Accepted : 2022.09.23
  • Published : 2022.09.30

Abstract

A deep neural network (DNN) contains variables whose values keep changing during training until the point of convergence is reached. These variables are the coefficients of the polynomial-like expressions that carry out feature extraction. In general, a DNN operates on multi-dimensional data, depending on the number of channels and the batch size used for training. However, after feature extraction and before the softmax (or another classifier), the features are converted from N dimensions into a single vector, where 'N' is the number of activation channels. This conversion usually takes place in a fully connected layer (FCL), also called a dense layer. This reduced 2D feature is the subject of our analysis, so the trained weights of this FCL are used for the weight-class correlation analysis. The popular DNN models selected for our study are ResNet-101, VGG-19, and GoogLeNet. Each model is either fine-tuned (with all pretrained weights initially transferred) or trained from scratch (with no weights transferred). The comparison is then made by plotting the distribution of the features and of the final FCL weights.
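The kind of inspection described above can be sketched in code. The following is a minimal, hypothetical example (PyTorch/torchvision assumed; it is not the authors' implementation) that pulls the final fully connected layer's weight matrix from each of the three architectures, once with ImageNet-pretrained weights (the fine-tuning starting point) and once with random initialization (the scratch case), and plots the two weight distributions side by side. In the paper the FCL is retrained on MRI classes; here the plot only illustrates the weight-distribution comparison.

# Illustrative sketch only: compare final FCL weight distributions for
# pretrained (fine-tuning start point) vs. randomly initialized (scratch) models.
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

def final_fc_weights(model):
    """Return the flattened weight matrix of the last Linear (fully connected) layer."""
    last_linear = None
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            last_linear = module
    return last_linear.weight.detach().flatten()

# Pretrained vs. scratch versions of the three architectures used in the study.
pairs = {
    "ResNet-101": (models.resnet101(weights="IMAGENET1K_V1"), models.resnet101(weights=None)),
    "VGG-19":     (models.vgg19(weights="IMAGENET1K_V1"),     models.vgg19(weights=None)),
    "GoogLeNet":  (models.googlenet(weights="IMAGENET1K_V1"), models.googlenet(weights=None)),
}

fig, axes = plt.subplots(1, len(pairs), figsize=(12, 3))
for ax, (name, (pretrained, scratch)) in zip(axes, pairs.items()):
    ax.hist(final_fc_weights(pretrained).numpy(), bins=100, alpha=0.6, label="pretrained")
    ax.hist(final_fc_weights(scratch).numpy(), bins=100, alpha=0.6, label="scratch")
    ax.set_title(name)
    ax.set_xlabel("FCL weight value")
    ax.legend()
plt.tight_layout()
plt.show()

In an actual fine-tuning or scratch-training experiment on MRI data, the final Linear layer would first be replaced with one sized to the number of MRI classes before training, and the same extraction would then be applied to the trained weights.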

Acknowledgement

This study was supported by research funds from Chosun University, 2022.
