
Assessment of ASPECTS from CT Scans using Deep Learning

  • Khanh, Trinh Le Ba (Dept of Electronics and Computer Engineering, Chonnam National University) ;
  • Baek, Byung Hyun (Department of Radiology, Chonnam National University Hospital) ;
  • Kim, Seul Kee (Department of Radiology, Chonnam National University Hwasun Hospital) ;
  • Do, Luu-Ngoc (Chonnam University Research Institute of Medical Sciences, Chonnam National University) ;
  • Yoon, Woong (Department of Radiology, Chonnam National University Medical School and Hospital) ;
  • Park, Ilwoo (Department of Radiology, Chonnam National University Medical School and Hospital) ;
  • Yang, Hyung-Jeong (Dept of Electronics and Computer Engineering, Chonnam National University)
  • Received : 2019.03.08
  • Accepted : 2019.05.08
  • Published : 2019.05.31

Abstract

The Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) is a 10-point CT score designed to quantify early ischemic changes in patients with acute ischemic stroke. However, assessment of ASPECTS remains a challenge for neuroradiologists in stroke centers. The purpose of this study is to develop an automated ASPECTS scoring system that provides decision-making support by utilizing binary classification with a three-dimensional convolutional neural network to analyze CT images. The proposed method consists of three main steps: slice filtering, contrast enhancement, and image classification. The experimental results are promising, with a classification accuracy of 70%.

1. INTRODUCTION

 The diagnosis and management of acute ischemic stroke have become a major concern in the medical field since stroke is responsible for 5% of deaths annually [1]. Non-contrast computed tomographic (CT) imaging is most widely used in the diagnosis of stroke because of its fast scan time and low-cost assessment of the affected ischemic area. The Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) [2] is a quantitative and clinically validated method to measure the extent of ischemic signs on brain CT scans. Scoring early ischemic changes on CT scans remains a challenge, particularly for clinicians with minimal experience. Therefore, an automated ASPECTS scoring system that offers objective assessment and decision-making support is necessary.

 Convolutional neural networks (CNNs) have produced state-of-the-art results for image classification and segmentation [3]. These networks are composed of layers that learn representations of data with multiple levels of abstraction. Deep learning approaches save time and effort in feature extraction because the features that compose these layers are learned from the data and do not need to be designed by a human. CNNs have shown great potential for medical applications such as brain tumor segmentation [4], liver tumor segmentation [5], pancreas segmentation [6], and computer-aided diagnosis [7]. Based on this outstanding performance, CNNs are well suited for analyzing CT data.

 More recently, CNN deep learning techniques have been applied to lesion segmentation of acute ischemic stroke with diffusion-weighted imaging (DWI) [8]. In that study, two CNNs were combined: the first was an ensemble of two DeconvNets (EDD Net), and the second was a multi-scale convolutional label evaluation net (MUSCLE Net) that evaluated the results from the EDD Net to remove potential false positives. In recent years, evidence has been accumulating that automated ASPECTS methods based on machine learning are comparable to expert readings of ASPECTS [9, 10, 11].

 In this study, we develop a system for automatically assessing ASPECTS by utilizing binary classification with a three-dimensional CNN (3DCNN) [12] on CT data. The 3DCNN extracts information from the CT images and predicts their ASPECTS group. The proposed system achieved 70% accuracy in quantifying ASPECTS.

 The remainder of the paper is organized as follows. Section 2 presents our proposed approach for automatic assessment. Section 3 validates the effectiveness of our system with comprehensive experiments on a CT dataset. Finally, Section 4 presents the conclusion and future work.

2. THE PROPOSED METHOD

 In this section, we present the automatic assessment of ASPECTS using deep learning and data augmentation. ASPECTS is a 10-point scoring system for measuring early ischemic changes in patients with anterior circulation stroke, where "10" is given to a patient with the least degree of cerebral ischemia and "1" to a patient with the highest degree of cerebral ischemia [2]. The scores were divided into four groups: Groups 1 through 4 consist of patients with scores of 1–3, 4–6, 7–9, and 10, respectively. In our research, because the number of collected patients with an ASPECTS of ≤ 3 or equal to 10 is limited, we focus on the two groups with scores of 4–6 and 7–9. Fig. 1 shows an example of CT images from the two groups.

MTMDCW_2019_v22n5_573_f0002.png (image)

Fig. 1. (a) Group 4–6, (b) Group 7–9.
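The four-group partition above is a simple mapping from the raw score. As a sketch, a helper like the following (the function name is ours, not from the paper) makes the grouping explicit:

```python
def aspects_group(score):
    """Map an ASPECTS value (1-10) to the four groups described above.

    Returns 1 for scores 1-3, 2 for 4-6, 3 for 7-9, and 4 for a score
    of 10. The classifier in this study uses only groups 2 and 3.
    """
    if not 1 <= score <= 10:
        raise ValueError("ASPECTS must be between 1 and 10")
    return 4 if score == 10 else (score - 1) // 3 + 1
```
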

 The CT data varied in terms of spatial resolution and the number of slices, which ranged from 18 to 39. Since the original CT data contain a lot of information, it is useful to exclude the uninformative slices and keep only those that are informative for ASPECTS. Only the CT slices containing an area with the middle cerebral artery and its major branches, which are informative for ASPECTS, are included, and the cranial and caudal sections of the brain that are non-informative for ASPECTS are removed. Fig. 2 shows examples of informative and non-informative slices for the assessment of ASPECTS. After down- or up-sampling the remaining data, each CT sample consists of 17 slices, and each slice has a size of 80 × 80 × 3; therefore, each CT sample has a final resolution of 17 × 80 × 80 × 3.

MTMDCW_2019_v22n5_573_f0003.png (image)

Fig. 2. (a) Informative slice, and (b)–(c) Non-informative slices.
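The paper does not specify how the remaining slices are down- or up-sampled to a fixed count of 17; one simple option is nearest-index selection along the slice axis, sketched below (the function name is ours):

```python
import numpy as np

def resample_slices(volume, target=17):
    """Down- or up-sample along the slice axis to a fixed slice count
    by selecting (with repetition when up-sampling) evenly spaced
    slice indices. `volume` is indexed slice-first, e.g. (N, 80, 80, 3).
    """
    n = volume.shape[0]
    idx = np.linspace(0, n - 1, target).round().astype(int)
    return volume[idx]
```

The first and last slices are always retained, so the filtered anatomical extent is preserved regardless of the original slice count.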

 Because of the drastic difference in image contrast between the skull and the soft tissue (i.e., the brain) in head CT images, the contrast within the brain is not optimally displayed. Contrast limited adaptive histogram equalization (CLAHE) [13] is used in this study to enhance the contrast within the soft tissues of the brain CT images. Fig. 3 shows an example of a CT image after CLAHE is applied.

MTMDCW_2019_v22n5_573_f0004.png (image)

Fig. 3. (a) Original Image, (b) Image after CLAHE is applied.
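The core idea of CLAHE, clipping the histogram before equalization so that contrast amplification is bounded, can be illustrated on a single tile. The sketch below is a simplified single-tile version (full CLAHE additionally bilinearly interpolates the mappings of neighbouring tiles); the function name and clip value are ours:

```python
import numpy as np

def clahe_tile(tile, clip_limit=0.02, n_bins=256):
    """Contrast-limited histogram equalization for one 8-bit tile.

    The normalized histogram is clipped at `clip_limit` (fraction of
    pixels per bin), the clipped excess is redistributed uniformly
    across all bins, and the resulting CDF is used as the intensity
    look-up table.
    """
    tile = np.asarray(tile, dtype=np.uint8)
    hist = np.bincount(tile.ravel(), minlength=n_bins).astype(np.float64)
    hist /= hist.sum()                                  # probability mass per bin
    excess = np.maximum(hist - clip_limit, 0).sum()     # mass above the clip
    hist = np.minimum(hist, clip_limit) + excess / n_bins
    cdf = np.cumsum(hist)                               # sums to 1 by construction
    lut = np.round(cdf * (n_bins - 1)).astype(np.uint8)
    return lut[tile]
```

In practice, library implementations (e.g. OpenCV's `cv2.createCLAHE`) would be used rather than a hand-rolled version; the sketch only shows why the clip limit prevents noise in nearly uniform brain regions from being over-amplified.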

 Each CT sample has a final resolution of 17 × 80 × 80 × 3 and can be treated as a sequence of images. An augmentation step is applied to each slice of the CT sample: the training data are augmented by a three-degree rotation to the left and to the right, as shown in Fig. 4.

MTMDCW_2019_v22n5_573_f0005.png (image)

Fig. 4. (a) Preprocessed Image, (b) Rotation Right, (c) Rotation Left.
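The rotation augmentation can be sketched in plain NumPy with an inverse nearest-neighbour mapping; in practice a library routine such as `scipy.ndimage.rotate` would serve the same purpose. The function name and the zero fill value are our assumptions:

```python
import numpy as np

def rotate_slice(img, angle_deg):
    """Rotate a 2-D slice (optionally with channels, e.g. 80 x 80 x 3)
    about its centre by `angle_deg` degrees using inverse
    nearest-neighbour mapping; pixels mapped from outside the source
    are filled with 0.
    """
    h, w = img.shape[:2]
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse rotation: for each output pixel, find its source location.
    x0 = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    y0 = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    xi = np.rint(x0).astype(int)
    yi = np.rint(y0).astype(int)
    valid = (0 <= xi) & (xi < w) & (0 <= yi) & (yi < h)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out
```

Applying `rotate_slice(s, 3)` and `rotate_slice(s, -3)` to every slice of a sample yields the left- and right-rotated copies, tripling the training set.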

 In this study, we use a 3DCNN for the classification of ASPECTS. The model consists of four blocks of convolution and pooling layers, followed by two fully connected layers with two dropout layers. The first and second blocks each have one convolution layer with a kernel size of 3 × 3 × 3. The third and fourth blocks each have two convolution layers, with kernel sizes of 3 × 3 × 3 and 2 × 2 × 2, respectively. The numbers of feature maps in the four convolution blocks are 32, 64, 128, and 256, respectively.
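A PyTorch sketch of this architecture follows. The kernel sizes, channel counts, and the two fully connected and dropout layers match the description above; the padding, pooling sizes, hidden width, and dropout rate are our assumptions, since the paper does not state them:

```python
import torch
import torch.nn as nn

class Aspects3DCNN(nn.Module):
    """Sketch of the four-block 3DCNN described in the text.

    Input: (batch, 3, 17, 80, 80); output: logits for the two
    ASPECTS groups (scores 4-6 vs. 7-9).
    """
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Block 1: one 3x3x3 conv, 32 maps
            nn.Conv3d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),            # keep all 17 slices early (assumption)
            # Block 2: one 3x3x3 conv, 64 maps
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            # Block 3: two 3x3x3 convs, 128 maps
            nn.Conv3d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv3d(128, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            # Block 4: two 2x2x2 convs, 256 maps
            nn.Conv3d(128, 256, 2), nn.ReLU(),
            nn.Conv3d(256, 256, 2), nn.ReLU(),
            nn.MaxPool3d(2),                    # -> (256, 1, 4, 4)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(256 * 1 * 4 * 4, 512), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(512, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

With the training setup reported below, the optimizer would be constructed as `torch.optim.Adam(model.parameters(), lr=1e-6)`.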

 We train the 3DCNN after the preprocessing and augmentation steps. The input of the model is a CT sample with a size of 17 × 80 × 80 × 3, and the output is the group (scores of 4–6 or 7–9) to which the sample belongs. Fig. 5 shows the process of the proposed method. Our model uses the Adam optimizer with a constant learning rate of 10⁻⁶ and a batch size of 16.

MTMDCW_2019_v22n5_573_f0001.png (image)

Fig. 5. Block diagram of the proposed method.

3. EXPERIMENTAL RESULTS

 A total of 267 brain CT datasets, corresponding to the same number of patients, were collected from Chonnam National University Hospital: 95 and 172 datasets for the two groups with scores of 4–6 and 7–9, respectively. The datasets were divided into training (75%) and testing (25%) sets. After augmentation, the total number of training samples was 597. Table 1 provides details on the amount of data in each group. The results are evaluated by accuracy and the area under the curve (AUC) of the receiver operating characteristic (ROC) curve.

Table 1. Datasets for both training and testing

MTMDCW_2019_v22n5_573_t0001.png (image)
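For reference, the AUC reported throughout the experiments can be computed directly from its rank-based (Mann-Whitney U) formulation; the function name and implementation below are ours, not from the paper, and a library routine such as scikit-learn's `roc_auc_score` would be used in practice:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC AUC as the probability that a randomly chosen positive
    sample is scored higher than a randomly chosen negative one,
    with ties counted as half (the Mann-Whitney U formulation).
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```
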

 We evaluated the effects of preprocessing and augmentation on the performance of our model. Performance improved when slice filtering and CLAHE were used for preprocessing. The proposed model also generalizes better with data augmentation, which increases the variation in the training data. Table 2 shows the effect of preprocessing and augmentation on our 3DCNN model.

Table 2. Preprocessing and Data Augmentation

MTMDCW_2019_v22n5_573_t0002.png (image)

 We also compared the proposed model with pre-trained models such as VGG16 [14] and InceptionV3 [15]. During training, the weights of the pre-trained models were frozen, and only the weights of either a long short-term memory (LSTM) layer or a multi-layer perceptron (MLP) were adjusted. The lower accuracy of the pre-trained models shows that they failed to capture the appropriate features of CT images. We also compared the proposed model to the long-term recurrent convolutional network (LRCN) model [16], which was optimized for fully sequential data. The LRCN failed to capture the correct features of 3D data, resulting in lower performance compared to the 3DCNN. The test results are shown in Table 3.

Table 3. Performances of various CNN structures

MTMDCW_2019_v22n5_573_t0003.png (image)

4. CONCLUSION

 This study classified CT images into two ASPECTS categories by applying a deep neural network, a 3DCNN. We proposed a model for automatic assessment of ASPECTS by utilizing binary classification with a 3DCNN on CT data with data augmentation. We applied slice filtering to remove non-informative slices, and the quality of the CT images was enhanced using CLAHE. Augmentation was also applied to improve training efficiency. Our goal is to extend this method to improve the accuracy of the automatic ASPECTS assessment system with a combination of improved preprocessing steps and a fusion model. In future studies, we will extend our method to other brain imaging modalities.

References

  1. R. Feng, M. Badgeley, J.D. Mocco, and E.K. Oermann, "Deep Learning Guided Stroke Management: A Review of Clinical Applications," Journal of NeuroInterventional Surgery, Vol. 10, No. 4, pp. 358-362, 2018. https://doi.org/10.1136/neurintsurg-2017-013355
  2. J.H. Pexman, P.A. Barber, M.D. Hill, R.J. Sevick, A.M. Demchuk, M.E. Hudon, et al., "Use of the Alberta Stroke Program Early CT Score (ASPECTS) for Assessing CT Scans in Patients with Acute Stroke," American Journal of Neuroradiology, Vol. 22, No. 8, pp. 1534-1542, 2001.
  3. S.W. Park and D.Y. Kim, "Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning," Journal of Korea Multimedia Society, Vol. 21, No. 12, pp. 1387-1395, 2018. https://doi.org/10.9717/KMMS.2018.21.12.1387
  4. M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, et al., "Brain Tumor Segmentation with Deep Neural Networks," Medical Image Analysis, Vol. 35, No. 1, pp. 18-31, 2017. https://doi.org/10.1016/j.media.2016.05.004
  5. W. Li, F. Jia, and Q. Hu, "Automatic Segmentation of Liver Tumor in CT Images with Deep Convolutional Neural Networks," Journal of Computer and Communications, Vol. 3, No. 11, pp. 146-151, 2015. https://doi.org/10.4236/jcc.2015.311023
  6. H.R. Roth, A. Farag, L. Lu, E.B. Turkbey, and R.M. Summers, "Deep Convolutional Networks for Pancreas Segmentation in CT Imaging," Proceedings of Society of Photographic Instrumentation Engineers 9413, Medical Imaging 2015: Image Processing, pp. 94131G, 2015.
  7. H.C. Shin, H.R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, et al., "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning," IEEE Transactions on Medical Imaging, Vol. 35, No. 5, pp. 1285-1298, 2016. https://doi.org/10.1109/TMI.2016.2528162
  8. L. Chen, P. Bentley, and D. Rueckert, "Fully Automatic Acute Ischemic Lesion Segmentation in DWI Using Convolutional Neural Networks," NeuroImage: Clinical, Vol. 15, No. 1, pp. 633-643, 2017. https://doi.org/10.1016/j.nicl.2017.06.016
  9. C. Herweh, P.A. Ringleb, G. Rauch, S. Gerry, L. Behrens, M. Mohlenbruch, et al., "Performance of e-ASPECTS Software in Comparison to that of Stroke Physicians on Assessing CT Scans of Acute Ischemic Stroke Patients," International Journal of Stroke, Vol. 11, No. 4, pp. 438-445, 2016. https://doi.org/10.1177/1747493016632244
  10. S. Nagel, D. Sinha, D. Day, W. Reith, R. Chapot, P. Papanagiotou, et al., "e-ASPECTS Software is Non-inferior to Neuroradiologists in Applying the ASPECT Score to Computed Tomography Scans of Acute Ischemic Stroke Patients," International Journal of Stroke, Vol. 12, No. 6, pp. 615-622, 2017. https://doi.org/10.1177/1747493016681020
  11. J. Hampton-Till, M. Harrison, A.L. Kuhn, O. Anderson, D. Sinha, S. Tysoe, et al., "Automated Quantification of Stroke Damage on Brain Computed Tomography Scans: e- ASPECTS," European Medical Journal Neurology, Vol. 3, No. 1, pp. 69-74, 2015.
  12. D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, "Learning Spatiotemporal Features with 3D Convolutional Networks," Proceedings of the 2015 IEEE International Conference on Computer Vision, pp. 4489-4497, 2015.
  13. S.M. Pizer, E.P. Amburn, J.D. Austin, R. Cromartie, A. Geselowitz, T. Greer, et al., "Adaptive Histogram Equalization and Its Variations," Computer Vision, Graphics, and Image Processing, Vol. 39, No. 3, pp. 355-368, 1987. https://doi.org/10.1016/S0734-189X(87)80186-X
  14. K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, https://arxiv.org/abs/1409.1556 (accessed Jan., 10, 2019).
  15. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016.
  16. J. Donahue, L.A. Hendricks, M. Rohrbach, S. Venugopalan, S. Guadarrama, K. Saenko, et al., "Long-term Recurrent Convolutional Networks for Visual Recognition and Description," Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625-2634, 2015.