• Title/Summary/Keyword: grad-CAM

Empirical Analysis of a Fine-Tuned Deep Convolutional Model in Classifying and Detecting Malaria Parasites from Blood Smears

  • Montalbo, Francis Jesmar P.;Alon, Alvin S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.1
    • /
    • pp.147-165
    • /
    • 2021
  • In this work, we empirically evaluated the efficiency of the recent EfficientNetB0 model in identifying and diagnosing malaria parasite infections in blood smears. The dataset used was collected and classified by relevant experts from the Lister Hill National Centre for Biomedical Communications (LHNCBC). We prepared our samples with minimal image transformations, unlike other approaches, because we focused on the feature extraction capability of the EfficientNetB0 baseline model. We applied transfer learning to enlarge the initial feature sets and reduce the time needed to train our model. We then fine-tuned it to work with our proposed layers and re-trained the entire model on our prepared dataset. The highest overall accuracy attained in our evaluation was 94.70% at fifty epochs, followed by 94.68% within just ten. Additional visualization and analysis using the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm showed that our fine-tuned EfficientNetB0 detected infections more effectively than other recent state-of-the-art DCNN models. This study therefore concludes that, when fine-tuned, the recent EfficientNetB0 can generate highly accurate deep learning solutions for the identification of malaria parasites in blood smears without the need for stringent pre-processing, optimization, or data augmentation of images.
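
The fine-tuning recipe the abstract describes (an ImageNet-pretrained EfficientNetB0 backbone, a new classification head, and full re-training) might look roughly like the PyTorch sketch below. The torchvision weights enum, head size, optimizer, and learning rate are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load EfficientNet-B0 with ImageNet weights (transfer learning step).
weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
model = models.efficientnet_b0(weights=weights)

# Replace the 1000-class head with a binary head (parasitized / uninfected).
in_features = model.classifier[1].in_features  # 1280 for EfficientNet-B0
model.classifier = nn.Sequential(
    nn.Dropout(p=0.2),
    nn.Linear(in_features, 2),
)

# Fine-tuning: re-train the entire network, not just the new head.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimisation step on a batch of blood-smear images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```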

Face Emotion Recognition using ResNet with Identity-CBAM (Identity-CBAM ResNet 기반 얼굴 감정 식별 모듈)

  • Oh, Gyutea;Kim, Inki;Kim, Beomjun;Gwak, Jeonghwan
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.559-561
    • /
    • 2022
  • With the advent of the artificial intelligence era, technologies that recognize and respond to human emotions are being actively developed to provide personalized environments. Human emotions can be recognized from the face, voice, body movements, and biosignals, but facial expression is the most intuitive and accessible of these. In this paper, we therefore propose an Identity-CBAM module that combines the gates of the Convolutional Block Attention Module (CBAM) with residual blocks and skip connections for accurate facial emotion recognition. The CBAM gates and residual blocks emphasize the key feature information of each expression, giving the model richer context, while the skip connection makes the module robust to vanishing and exploding gradients. Using AI-HUB's composite video dataset for Korean emotion recognition, the data were divided into a total of six classes; applying the Identity-CBAM module improved F1-score by 0.4-2.7% and accuracy by 0.18-2.03% compared with vanilla ResNet50 and ResNet101. In addition, visualization with Guided Backpropagation and Guided Grad-CAM confirmed that important feature points were represented in finer detail. These results demonstrate that using the Identity-CBAM module together with ResNet50 or ResNet101 is more suitable for in-image facial expression classification than using the vanilla networks alone.
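
The abstract only outlines the Identity-CBAM idea (CBAM channel and spatial gates combined with an identity skip connection), so the following is a minimal PyTorch sketch of one plausible realization; the reduction ratio, kernel size, and exact gate wiring are assumptions rather than the authors' architecture. A module like this would typically be inserted after each residual stage of a ResNet-50/101 backbone.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """CBAM-style channel attention (shared MLP over avg- and max-pooled features)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialGate(nn.Module):
    """CBAM-style spatial attention over channel-wise avg and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class IdentityCBAM(nn.Module):
    """Attention-refined features added back to the input via a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = ChannelGate(channels)
        self.spatial_gate = SpatialGate()

    def forward(self, x):
        out = x * self.channel_gate(x)
        out = out * self.spatial_gate(out)
        return x + out  # identity skip connection
```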

Deep learning classification of transient noises using LIGO's auxiliary channel data

  • Oh, SangHoon;Kim, Whansun;Son, Edwin J.;Kim, Young-Min
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.2
    • /
    • pp.74.2-75
    • /
    • 2021
  • We demonstrate that a deep learning classifier that uses only the gravitational-wave (GW) detectors' auxiliary channel data can distinguish various types of non-Gaussian noise transients (glitches) with significant accuracy, i.e., ≳ 80%. The classifier is implemented as a multi-scale neural network (MSNN) in PyTorch. Glitches appearing in the GW strain data have been one of the main obstacles degrading the sensitivity of GW detectors, consequently hindering the detection and parameterization of GW signals. Numerous efforts have been devoted to tracking down their origins and to mitigating them; however, many glitches remain whose origins have not been unveiled. We apply the MSNN classifier to the auxiliary channel data corresponding to publicly available GravitySpy glitch samples from the LIGO O1 run, without using GW strain data. Investigating the auxiliary channel data of segments that coincide with glitches in the GW strain channel is particularly useful for finding noise sources, because these channels record physical and environmental conditions and the status of each part of the detector. Using only the auxiliary channel data, this classifier can provide an independent view of the data quality and can potentially hint at the origins of the glitches when combined with explainable AI techniques such as Layer-wise Relevance Propagation or Grad-CAM.
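
The abstract names the architecture (a multi-scale neural network implemented in PyTorch) but not its layout; one plausible multi-scale 1-D CNN over multichannel auxiliary time series is sketched below. The channel count, kernel sizes, and number of glitch classes are placeholders, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 9, 27)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        return torch.relu(torch.cat([b(x) for b in self.branches], dim=1))

class GlitchClassifier(nn.Module):
    """Classify glitch types from auxiliary-channel time series only."""
    def __init__(self, n_aux_channels=64, n_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            MultiScaleBlock(n_aux_channels, 32),   # -> 96 feature channels
            nn.MaxPool1d(4),
            MultiScaleBlock(96, 64),               # -> 192 feature channels
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(192, n_classes)

    def forward(self, x):  # x: (batch, aux_channels, time_samples)
        return self.head(self.features(x).squeeze(-1))
```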

Development and Validation of Spine Classification Model for Sarcopenia Diagnosis (근감소증 진단을 위한 척추 분류 모델 개발 및 검증)

  • Chung-sub Lee;Dong-Wook Lim;Si-Hyeong Noh;Chul Park;Chang-Won Jeong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.475-478
    • /
    • 2023
  • Skeletal muscle cross-sectional area measured from computed tomography (CT) is used to assess functions related to sarcopenia. Typical sarcopenia studies focus on skeletal muscle mass at the third lumbar vertebra, but various studies aiming to predict correlations with cancer or lung resection use skeletal muscle mass at other levels, such as the 4th, 7th, 8th, 10th, and 12th thoracic vertebrae. In this paper, we developed an artificial intelligence module for sarcopenia diagnosis by transfer-learning the CNN-based EfficientNetV2 to detect slices at each thoracic and lumbar level in chest and abdominal CT images. The module detects a total of 19 classes across whole chest and abdominal CT scans: Cervical, T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, L1, L2, L3, L4, L5, and Sacral. Using the test dataset, the model's accuracy was visualized with a confusion matrix and Grad-CAM, and the accuracy of the module was measured for validation. Finally, we applied the module to the multi-institutional collaborative research support platform we developed and presented the visualized results.
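
A minimal sketch of the transfer-learning step described above, assuming torchvision's EfficientNetV2-S variant and a 19-way classification head; the exact EfficientNetV2 variant, preprocessing, and training schedule are not stated in the abstract.

```python
import torch.nn as nn
from torchvision import models
from sklearn.metrics import confusion_matrix

NUM_LEVELS = 19  # Cervical, T1-T12, L1-L5, Sacral

# Transfer learning: start from ImageNet weights and replace the classification head.
weights = models.EfficientNet_V2_S_Weights.IMAGENET1K_V1
model = models.efficientnet_v2_s(weights=weights)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_LEVELS)

# After training, per-level performance on the test set can be summarised with a
# confusion matrix (paired in the abstract with Grad-CAM heatmaps for visual checks):
# cm = confusion_matrix(y_true, y_pred, labels=list(range(NUM_LEVELS)))
```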

Estimation of Heading Date of Paddy Rice from Slanted View Images Using Deep Learning Classification Model

  • Hyeokjin Bak;Hoyoung Ban;Seongryul Chang;Dongwon Gwon;Jae-Kyeong Baek;Jeong-Il Cho;Wan-Gyu Sang
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.80-80
    • /
    • 2022
  • Estimation of the heading date of paddy rice is laborious and time-consuming, so automatic estimation is highly desirable. In this experiment, deep learning classification models were used to classify two categories of rice (vegetative and reproductive stage) based on panicle initiation in the paddy field. Specifically, the dataset includes 444 slanted-view images belonging to the two categories, which were expanded to 1,497 images via the imgaug data augmentation library. We adopted two transfer learning strategies: first, transferring model weights pre-trained on ImageNet to six classification networks (VGGNet, ResNet, DenseNet, InceptionV3, Xception, and MobileNet); second, fine-tuning some layers of each network on our dataset. After training the CNN models, we used several evaluation metrics commonly applied to classification tasks, including accuracy, precision, recall, and F1-score. In addition, Grad-CAM was used to generate visual explanations for each image patch. Experimental results showed that InceptionV3 was the best-performing model in terms of accuracy, average recall, precision, and F1-score. The fine-tuned InceptionV3 model achieved an overall classification accuracy of 0.95 with a high F1-score of 0.95. Our CNN model also captured the change in rice heading date under different transplanting dates. This study demonstrates that an image-based deep learning model can reliably be used as an automatic monitoring system to detect the heading date of rice crops from CCTV camera images.
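
The two-stage strategy (ImageNet-pretrained backbones followed by partial fine-tuning) together with imgaug-based expansion might be sketched as follows; InceptionV3 is shown because it was the best performer, but the specific augmenters, frozen layers, and head size are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models
import imgaug.augmenters as iaa

# Augmentation pipeline used to expand the slanted-view image set (augmenters assumed).
augment = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Affine(rotate=(-15, 15), scale=(0.9, 1.1)),
    iaa.AddToBrightness((-20, 20)),
])

# Stage 1: ImageNet weights, new 2-class head (vegetative vs. reproductive), backbone frozen.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("fc", "AuxLogits"))

# Stage 2: additionally unfreeze the last Inception blocks and fine-tune at a lower rate.
for name, p in model.named_parameters():
    if name.startswith("Mixed_7"):
        p.requires_grad = True
```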

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.3
    • /
    • pp.187-201
    • /
    • 2024
  • Osteoporosis is a major global health issue, often remaining undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study was conducted using retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. The DL models developed in this study were constructed using image data, demographic/clinical information, and multi-modality data, respectively. Patients were categorized into normal, osteopenia, and osteoporosis groups based on their T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model had lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site for fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
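
The combined (multi-modality) model is described only in terms of its inputs; a common way to fuse a CT image encoder with demographic/clinical variables is late concatenation, sketched below. The backbone choice (ResNet-18 here), feature sizes, and number of clinical variables are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiModalOsteoNet(nn.Module):
    """CT image features concatenated with clinical variables,
    3-way output (normal / osteopenia / osteoporosis)."""
    def __init__(self, n_clinical=10, n_classes=3):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()            # 512-d image embedding
        self.image_encoder = backbone
        self.clinical_encoder = nn.Sequential(
            nn.Linear(n_clinical, 32), nn.ReLU(),
        )
        self.classifier = nn.Linear(512 + 32, n_classes)

    def forward(self, image, clinical):
        feats = torch.cat([self.image_encoder(image),
                           self.clinical_encoder(clinical)], dim=1)
        return self.classifier(feats)
```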

Binary classification of bolts with anti-loosening coating using transfer learning-based CNN (전이학습 기반 CNN을 통한 풀림 방지 코팅 볼트 이진 분류에 관한 연구)

  • Noh, Eunsol;Yi, Sarang;Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.651-658
    • /
    • 2021
  • Because bolts with anti-loosening coatings are used mainly for joining safety-related components in automobiles, accurate automatic screening of these coatings is essential to detect defects efficiently. The performance of the convolutional neural network (CNN) used in a previous study [Identification of bolt coating defects using CNN and Grad-CAM] increased with the amount of data available for learning image patterns and characteristics. On the other hand, obtaining the necessary amount of data for coated bolts is difficult, making training time-consuming. In this paper, using the same VGG16 model as in the previous study, transfer learning was applied to decrease the training time and achieve the same or better accuracy with less data. The classifier was trained taking into account the amount of training data available for this study and its similarity to the ImageNet data. With the fully connected layers as classifier, the highest accuracy achieved was 95%. To enhance performance further, the last convolution layer and the classifier were fine-tuned, which resulted in a 2% increase in accuracy (97%). This shows that transfer learning and fine-tuning can reduce learning time while maintaining high screening accuracy.
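
The two steps reported here (training a classifier on top of frozen VGG16 features, then additionally fine-tuning the last convolution layer) can be sketched as follows; the layer indices follow torchvision's VGG16 layout, and the binary head and freezing scheme are assumptions about the authors' exact setup.

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained VGG16 and swap in a binary head
# (defective vs. normal coating).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)

# Step 1: transfer learning - freeze the convolutional base and train only the classifier.
for p in model.features.parameters():
    p.requires_grad = False

# Step 2: fine-tuning - additionally unfreeze the last convolution layer
# (index 28 in torchvision's VGG16 `features`) and continue training.
for p in model.features[28:].parameters():
    p.requires_grad = True
```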

Diagnosis and Visualization of Intracranial Hemorrhage on Computed Tomography Images Using EfficientNet-based Model (전산화 단층 촬영(Computed tomography, CT) 이미지에 대한 EfficientNet 기반 두개내출혈 진단 및 가시화 모델 개발)

  • Youn, Yebin;Kim, Mingeon;Kim, Jiho;Kang, Bongkeun;Kim, Ghootae
    • Journal of Biomedical Engineering Research
    • /
    • v.42 no.4
    • /
    • pp.150-158
    • /
    • 2021
  • Intracranial hemorrhage (ICH) refers to acute bleeding inside the intracranial vault. Not only does this devastating disease carry a very high mortality rate, but it can also cause serious chronic impairment of sensory, motor, and cognitive functions. Therefore, a prompt and professional diagnosis of the disease is highly critical. Noninvasive brain imaging data are essential for clinicians to efficiently determine the locus of the brain lesion, the volume of bleeding, and subsequent cortical damage, and to plan clinical interventions. In particular, computed tomography (CT) images are used most often for the diagnosis of ICH. Diagnosing ICH from CT images requires medical specialists with sufficient diagnostic experience, and even when this condition is met, there are many cases where bleeding cannot be detected successfully because of factors such as a low signal-to-noise ratio and artifacts in the image itself. In addition, discrepancies between interpretations, or even misinterpretations, may occur, causing critical clinical consequences. To resolve these clinical problems, we developed a diagnostic model that predicts intracranial bleeding and its subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and epidural) by applying deep learning algorithms to CT images. We also constructed a visualization tool highlighting the regions of a CT image that are important for predicting ICH. Specifically, 1) 27,758 brain CT images from RSNA were pre-processed to minimize the computational load. 2) Three different CNN-based models (ResNet, EfficientNet-B2, and EfficientNet-B7) were trained on a training image dataset. 3) The diagnostic performance of each of the three models was evaluated on an independent test image dataset: EfficientNet-B7's performance (classification accuracy = 91%) was considerably greater than that of the other models. 4) Finally, based on the EfficientNet-B7 results, we visualized the internal bleeding lesions using Grad-CAM. Our research suggests that artificial intelligence-based diagnostic systems can help diagnose and treat brain diseases, resolving various problems in clinical situations.
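
Since Grad-CAM is the common thread of these results, a minimal hook-based implementation for an EfficientNet-style backbone is sketched below; the choice of target layer and the single-image interface are illustrative, not the pipeline the authors used.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.efficientnet_b7(weights=models.EfficientNet_B7_Weights.IMAGENET1K_V1)
model.eval()

activations, gradients = {}, {}
target_layer = model.features[-1]  # last convolutional block

def fwd_hook(_, __, output):
    activations["value"] = output.detach()

def bwd_hook(_, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx):
    """Return a heatmap of the regions driving the score of `class_idx`."""
    scores = model(image)                 # image: (1, 3, H, W)
    model.zero_grad()
    scores[0, class_idx].backward()
    # Global-average-pool the gradients to weight each feature map, then ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                         align_corners=False)[0, 0]
```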