• Title/Summary/Keyword: grad-CAM


Efficient Osteoporosis Prediction Using A Pair of Ensemble Models

  • Choi, Se-Heon;Hwang, Dong-Hwan;Kim, Do-Hyeon;Bak, So-Hyeon;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.45-52 / 2021
  • In this paper, we propose a prediction model for osteopenia and osteoporosis based on a convolutional neural network (CNN) using computed tomography (CT) images. A CNN applied to a single CT image has difficulty exploiting the local features that are important for diagnosis. We therefore propose a compound model with two identical subnetworks. As input, two different texture images are used, each converted from a single normalized CT image. The two networks learn different information through a dissimilarity loss function. As a result, our model learns various features of a single CT image, including important local features, and we then ensemble the two networks to improve the accuracy of predicting osteopenia and osteoporosis. In experiments, our method shows an accuracy of 77.11%, and the features the model attends to are confirmed by Grad-CAM visualization.
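The abstract does not specify the form of the dissimilarity loss that pushes the two subnetworks toward different features. A minimal sketch of one common choice, penalizing the squared cosine similarity between the two branches' feature vectors, might look like the following (the function name and the NumPy stand-in for framework tensors are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def dissimilarity_loss(feat_a, feat_b, eps=1e-8):
    """Penalize agreement between two branches' feature vectors.

    Returns the squared cosine similarity: 0 when the features are
    orthogonal (maximally dissimilar), 1 when they point the same way.
    """
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return cos ** 2

# Orthogonal features incur no penalty; parallel features incur the maximum.
print(dissimilarity_loss([1.0, 0.0], [0.0, 1.0]))  # → 0.0
print(dissimilarity_loss([1.0, 0.0], [2.0, 0.0]))  # close to 1.0
```

Minimizing this term alongside each branch's classification loss would encourage the two networks to encode complementary information before the ensemble step.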

Blurring of Swear Words in Negative Comments through Convolutional Neural Network (컨볼루션 신경망 모델에 의한 악성 댓글 모자이크처리 방안)

  • Kim, Yumin;Kang, Hyobin;Han, Suhyun;Jeong, Hieyong
    • Journal of Korea Society of Industrial Information Systems / v.27 no.2 / pp.25-34 / 2022
  • With the growth of online services, the ripple effect of negative comments is increasing and the damage from cyber violence is rising. Methods such as forbidden-word filtering and reporting systems mitigate this, but it is challenging to eradicate negative comments. This study therefore aimed to improve the accuracy of classifying negative comments using deep learning and to blur the parts corresponding to profanity. Training under two different conditions guided the choice of the number of layers and filters in the deep learning model. An accuracy of 88% was confirmed with 90% of the dataset used for training and 10% for testing. In addition, Grad-CAM enabled us to locate and blur the swear words in negative comments. While classification based on a simple forbidden-word list achieved only 56% accuracy, blurring negative comments through the deep learning model proved more effective.
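Grad-CAM, the localization technique shared by most entries in this listing, combines the last convolutional layer's feature maps with the gradients of the class score. A minimal NumPy sketch of the core computation, assuming the activations and gradients have already been extracted from a framework such as PyTorch or TensorFlow, is:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations: feature maps of shape (C, H, W) from the last conv layer.
    gradients:   gradients of the class score w.r.t. those maps, same shape.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))                              # shape (C,)
    # Weighted sum of the feature maps over channels, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                          # scale to [0, 1]
    return cam

acts = np.random.rand(8, 4, 4)
grads = np.random.rand(8, 4, 4)
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (4, 4)
```

In the blurring application above, the high-response regions of such a heatmap (upsampled to the input size) would mark the token positions to mosaic.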

Utilizing Mean Teacher Semi-Supervised Learning for Robust Pothole Image Classification

  • Inki Kim;Beomjun Kim;Jeonghwan Gwak
    • Journal of the Korea Society of Computer and Information / v.28 no.5 / pp.17-28 / 2023
  • Potholes on paved roads can severely damage vehicles traveling at high speed and may even lead to fatalities. Manual pothole detection by human workers is commonly used to prevent pothole-related accidents, but it is economically and temporally inefficient because it exposes workers on the road, and potholes in certain categories are difficult to predict. Completely preventing potholes is therefore nearly impossible, and even limiting their formation is difficult given the influence of ground conditions closely related to the road environment. In addition, expert-guided labeling is required to construct a dataset. In this paper, we therefore applied the Mean Teacher technique, a semi-supervised learning method based on knowledge distillation, to achieve robust pothole image classification even with limited labeled data. Using performance metrics and Grad-CAM, we showed that with semi-supervised learning, 15 pre-trained CNN models achieved an average accuracy of 90.41%, with a performance difference of between 2% and 9% compared to supervised learning.
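The defining step of the Mean Teacher technique is that the teacher network's weights are not trained directly but are an exponential moving average (EMA) of the student's weights. A minimal sketch of that update, using plain NumPy arrays in place of real network parameters (the parameter-list representation and `alpha` value are illustrative assumptions), is:

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean Teacher update: after each optimization step, each teacher
    weight becomes an exponential moving average of the student weight."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [np.zeros(3)]            # teacher starts at 0
student = [np.ones(3)]             # student held fixed at 1 for illustration
for _ in range(5):                 # five optimization steps
    teacher = ema_update(teacher, student, alpha=0.9)
print(teacher[0])                  # drifts toward the student: ≈ 1 - 0.9**5 ≈ 0.41
```

During training, the student is additionally penalized for disagreeing with the teacher's predictions on unlabeled images, which is what lets the limited labels go further.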

Fabrication of Micro-reactor by 3D Printing Machine (3D 프린터를 이용한 마이크로 리액터 가공에 관한 연구)

  • Choi, Hae Woon;Yoon, Sung Chul;Ma, Jae Kwon;Bang, Dae Wook
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.23 no.3 / pp.218-222 / 2014
  • A 3D printer was used to fabricate a micro-TAS system for biomedical applications. Polymeric medical device fabrication based on a 3D printer can be performed under atmospheric conditions. A CAD- and CAM-based system is a flexible way to design medical components, and a 3D printer is a suitable device for this task. In this research, a 100-µm-wide fluidic channel was fabricated with a high aspect ratio. A cross-sectional SEM image confirmed its possible use in a micro-reactor made with 3D printers. CNC-machined samples were compared to 3D-printer-fabricated samples, and the advantages and disadvantages were discussed. Based on the SEM images, the surface roughness of the 3D-printed reactor was not affected by wet or dry conditions due to its manufacturing principle. An aspect ratio of 5 to 1 was achievable with 100-µm-wide fluid channels. No melting was found, and the channels were straight enough to be used for micro-reactors.

Sketch Classification using Unsigned Distance Field (Unsigned Distance Field를 이용한 Sketch Classification)

  • Kim, Min Woo;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.110-112 / 2021
  • In this paper, a sketch is converted into an unsigned distance field and used as the input to a sketch classification network. We also propose an unsigned distance field scaling factor that enables a trade-off between the global and local information of the sketch preserved in the unsigned distance field. Experiments with various scaling factor values confirmed that classification performance improves when more local information is retained than in the conventional unsigned distance field. In addition, using Smooth Grad-CAM++, we confirmed that converting sketches into dense data makes training more stable and leads the network to classify sketches into the correct class based on more reasonable evidence.
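An unsigned distance field replaces each pixel of a sparse binary sketch with a value derived from its distance to the nearest stroke, giving the network a dense input. A brute-force NumPy sketch of the idea, where the exponential falloff and the way the scaling factor trades global coverage for local sharpness are illustrative assumptions rather than the paper's exact formulation, is:

```python
import numpy as np

def unsigned_distance_field(strokes, scale=1.0):
    """Convert a binary sketch (1 = stroke pixel) into a dense field.

    Each pixel gets exp(-scale * d), where d is the distance to the
    nearest stroke pixel: larger scale = faster decay = more local detail.
    Brute-force over stroke pixels; fine for small illustrative grids.
    """
    strokes = np.asarray(strokes)
    ys, xs = np.nonzero(strokes)
    pts = np.stack([ys, xs], axis=1).astype(float)        # stroke coordinates
    h, w = strokes.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                    axis=-1).astype(float)                # (h, w, 2) pixel coords
    # Distance from every pixel to its nearest stroke pixel.
    d = np.min(np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :],
                              axis=-1), axis=2)
    return np.exp(-scale * d)

sketch = np.zeros((5, 5), dtype=int)
sketch[2, 2] = 1                       # a single stroke pixel
udf = unsigned_distance_field(sketch, scale=1.0)
print(udf[2, 2])   # 1.0 on the stroke itself, decaying away from it
```

A larger `scale` concentrates the response near the strokes, which matches the paper's finding that retaining more local information improves classification.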

Classification of Whole Body Bone Scan Image with Bone Metastasis using CNN-based Transfer Learning (CNN 기반 전이학습을 이용한 뼈 전이가 존재하는 뼈 스캔 영상 분류)

  • Yim, Ji Yeong;Do, Thanh Cong;Kim, Soo Hyung;Lee, Guee Sang;Lee, Min Hee;Min, Jung Joon;Bom, Hee Seung;Kim, Hyeon Sik;Kang, Sae Ryung;Yang, Hyung Jeong
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1224-1232 / 2022
  • Whole body bone scan is the most frequently performed nuclear medicine imaging study for evaluating bone metastasis in cancer patients. We evaluated the performance of a VGG16-based transfer learning classifier for bone scan images in which metastatic bone lesions were present. A total of 1,000 bone scans from 1,000 cancer patients (500 with bone metastasis, 500 without) were evaluated. Bone scans were labeled abnormal/normal for bone metastasis using medical reports and image review. Subsequently, gradient-weighted class activation maps (Grad-CAMs) were generated for explainable AI. The proposed model showed an AUROC of 0.96 and an F1-score of 0.90, outperforming VGG16, ResNet50, Xception, DenseNet121, and InceptionV3. Grad-CAM visualization showed that, when classifying whole body bone scan images with bone metastases, the proposed model focuses on hot uptakes, which indicate active bone lesions.

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.337-345 / 2023
  • Estimating the rice heading date is one of the agricultural tasks most closely tied to productivity. However, due to abnormal climates around the world, it is becoming increasingly challenging to estimate the rice heading date, so a more objective classification method is needed than the existing ones. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images were preprocessed into input data for the CNN model. The CNN architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used for image classification. All models showed an accuracy of 0.98 or higher regardless of architecture and image type. We also used Grad-CAM to visually check which image features the model attended to in its classification, and then verified that our model accurately estimates the rice heading date in paddy fields: the estimated heading dates differed by approximately one day on average across the four paddy fields. This suggests that the heading date can be estimated automatically and quantitatively from various paddy field monitoring images.

A COVID-19 Chest X-ray Reading Technique based on Deep Learning (딥 러닝 기반 코로나19 흉부 X선 판독 기법)

  • Ann, Kyung-Hee;Ohm, Seong-Yong
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.789-795 / 2020
  • Many deaths have been reported due to the worldwide COVID-19 pandemic. To prevent the further spread of COVID-19, it is necessary to read images of suspected patients quickly and accurately and take appropriate measures. To this end, this paper introduces a deep-learning-based COVID-19 chest X-ray reading technique that can assist image reading by informing medical staff whether a patient is infected. First, a sufficient dataset must be secured to train the reading model, but the currently available COVID-19 open datasets do not contain enough images to ensure learning accuracy. We therefore addressed the image-count imbalance problem that degrades AI learning performance by using a Stacked Generative Adversarial Network (StackGAN++). Next, a DenseNet-based classification model was trained on the augmented dataset to build the reading model. This model performs binary classification of normal versus COVID-19 chest X-rays, and its performance was evaluated using part of the real image data as test data. Finally, the reliability of the model was supported by using Grad-CAM, one of the explainable artificial intelligence (XAI) techniques, to present the basis for judging the presence or absence of disease in the input image.

Accuracy of one-step automated orthodontic diagnosis model using a convolutional neural network and lateral cephalogram images with different qualities obtained from nationwide multi-hospitals

  • Yim, Sunjin;Kim, Sungchul;Kim, Inhwan;Park, Jae-Woo;Cho, Jin-Hyoung;Hong, Mihee;Kang, Kyung-Hwa;Kim, Minji;Kim, Su-Jung;Kim, Yoon-Ji;Kim, Young Ho;Lim, Sung-Hoon;Sung, Sang Jin;Kim, Namkug;Baek, Seung-Hak
    • The Korean Journal of Orthodontics / v.52 no.1 / pp.3-19 / 2022
  • Objective: The purpose of this study was to investigate the accuracy of one-step automated orthodontic diagnosis of skeletodental discrepancies using a convolutional neural network (CNN) and lateral cephalogram images of different qualities from nationwide multi-hospitals. Methods: Among 2,174 lateral cephalograms, 1,993 cephalograms from two hospitals were used as training and internal test sets, and 181 cephalograms from eight other hospitals were used as an external test set. They were divided into three classification groups according to anteroposterior skeletal discrepancies (Class I, II, and III), vertical skeletal discrepancies (normodivergent, hypodivergent, and hyperdivergent patterns), and vertical dental discrepancies (normal overbite, deep bite, and open bite) as the gold standard. A pre-trained DenseNet-169 was used as the CNN classifier. Diagnostic performance was evaluated by receiver operating characteristic (ROC) analysis, t-stochastic neighbor embedding (t-SNE), and gradient-weighted class activation mapping (Grad-CAM). Results: In the ROC analysis, the mean area under the curve and the mean accuracy of all classifications were high with both internal and external test sets (> 0.89 and > 0.80, respectively). In the t-SNE analysis, our model succeeded in creating good separation among the three classification groups. Grad-CAM figures showed differences in the location and size of the focus areas among the three classification groups in each diagnosis. Conclusions: Since the accuracy of our model was validated with both internal and external test sets, it shows the potential usefulness of a one-step automated orthodontic diagnosis tool using a CNN model. However, it still needs technical improvement in classifying vertical dental discrepancies.

Chest CT Image Patch-Based CNN Classification and Visualization for Predicting Recurrence of Non-Small Cell Lung Cancer Patients (비소세포폐암 환자의 재발 예측을 위한 흉부 CT 영상 패치 기반 CNN 분류 및 시각화)

  • Ma, Serie;Ahn, Gahee;Hong, Helen
    • Journal of the Korea Computer Graphics Society / v.28 no.1 / pp.1-9 / 2022
  • Non-small cell lung cancer (NSCLC) accounts for a high proportion, 85%, of all lung cancers and has a significantly higher mortality rate (22.7%) than other cancers, so predicting the postoperative prognosis of NSCLC patients is very important. In this study, preoperative chest CT image patches centered on the tumor are diversified into five types according to tumor-related information. The performance of a single classifier, a soft-voting ensemble classifier, and an ensemble classifier that combines three different patches through three input channels, all built on pre-trained ResNet and EfficientNet CNNs, is analyzed through misclassification cases and Grad-CAM visualization. In the experiments, the ResNet152 and EfficientNet-b7 single models trained on the peritumoral patch showed accuracies of 87.93% and 81.03%, respectively. The ResNet152 ensemble model that places the image, peritumoral, and shape-focused intratumoral patches in separate input channels showed stable performance with an accuracy of 87.93%, and the EfficientNet-b7 soft-voting ensemble using the image and peritumoral patches showed an accuracy of 84.48%.
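The soft-voting step used by the second ensemble above is simple to state precisely: average the class-probability vectors predicted by the member classifiers and pick the class with the highest mean. A minimal NumPy sketch, where the two-class recurrence probabilities are hypothetical values for illustration only, is:

```python
import numpy as np

def soft_vote(prob_list):
    """Soft voting: average the per-classifier class-probability vectors,
    then return the index of the highest mean probability."""
    mean_probs = np.mean(np.asarray(prob_list), axis=0)
    return int(np.argmax(mean_probs)), mean_probs

# Hypothetical [no-recurrence, recurrence] probabilities from two
# patch-specific classifiers (e.g., image patch vs. peritumoral patch).
model_a = [0.40, 0.60]
model_b = [0.70, 0.30]
label, probs = soft_vote([model_a, model_b])
print(label, probs)   # class 0 wins (mean probs ≈ [0.55, 0.45])
```

Unlike hard (majority) voting, soft voting lets a confident classifier outweigh an uncertain one, which is why it is a common default for small ensembles like the two-model case here.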