• Title/Abstract/Keyword: ResNet50

손목 관절 단순 방사선 영상에서 딥 러닝을 이용한 전후방 및 측면 영상 분류와 요골 영역 분할 (Classification of Anteroposterior/Lateral Images and Segmentation of the Radius Using Deep Learning in Wrist X-rays Images)

  • 이기표;김영재;이상림;김광기
    • 대한의용생체공학회:의공학회지 / Vol.41 No.2 / pp.94-100 / 2020
  • The purpose of this study was to present models for classifying wrist X-ray images by type and for automatically segmenting the radius in each image using deep learning, and to verify the trained models. The data comprised a total of 904 wrist X-rays with distal radius fracture, consisting of 472 anteroposterior (AP) and 432 lateral images. The ResNet50 model was used for AP/lateral image classification, and the U-Net model for segmentation of the radius. The classification model achieved 100.0% precision, recall, and F1 score, with an area under the curve (AUC) of 1.0. The segmentation model showed an accuracy of 99.46%, a sensitivity of 89.68%, a specificity of 99.72%, and a Dice similarity coefficient of 90.05% on AP images, and an accuracy of 99.37%, a sensitivity of 88.65%, a specificity of 99.69%, and a Dice similarity coefficient of 86.05% on lateral images. Both models, trained through deep learning, showed performance favorable enough to expect clinical application.
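The Dice similarity coefficient reported for the segmentation model above can be computed directly from two binary masks. The sketch below is a minimal pure-Python illustration with invented toy masks, not the authors' code:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: flat lists of 0/1 pixel labels of equal length.
    Dice = 2*|A intersect B| / (|A| + |B|).
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:          # both masks empty: define as perfect overlap
        return 1.0
    return 2.0 * intersection / total

# Toy 1-D masks standing in for flattened radius segmentations
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 0, 1, 1]
print(dice_coefficient(pred, truth))  # 2*3 overlapping pixels / (4+4) = 0.75
```

In practice the masks would be flattened 2-D segmentation outputs thresholded to 0/1.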

관개용수로 CCTV 이미지를 이용한 CNN 딥러닝 이미지 모델 적용 (Application of CCTV Image and Semantic Segmentation Model for Water Level Estimation of Irrigation Channel)

  • 김귀훈;김마가;윤푸른;방재홍;명우호;최진용;최규훈
    • 한국농공학회논문집 / Vol.64 No.3 / pp.63-73 / 2022
  • A more accurate understanding of the irrigation water supply is necessary for efficient agricultural water management. Although water levels in irrigation canals are measured with ultrasonic water level gauges, errors occur due to malfunctions or the surrounding environment. This study applies CNN (Convolutional Neural Network) deep-learning-based image classification and segmentation models to CCTV (Closed-Circuit Television) images of an irrigation canal. The CCTV images were acquired from the irrigation canal of an agricultural reservoir in Cheorwon-gun, Gangwon-do. We used the ResNet-50 model for image classification and the U-Net model for image segmentation. Using the Natural Breaks algorithm, water level data were divided into 2, 4, and 8 groups for the classification models, which showed accuracies of 1.000, 0.987, and 0.634, respectively. The image segmentation model achieved a Dice score of 0.998, and the predicted water levels showed an R2 of 0.97 and an MAE (Mean Absolute Error) of 0.02 m. The classification models can be applied to automatic gate control at four divisions of water level, and the segmentation model can serve as an alternative measurement to ultrasonic water gauges. We expect these results to provide a more scientific and efficient approach to agricultural water management.
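The Natural Breaks grouping of water-level data used above is commonly implemented as Jenks natural breaks, i.e. partitioning sorted 1-D data into k classes that minimize within-class squared deviation. The following is an exact dynamic-programming sketch of that idea with invented sample values, not the authors' code:

```python
def jenks_classes(values, k):
    """Partition 1-D data into k contiguous classes (after sorting)
    minimizing the within-class sum of squared deviations."""
    data = sorted(values)
    n = len(data)
    # Prefix sums give O(1) SSE for any contiguous slice data[i:j].
    ps, ps2 = [0.0], [0.0]
    for x in data:
        ps.append(ps[-1] + x)
        ps2.append(ps2[-1] + x * x)

    def sse(i, j):  # sum of squared deviations of data[i:j]
        m = j - i
        s = ps[j] - ps[i]
        return (ps2[j] - ps2[i]) - s * s / m

    INF = float("inf")
    # dp[c][j] = best cost of splitting data[:j] into c classes
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                cost = dp[c - 1][i] + sse(i, j)
                if cost < dp[c][j]:
                    dp[c][j], cut[c][j] = cost, i
    # Recover the class slices from the stored cut points.
    bounds, j = [], n
    for c in range(k, 0, -1):
        i = cut[c][j]
        bounds.append(data[i:j])
        j = i
    return bounds[::-1]

# Hypothetical normalized water-level readings split into 2 groups
print(jenks_classes([0.1, 0.15, 0.12, 0.9, 0.95, 1.0], 2))
```

The slices returned here would correspond to the water-level classes fed to the image classification model.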

CNN 기반 전이학습을 이용한 뼈 전이가 존재하는 뼈 스캔 영상 분류 (Classification of Whole Body Bone Scan Image with Bone Metastasis using CNN-based Transfer Learning)

  • 임지영;도탄콩;김수형;이귀상;이민희;민정준;범희승;김현식;강세령;양형정
    • 한국멀티미디어학회논문지 / Vol.25 No.8 / pp.1224-1232 / 2022
  • Whole-body bone scan is the most frequently performed nuclear medicine imaging study for evaluating bone metastasis in cancer patients. We evaluated the performance of a VGG16-based transfer learning classifier for bone scan images in which metastatic bone lesions were present. A total of 1,000 bone scans from 1,000 cancer patients (500 with bone metastasis, 500 without) were evaluated. Bone scans were labeled abnormal/normal for bone metastasis using medical reports and image review. Subsequently, gradient-weighted class activation maps (Grad-CAMs) were generated for explainable AI. The proposed model achieved an AUROC of 0.96 and an F1-score of 0.90, outperforming VGG16, ResNet50, Xception, DenseNet121, and InceptionV3. Grad-CAM visualization showed that the proposed model focuses on hot uptakes, which indicate active bone lesions, when classifying whole-body bone scan images with bone metastases.
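The AUROC and F1-score metrics reported above can be computed from labels and model scores with stdlib-only code. This is a hedged sketch using the rank (Mann-Whitney) formulation of AUROC; the label and score values are invented:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1_score(labels, preds):
    """Harmonic mean of precision and recall for binary predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

# Invented abnormal(1)/normal(0) labels and classifier scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auroc(labels, scores))                  # 8 of 9 pairs ranked correctly
print(f1_score(labels, [1, 1, 0, 1, 0, 0]))
```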

MLCNN-COV: A multilabel convolutional neural network-based framework to identify negative COVID medicine responses from the chemical three-dimensional conformer

  • Pranab Das;Dilwar Hussain Mazumder
    • ETRI Journal / Vol.46 No.2 / pp.290-306 / 2024
  • Comparatively few medicines have been approved to treat the novel COronaVIrus Disease (COVID). Because of the global pandemic status of COVID, several medicines are being developed to treat patients, and the modern COVID medicine development process faces various challenges, including predicting and detecting hazardous COVID medicine responses. Correctly predicting harmful COVID medicine reactions is essential for health safety, and significant developments in computational models for medicine development make it possible to identify adverse COVID medicine reactions. Since the beginning of the pandemic, there has been significant demand for developing COVID medicines. This paper therefore presents a transfer-learning methodology and a multilabel convolutional neural network for COVID (MLCNN-COV) medicine development model to identify negative responses of COVID medicines. A framework with five multilabel transfer-learning models, namely MobileNetv2, ResNet50, VGG19, DenseNet201, and Inceptionv3, is proposed for analysis, and the MLCNN-COV model is designed with an image augmentation (IA) technique and validated through experiments on images of the three-dimensional chemical conformers of 17 COVID medicines. The RGB color channels are used to represent image features, which are extracted with Convolution2D and MaxPooling2D layers. The findings of the MLCNN-COV are promising: it can identify individual adverse reactions of medicines with accuracy ranging from 88.24% to 100%, outperforming the transfer-learning models. This shows that three-dimensional conformers can adequately identify negative COVID medicine responses.
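Because the task above is multilabel (each medicine can carry several adverse-reaction labels), accuracy is typically reported per label rather than per sample. A minimal pure-Python sketch, with invented label matrices, not the paper's data:

```python
def per_label_accuracy(y_true, y_pred):
    """Accuracy computed separately for each label column of a
    multilabel problem (rows = samples, columns = labels)."""
    n_samples, n_labels = len(y_true), len(y_true[0])
    accs = []
    for j in range(n_labels):
        correct = sum(1 for i in range(n_samples) if y_true[i][j] == y_pred[i][j])
        accs.append(correct / n_samples)
    return accs

# 4 hypothetical medicines x 3 hypothetical adverse-reaction labels
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
y_pred = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
print(per_label_accuracy(y_true, y_pred))
```

A range of per-label accuracies (as the paper's 88.24% to 100%) is exactly what this kind of column-wise scoring produces.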

Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks

  • Thanathornwong, Bhornsawan;Suebnukarn, Siriwan
    • Imaging Science in Dentistry / Vol.50 No.2 / pp.169-174 / 2020
  • Purpose: Periodontal disease causes tooth loss and is associated with cardiovascular diseases, diabetes, and rheumatoid arthritis. The present study proposes a deep learning-based object detection method to identify periodontally compromised teeth on digital panoramic radiographs. A faster regional convolutional neural network (faster R-CNN), which is a state-of-the-art deep detection network, was adapted from the natural image domain using a small annotated clinical dataset. Materials and Methods: In total, 100 digital panoramic radiographs of periodontally compromised patients were retrospectively collected from our hospital's information system and augmented. The periodontally compromised teeth in each image were annotated by experts in periodontology to obtain the ground truth. The Keras library, which is written in Python, was used to train and test the model on a single NVidia 1080Ti GPU. The faster R-CNN model used a pretrained ResNet architecture. Results: The average precision of 0.81 demonstrated a significant region of overlap between the predicted regions and the ground truth. The average recall of 0.80 showed that the detected regions of periodontally compromised teeth largely excluded healthy tooth areas. In addition, the model achieved a sensitivity of 0.84, a specificity of 0.88, and an F-measure of 0.81. Conclusion: The faster R-CNN trained on a limited amount of labeled imaging data performed satisfactorily in detecting periodontally compromised teeth. Applying a faster R-CNN to assist in the detection of periodontally compromised teeth may reduce diagnostic effort by saving assessment time and allowing automated screening documentation.
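The detection precision and recall reported above follow from matching predicted boxes to ground-truth boxes at an IoU threshold. A stdlib-only sketch of that evaluation step (the boxes below are invented, and the matching is a simple greedy scheme, not necessarily the authors' exact protocol):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(gt_boxes, pred_boxes, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth at IoU >= thr."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best, best_iou = None, thr
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    return tp / len(pred_boxes), tp / len(gt_boxes)

# Two ground-truth teeth regions; one prediction overlaps well, one misses
gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
pred = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(precision_recall(gt, pred))
```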

심층 신경망 기반의 생활폐기물 자동 분류 (Object classification for domestic waste based on Convolutional neural networks)

  • 남준영;이혜민;;;;문현준
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2019 Fall Conference / pp.83-86 / 2019
  • In the course of urbanization, the problem of domestic waste in cities is growing rapidly; ineffective waste management worsens urban pollution and can cause severe environmental and economic problems. Moreover, the increase in bulky domestic waste that is difficult to manage also hinders urban development. In waste processing, a fee is charged for bulky waste items, and manually classifying the various types of bulky domestic waste is time-consuming and costly. Introducing a system that classifies bulky domestic waste automatically is therefore important. This paper proposes a system for bulky domestic waste classification; its contributions are fourfold. 1) Among Convolutional Neural Network (CNN) models suitable for high-accuracy, robust classification, the accuracy and speed of VGG-19, Inception-V3, and ResNet50 are compared; on the proposed 20-class bulky waste dataset, the highest classification accuracy is 86.19%. 2) Two methods, Class Weight VGG-19 (CW-VGG-19) and Extreme Gradient Boosting VGG-19, are used to handle the imbalanced-data problem. 3) A dataset covering 20 classes was manually collected and verified, with more than 500 color images per class. 4) A deep-learning-based mobile application was developed.
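The class-weighting approach in 2) is commonly implemented as inverse-frequency ("balanced") weighting fed into a weighted loss. A sketch under that assumption; the class names and counts below are invented:

```python
def class_weights(counts):
    """Inverse-frequency class weights: n_samples / (n_classes * count_c),
    the 'balanced' heuristic often passed to a weighted training loss."""
    n_samples = sum(counts.values())
    n_classes = len(counts)
    return {c: n_samples / (n_classes * k) for c, k in counts.items()}

# Hypothetical imbalanced bulky-waste class counts
counts = {"sofa": 800, "desk": 500, "mattress": 100}
weights = class_weights(counts)
print({c: round(w, 3) for c, w in weights.items()})
```

Rare classes (here "mattress") receive proportionally larger weights, so misclassifying them costs more during training.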

Remote Sensing Image Classification for Land Cover Mapping in Developing Countries: A Novel Deep Learning Approach

  • Lynda, Nzurumike Obianuju;Nnanna, Nwojo Agwu;Boukar, Moussa Mahamat
    • International Journal of Computer Science & Network Security / Vol.22 No.2 / pp.214-222 / 2022
  • Convolutional Neural Networks (CNNs) are a category of deep learning networks that have proven very effective in computer vision tasks such as image classification. However, they have seen little use for remote sensing image classification in developing countries, mainly due to the scarcity of training data. Recently, transfer learning has been used successfully to develop state-of-the-art models for remote sensing (RS) image classification using training and testing data from well-known RS data repositories. However, the ability of such models to classify RS test data from a different dataset has not been sufficiently investigated. In this paper, we propose a deep CNN model that can classify RS test data from a dataset different from the training dataset. To achieve this, we first retrained a ResNet-50 model on EuroSAT, a large-scale RS dataset, to develop a base model, then integrated augmentation and ensemble learning to improve its generalization ability. We further tested the ability of this model to classify a novel dataset (Nig_Images). The final classification results show that our model achieves 96% and 80% accuracy on the EuroSAT and Nig_Images test data, respectively. Adequate knowledge and use of this framework is expected to encourage research on, and the use of, deep CNNs for land cover mapping where training data are lacking, as in developing countries.
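The ensemble-learning step above is often realized as soft voting: per-class probabilities from several trained members are averaged before taking the argmax. A minimal sketch with invented member outputs (the paper does not specify its exact combination rule):

```python
def ensemble_predict(prob_lists):
    """Soft-voting ensemble: average per-class probabilities from several
    models, then return the argmax class and the averaged distribution."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Three hypothetical ensemble members scoring one land-cover tile
member_probs = [
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.5, 0.4, 0.1],
]
label, avg = ensemble_predict(member_probs)
print(label, [round(a, 3) for a in avg])
```

Averaging smooths out individual members' mistakes, which is one way such an ensemble improves generalization to an unseen dataset.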

게이트심장혈액풀검사에서 딥러닝 기반 좌심실 영역 분할방법의 유용성 평가 (Evaluating Usefulness of Deep Learning Based Left Ventricle Segmentation in Cardiac Gated Blood Pool Scan)

  • 오주영;정의환;이주영;박훈희
    • 대한방사선기술학회지:방사선기술과학 / Vol.45 No.2 / pp.151-158 / 2022
  • The cardiac gated blood pool (GBP) scan, a nuclear medicine imaging study, calculates the left ventricular ejection fraction (EF) by segmenting the left ventricle from the heart. However, accurately segmenting the substructures of the heart requires specialized knowledge of cardiac anatomy, and the left ventricular EF may be calculated differently depending on the expert's processing. In this study, GBP images were trained with the DeepLabV3 architecture and a ResNet-50 backbone on 93 training images. The trained model was then applied to a separate test set of 23 GBP studies to evaluate the reproducibility of the region of interest (ROI) and the left ventricular EF. Pixel accuracy, Dice coefficient, and IoU for the ROI were 99.32±0.20%, 94.65±1.45%, and 89.89±2.62% at the diastolic phase, and 99.26±0.34%, 90.16±4.19%, and 82.33±6.69% at the systolic phase, respectively. Left ventricular EF averaged 60.37±7.32% with human-set ROIs and 58.68±7.22% with ROIs set by the deep learning segmentation model (p<0.05). The automated segmentation method presented in this study predicts ROIs and the left ventricular EF similarly to human experts for arbitrary GBP input images. If this automatic segmentation method is developed further and applied to functional examinations that require ROI placement in nuclear medicine cardiac scintigraphy, it is expected to greatly improve the efficiency and accuracy of processing and analysis by nuclear medicine specialists.
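Once the left-ventricle ROI is segmented, the EF in a GBP study is conventionally derived from background-corrected end-diastolic (ED) and end-systolic (ES) count totals. The sketch below uses that standard count-based formulation with invented numbers; the paper's exact post-processing is not specified:

```python
def ejection_fraction(ed_counts, es_counts, background=0.0):
    """Left ventricular EF (%) from GBP count totals, assuming the same
    background activity underlies both phases:
    EF = (ED - ES) / (ED - background) * 100."""
    return (ed_counts - es_counts) / (ed_counts - background) * 100.0

# Hypothetical summed counts inside the ED/ES left-ventricle ROIs
print(round(ejection_fraction(12000, 5000, background=1000), 1))
```

Because the ROI boundary determines which counts are summed, small segmentation differences shift the EF, which is exactly the reproducibility concern the study addresses.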

자궁경부 영상에서의 라디오믹스 기반 판독 불가 영상 분류 알고리즘 연구 (A Radiomics-based Unread Cervical Imaging Classification Algorithm)

  • 김고은;김영재;주웅;남계현;김수녕;김광기
    • 대한의용생체공학회:의공학회지 / Vol.42 No.5 / pp.241-249 / 2021
  • Recently, artificial intelligence systems for diagnosing obstetric diseases have been actively studied. Artificial intelligence diagnostic assistant systems, which bring efficiency and accuracy benefits to medical diagnosis, can suffer from poor learning accuracy and reliability when inappropriate images are used as the model's input data. For this reason, we propose an algorithm that excludes unreadable cervical images before training. 2,000 readable and 257 unreadable cervical images were used in this study. Based on the statistical method of radiomics, feature values were extracted from all images to classify the unreadable ones and to obtain a range of readable threshold values. The adequacy of brightness, blur, and coverage of the cervical region in each image served as classification indicators. We compared classification performance by training a deep learning classification model on the readable cervical images selected by the proposed algorithm and on the unreadable ones, and evaluated the algorithm's accuracy in identifying unreadable cervical images by comparing that performance. Images selected by the algorithm showed a higher average accuracy of 91.6%. The proposed algorithm is expected to improve reliability by effectively excluding unreadable cervical images and ultimately reducing errors in artificial intelligence diagnosis.
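One common way to turn the "blur" indicator above into a number is the variance of a Laplacian filter response: sharp images produce strong edge responses with high variance, blurry ones do not. A pure-Python sketch with tiny synthetic images (a stand-in illustration, not the paper's radiomics pipeline):

```python
def laplacian_variance(img):
    """Blur indicator: variance of the 3x3 Laplacian response over the
    image interior. Low variance suggests a blurry (unreadable) image.
    img: 2-D list of grayscale values."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

flat  = [[100] * 5 for _ in range(5)]                       # uniform: no edges
sharp = [[0 if x < 2 else 255 for x in range(5)] for _ in range(5)]  # hard edge
print(laplacian_variance(flat), laplacian_variance(sharp) > 0)
```

A readable/unreadable threshold on such feature values is the kind of cut-off the proposed algorithm derives from its statistics.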

Determination of the stage and grade of periodontitis according to the current classification of periodontal and peri-implant diseases and conditions (2018) using machine learning algorithms

  • Kubra Ertas;Ihsan Pence;Melike Siseci Cesmeli;Zuhal Yetkin Ay
    • Journal of Periodontal and Implant Science / Vol.53 No.1 / pp.38-53 / 2023
  • Purpose: The current Classification of Periodontal and Peri-Implant Diseases and Conditions, published and disseminated in 2018, involves some difficulties and causes diagnostic conflicts due to its criteria, especially for inexperienced clinicians. The aim of this study was to design a decision system based on machine learning algorithms, using clinical measurements and radiographic images, to determine and facilitate the staging and grading of periodontitis. Methods: In the first part of this study, machine learning models were created in the Python programming language using clinical data from 144 individuals who presented to the Department of Periodontology, Faculty of Dentistry, Süleyman Demirel University. In the second part, panoramic radiographic images were processed and classified with deep learning algorithms. Results: Using clinical data, staging accuracy reached 97.2% with the tree algorithm and 98.6% with the random forest and k-nearest neighbor algorithms. The best staging accuracy on panoramic radiographic images was obtained by a hybrid network model combining the proposed ResNet50 architecture with the support vector machine algorithm; after image preprocessing, it achieved a staging classification accuracy of 88.2%. In general, however, the radiographic images yielded low accuracy for modeling the grading of periodontitis. Conclusions: The machine learning-based decision system presented herein can facilitate periodontal diagnosis despite its current limitations. Further studies are planned to optimize the algorithm and improve the results.
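The k-nearest neighbor algorithm that reached 98.6% staging accuracy above is simple enough to sketch in full: classify a new case by majority vote among the k closest training cases in feature space. The feature vectors and stage labels below are invented stand-ins, not the study's clinical data:

```python
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """k-nearest-neighbor majority vote with Euclidean distance."""
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(range(len(train_x)), key=lambda i: d2(train_x[i], query))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Invented 2-D clinical feature vectors (e.g. scaled attachment-loss measures)
train_x = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
           (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
train_y = ["stage I", "stage I", "stage I",
           "stage III", "stage III", "stage III"]
print(knn_predict(train_x, train_y, (0.12, 0.18)))
```

With well-separated clinical measurements like these, the vote is unambiguous, which is consistent with the high staging accuracy the study reports for clinical data.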