• Title/Summary/Keyword: ResNet-50

Search results: 126

Evaluating Usefulness of Deep Learning Based Left Ventricle Segmentation in Cardiac Gated Blood Pool Scan (게이트심장혈액풀검사에서 딥러닝 기반 좌심실 영역 분할방법의 유용성 평가)

  • Oh, Joo-Young;Jeong, Eui-Hwan;Lee, Joo-Young;Park, Hoon-Hee
    • Journal of Radiological Science and Technology / v.45 no.2 / pp.151-158 / 2022
  • The cardiac gated blood pool (GBP) scintigram, a nuclear medicine imaging study, calculates the left ventricular ejection fraction (EF) by segmenting the left ventricle of the heart. However, accurate segmentation of the cardiac substructures requires specialized knowledge of cardiac anatomy, and the calculated left ventricular EF can vary depending on the expert who processes the study. In this study, a DeepLabV3 architecture with a ResNet-50 backbone was trained on 93 GBP training images. The trained model was then applied to a separate test set of 23 GBP studies to evaluate the reproducibility of the region of interest (ROI) and the left ventricular EF. Pixel accuracy, Dice coefficient, and IoU for the ROI were 99.32±0.20, 94.65±1.45, and 89.89±2.62(%) at the diastolic phase, and 99.26±0.34, 90.16±4.19, and 82.33±6.69(%) at the systolic phase, respectively. The left ventricular EF averaged 60.37±7.32% for the ROIs set by humans and 58.68±7.22% for the ROIs set by the deep learning segmentation model (p<0.05). The automated deep learning segmentation method presented in this study predicts ROIs and left ventricular EF values close to the human-set averages when an arbitrary GBP image is given as input. If this automatic segmentation method is further developed and applied to functional cardiac scintigraphy examinations in nuclear medicine that require ROI definition, it is expected to greatly improve the efficiency and accuracy of processing and analysis by nuclear medicine specialists.
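As a rough illustration of the segmentation setup described in this abstract, the sketch below builds a DeepLabV3 model with a ResNet-50 backbone for a two-class (background/left-ventricle) mask and derives an EF from the counts inside the predicted end-diastolic and end-systolic ROIs. It is a minimal sketch under assumed tensor shapes and class indices, not the authors' implementation; background correction and other clinical details are omitted.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Two-class DeepLabV3 with a ResNet-50 backbone (background vs. left ventricle).
model = deeplabv3_resnet50(weights=None, num_classes=2)
model.eval()

def lv_counts(frame: torch.Tensor) -> float:
    """Sum of GBP counts inside the predicted left-ventricle mask.
    `frame` is an assumed (1, 3, H, W) tensor of a preprocessed GBP frame."""
    with torch.no_grad():
        logits = model(frame)["out"]           # (1, 2, H, W)
        mask = logits.argmax(dim=1) == 1       # class 1 assumed to be the LV
    return float((frame[:, 0] * mask).sum())   # counts weighted by the mask

# Simplified EF from end-diastolic (ED) and end-systolic (ES) frames,
# EF = (ED - ES) / ED * 100; real GBP processing also subtracts background.
def ejection_fraction(ed_frame: torch.Tensor, es_frame: torch.Tensor) -> float:
    ed, es = lv_counts(ed_frame), lv_counts(es_frame)
    return (ed - es) / ed * 100.0
```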

A Radiomics-based Unread Cervical Imaging Classification Algorithm (자궁경부 영상에서의 라디오믹스 기반 판독 불가 영상 분류 알고리즘 연구)

  • Kim, Go Eun;Kim, Young Jae;Ju, Woong;Nam, Kyehyun;Kim, Soonyung;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research / v.42 no.5 / pp.241-249 / 2021
  • Recently, artificial intelligence systems for diagnosing obstetric diseases have been actively studied. Artificial intelligence diagnostic assistance systems, which add efficiency and accuracy to medical diagnosis, can suffer from poor learning accuracy and reliability when inappropriate images are used as the model's input data. For this reason, we propose an algorithm to exclude unread (unreadable) cervical images before training. A total of 2,000 read cervical images and 257 unread cervical images were used for this study. Experiments based on the statistical method Radiomics were conducted to extract feature values from all images, classify the unread images, and obtain a range of threshold values for readable images. The adequacy of brightness, blur, and the captured cervical region in each image was used as the classification indicators. We then trained a deep learning classification model on read and unread cervical images screened by the proposed algorithm, compared the classification performance, and evaluated the algorithm's classification accuracy for unread cervical images. The images screened by the algorithm yielded a higher accuracy of 91.6% on average. The proposed algorithm is expected to improve reliability by effectively excluding unread cervical images and ultimately reducing errors in artificial intelligence diagnosis.
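A minimal sketch of the kind of image-quality screening this abstract describes is shown below, using mean intensity for brightness and the variance of the Laplacian as a blur indicator; the threshold values and the cervix-region check are placeholders, not the radiomics features or thresholds derived in the paper.

```python
import cv2

# Placeholder thresholds; the paper derives its ranges from radiomics statistics.
BRIGHTNESS_RANGE = (60.0, 200.0)    # acceptable mean gray level
BLUR_MIN = 100.0                    # minimum variance of the Laplacian

def is_readable(path: str) -> bool:
    """Rough readability screen for a cervical image (illustrative only)."""
    img = cv2.imread(path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    brightness = float(gray.mean())                            # exposure indicator
    sharpness = float(cv2.Laplacian(gray, cv2.CV_64F).var())   # blur indicator

    bright_ok = BRIGHTNESS_RANGE[0] <= brightness <= BRIGHTNESS_RANGE[1]
    sharp_ok = sharpness >= BLUR_MIN
    # A real pipeline would also verify that the cervical region is framed adequately.
    return bright_ok and sharp_ok
```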

Determination of the stage and grade of periodontitis according to the current classification of periodontal and peri-implant diseases and conditions (2018) using machine learning algorithms

  • Kubra Ertas;Ihsan Pence;Melike Siseci Cesmeli;Zuhal Yetkin Ay
    • Journal of Periodontal and Implant Science / v.53 no.1 / pp.38-53 / 2023
  • Purpose: The current Classification of Periodontal and Peri-Implant Diseases and Conditions, published and disseminated in 2018, involves some difficulties and causes diagnostic conflicts due to its criteria, especially for inexperienced clinicians. The aim of this study was to design a decision system based on machine learning algorithms by using clinical measurements and radiographic images in order to determine and facilitate the staging and grading of periodontitis. Methods: In the first part of this study, machine learning models were created using the Python programming language based on clinical data from 144 individuals who presented to the Department of Periodontology, Faculty of Dentistry, Süleyman Demirel University. In the second part, panoramic radiographic images were processed and classification was carried out with deep learning algorithms. Results: Using clinical data, the accuracy of staging with the tree algorithm reached 97.2%, while the random forest and k-nearest neighbor algorithms reached 98.6% accuracy. The best staging accuracy for processing panoramic radiographic images was provided by a hybrid network model algorithm combining the proposed ResNet50 architecture and the support vector machine algorithm. For this, the images were preprocessed, and high success was obtained, with a classification accuracy of 88.2% for staging. However, in general, it was observed that the radiographic images provided a low level of success, in terms of accuracy, for modeling the grading of periodontitis. Conclusions: The machine learning-based decision system presented herein can facilitate periodontal diagnoses despite its current limitations. Further studies are planned to optimize the algorithm and improve the results.
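The hybrid model described for the radiographic images (ResNet50 features fed to a support vector machine) could be sketched as below; the preprocessing, feature-extraction layer, and hyperparameters are assumptions for illustration rather than the study's configuration.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

# ResNet50 as a fixed feature extractor: replace the classification head with identity.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_features(batch: torch.Tensor) -> np.ndarray:
    """batch: (N, 3, 224, 224) preprocessed panoramic-radiograph crops (assumed)."""
    with torch.no_grad():
        return backbone(batch).numpy()        # (N, 2048) feature vectors

# X_train / X_test are assumed image tensors, y_train the stage labels (I-IV).
# feats_train = extract_features(X_train)
# svm = SVC(kernel="rbf", C=1.0)              # SVM classifier on the deep features
# svm.fit(feats_train, y_train)
# stage_pred = svm.predict(extract_features(X_test))
```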

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.337-345 / 2023
  • Estimating the rice heading date is one of the most crucial agricultural tasks related to productivity. However, due to abnormal climates around the world, it is becoming increasingly challenging to estimate the heading date, so a more objective classification method is needed than the existing approaches. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images were preprocessed to serve as input data for the CNN models. The CNN architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used for image classification. All models achieved an accuracy of 0.98 or higher regardless of architecture and image type. We also used Grad-CAM to visually check which image features the models attended to when classifying, and then verified that the models accurately estimated the rice heading date in paddy fields. The estimated heading dates differed by approximately one day on average across the four paddy fields. These results suggest that the heading stage can be estimated automatically and quantitatively from various paddy field monitoring images.
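To illustrate the Grad-CAM step mentioned above, the sketch below computes a class-activation heat map from the last convolutional block of a ResNet50 classifier; the two-class head, the target class index, and the random input are assumptions for illustration, not the study's trained model or data.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumed two-class ResNet50 (e.g., pre-heading vs. heading); weights are untrained here.
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

feats, grads = {}, {}
target_layer = model.layer4[-1]                      # last convolutional block
target_layer.register_forward_hook(lambda m, i, o: feats.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)                      # placeholder for a preprocessed field image
score = model(x)[0, 1]                               # logit of the assumed "heading" class
score.backward()

# Grad-CAM: weight each feature map by its average gradient, then ReLU and upsample.
weights = grads["v"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heat map in [0, 1]
```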

Analysis of Infrared Characteristics According to Cavity Depth Using RP Images Converted into Numerical Data (수치 데이터로 변환된 RP 이미지를 활용하여 공동 깊이에 따른 적외선 특성 분석)

  • Jang, Byeong-Su;Kim, YoungSeok;Kim, Sewon;Choi, Hyun-Jun;Yoon, Hyung-Koo
    • Journal of the Korean Geotechnical Society / v.40 no.3 / pp.77-84 / 2024
  • Aging and damaged underground utilities cause cavities and ground subsidence under roads, which can lead to economic losses and endanger user safety. This study used infrared cameras to assess the thermal characteristics of such cavities and evaluated the reliability of the measurements using a CNN algorithm. PVC pipes were embedded at various depths in a test site measuring 400 cm × 50 cm × 40 cm. Concrete blocks were used to simulate road surfaces, and measurements were taken from 4 PM to noon the following day. The initial temperatures measured by the infrared camera were 43.7℃, 43.8℃, and 41.9℃, reflecting atmospheric temperature changes during the measurement period. The RP algorithm generated images at four resolutions, i.e., 10,000 × 10,000, 2,000 × 2,000, 1,000 × 1,000, and 100 × 100 pixels. The accuracy of the CNN model using these RP images as input was 99%, 97%, 98%, and 96%, respectively. These results represent a considerable improvement over the 73% accuracy obtained using time-series images, an improvement of more than 20 percentage points with the RP algorithm-based inputs.
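The RP (recurrence plot) conversion of the numerical temperature series into image input, as described above, can be sketched roughly as follows; the synthetic series, resampling, and threshold handling are illustrative assumptions rather than the study's exact procedure.

```python
import numpy as np

def recurrence_plot(series, size=100, eps=None):
    """Convert a 1-D numerical series into a recurrence-plot style matrix.

    series : 1-D array of measurements (e.g., infrared temperatures over time)
    size   : output resolution; the series is resampled to `size` points
    eps    : if given, threshold the distances into a binary recurrence matrix
    """
    x = np.asarray(series, dtype=float)
    # Resample to the requested resolution (e.g., 100 points -> 100 x 100 image).
    idx = np.linspace(0, len(x) - 1, size)
    x = np.interp(idx, np.arange(len(x)), x)

    d = np.abs(x[:, None] - x[None, :])            # pairwise distance matrix
    return (d <= eps).astype(float) if eps is not None else d

# Example: a synthetic temperature series turned into a 100 x 100 RP image.
temps = 40 + 3 * np.sin(np.linspace(0, 6 * np.pi, 1200))
rp_image = recurrence_plot(temps, size=100)
```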

Evaluation of Transfer Learning in Gastroscopy Image Classification using Convolutional Neural Network (합성곱 신경망을 활용한 위내시경 이미지 분류에서 전이학습의 효용성 평가)

  • Park, Sung Jin;Kim, Young Jae;Park, Dong Kyun;Chung, Jun Won;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research / v.39 no.5 / pp.213-219 / 2018
  • Stomach cancer is the most commonly diagnosed cancer in Korea. When gastric cancer is detected early, the 5-year survival rate is as high as 90%. Gastroscopy is a very useful method for early diagnosis, but the false negative rate for gastric cancer in gastroscopy has been reported as 4.6~25.8% owing to the subjective judgment of the physician. Recently, convolutional neural networks have greatly advanced image classification performance in the image recognition field. Convolutional neural networks perform well when diverse and sufficient amounts of data are available. However, medical data are not easy to access, and it is difficult to gather enough high-quality data that include expert annotations. This paper therefore evaluates the efficacy of transfer learning in gastroscopy image classification and diagnosis. We obtained 787 gastroscopy images from Gil Medical Center, Gachon University: 200 normal and 587 abnormal images. The images were resized and normalized. For the ResNet50 architecture, the classification accuracy improved from 0.9 to 0.947 after applying transfer learning, and the AUC improved from 0.94 to 0.98. For the InceptionV3 architecture, the classification accuracy improved from 0.862 to 0.924, and the AUC improved from 0.89 to 0.97. For the VGG16 architecture, the classification accuracy improved from 0.87 to 0.938, and the AUC improved from 0.89 to 0.98. The differences in CNN model performance before and after transfer learning were statistically significant by t-test (p < 0.05). As a result, transfer learning is judged to be an effective method for medical applications in which good-quality data are difficult to collect.
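As a rough sketch of the transfer learning setup compared in this abstract, the code below loads an ImageNet-pretrained ResNet50 and replaces its head for the two-class (normal/abnormal) gastroscopy task; the training step, data pipeline, and hyperparameters are assumptions for illustration, not the study's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(transfer: bool = True) -> nn.Module:
    """ResNet50 classifier for normal vs. abnormal gastroscopy images.

    transfer=True  -> start from ImageNet-pretrained weights (transfer learning)
    transfer=False -> train from randomly initialized weights
    """
    weights = models.ResNet50_Weights.IMAGENET1K_V1 if transfer else None
    model = models.resnet50(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, 2)   # two-class head
    return model

model = build_model(transfer=True)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of preprocessed 224 x 224 images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```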