• Title/Summary/Keyword: Medical Deep-learning

Deep-Learning-Based Molecular Imaging Biomarkers: Toward Data-Driven Theranostics

  • Choi, Hongyoon
    • Progress in Medical Physics / v.30 no.2 / pp.39-48 / 2019
  • Deep learning has been applied to various medical data. In particular, current deep learning models exhibit remarkable performance at specific tasks, sometimes offering higher accuracy than that of experts for discriminating specific diseases from medical images. The current status of deep learning applications to molecular imaging can be divided into a few subtypes in terms of their purposes: differential diagnostic classification, enhancement of image acquisition, and image-based quantification. As functional and pathophysiologic information is key to molecular imaging, this review will emphasize the need for accurate biomarker acquisition by deep learning in molecular imaging. Furthermore, this review addresses practical issues that include clinical validation, data distribution, labeling issues, and harmonization to achieve clinically feasible deep learning models. Eventually, deep learning will enhance the role of theranostics, which aims at precision targeting of pathophysiology by maximizing molecular imaging functional information.

Analysis of Trends of Medical Image Processing based on Deep Learning

  • Seokjin Im
    • International Journal of Advanced Culture Technology / v.11 no.1 / pp.283-289 / 2023
  • AI is bringing about drastic changes not only in technology but also in society and culture. Medical AI based on deep learning has developed rapidly. In particular, it has been shown in the field of medical image analysis that AI can identify the characteristics of medical images more accurately and quickly than clinicians. Evaluating the latest results of AI-based medical image processing is important for understanding the direction in which medical AI is developing. In this paper, we analyze and evaluate the latest trends in AI-based medical image analysis, which is producing great achievements in the medical AI field of the healthcare industry. We analyze deep learning models for medical image analysis and AI-based medical image segmentation for quantitative analysis. We also evaluate the future development direction in terms of marketability, as well as the size and characteristics of the medical AI market and the restrictions on market growth. To assess the latest trends in deep learning-based medical image processing, we analyze recent research results on deep learning-based medical image processing and data on the medical AI market. The analyzed trends provide an overall view of, and implications for, the development of deep learning in the medical field.

An Open Medical Platform to Share Source Code and Various Pre-Trained Weights for Models to Use in Deep Learning Research

  • Sungchul Kim;Sungman Cho;Kyungjin Cho;Jiyeon Seo;Yujin Nam;Jooyoung Park;Kyuri Kim;Daeun Kim;Jeongeun Hwang;Jihye Yun;Miso Jang;Hyunna Lee;Namkug Kim
    • Korean Journal of Radiology / v.22 no.12 / pp.2073-2081 / 2021
  • Deep learning-based applications have great potential to enhance the quality of medical services. The power of deep learning depends on open databases and innovation. Radiologists can act as important mediators between deep learning and medicine by simultaneously playing pioneering and gatekeeping roles. The application of deep learning technology in medicine is sometimes restricted by ethical or legal issues, including patient privacy and confidentiality, data ownership, and limitations in patient agreement. In this paper, we present an open platform, MI2RLNet, for sharing source code and various pre-trained weights for models to use in downstream tasks, including education, application, and transfer learning, to encourage deep learning research in radiology. In addition, we describe how to use this open platform in the GitHub environment. Our source code and models may contribute to further deep learning research in radiology, which may facilitate applications in medicine and healthcare, especially in medical imaging, in the near future. All code is available at https://github.com/mi2rl/MI2RLNet.
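
The platform above distributes source code and pre-trained weights intended for downstream use such as transfer learning. As a rough illustration of that workflow (not the actual MI2RLNet interface; the checkpoint path, backbone, and class count are hypothetical), a minimal PyTorch sketch might load shared weights, freeze the backbone, and retrain only a new task head:

```python
# Hypothetical transfer-learning sketch; checkpoint name, backbone, and
# class count are illustrative, not part of the MI2RLNet API.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)            # backbone to receive shared weights

state_dict = torch.load("pretrained_chest_xray.pth", map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # tolerate a mismatched head

for param in model.parameters():                 # freeze the pre-trained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 3)    # new head for a 3-class downstream task
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```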

Medical Image Analysis Using Artificial Intelligence

  • Yoon, Hyun Jin;Jeong, Young Jin;Kang, Hyun;Jeong, Ji Eun;Kang, Do-Young
    • Progress in Medical Physics / v.30 no.2 / pp.49-58 / 2019
  • Purpose: Automated analytical systems have begun to emerge as database systems that enable medical images to be scanned by computers and assembled into big data. Deep-learning artificial intelligence (AI) architectures have been developed and applied to medical images, making high-precision diagnosis possible. Materials and Methods: For diagnosis, medical images need to be labeled and standardized. After the data are pre-processed and entered into the deep-learning architecture, the final diagnostic results can be obtained quickly and accurately. To address overfitting caused by an insufficient amount of labeled data, data augmentation is performed through rotations and left-right flips to artificially increase the amount of data. Because various deep-learning architectures have been developed and made public over the past few years, diagnostic results can be obtained by entering a medical image. Results: Classification and regression are performed by supervised machine-learning methods, whereas clustering and generation are performed by unsupervised machine-learning methods. When the convolutional neural network (CNN) method is applied to the deep-learning layers, feature extraction can be used to classify diseases very efficiently and thus to diagnose various diseases. Conclusions: AI using a deep-learning architecture has expertise in the medical image analysis of the nerves, retina, lungs, digital pathology, breast, heart, abdomen, and musculoskeletal system.
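
The Methods above mention augmenting scarce labeled data with rotations and left-right flips to reduce overfitting. A minimal torchvision sketch of such a pipeline follows; the rotation range and flip probability are illustrative assumptions, not the authors' settings.

```python
# Illustrative augmentation pipeline; parameters are assumptions, not the paper's.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomRotation(degrees=10),   # small random rotations
    transforms.RandomHorizontalFlip(p=0.5),  # random left-right flips
    transforms.ToTensor(),
])
# Applied on the fly, each epoch sees a differently transformed copy of every
# labeled image, which artificially enlarges the effective training set.
```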

Automated Segmentation of Left Ventricular Myocardium on Cardiac Computed Tomography Using Deep Learning

  • Hyun Jung Koo;June-Goo Lee;Ji Yeon Ko;Gaeun Lee;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.21 no.6 / pp.660-669 / 2020
  • Objective: To evaluate the accuracy of deep learning-based automated segmentation of the left ventricle (LV) myocardium using cardiac CT. Materials and Methods: To develop a fully automated algorithm, 100 subjects with coronary artery disease were randomly selected as a development set (50 training / 20 validation / 30 internal test). An experienced cardiac radiologist generated the manual segmentation of the development set. The trained model was evaluated using a validation set of 1000 cases generated by an experienced technician. Visual assessment was performed to compare the manual and automatic segmentations. In a quantitative analysis, sensitivity and specificity were calculated from the number of pixels where the two three-dimensional masks from the manual and deep learning segmentations overlapped. Similarity indices, such as the Dice similarity coefficient (DSC), were used to evaluate the margin of each segmented mask. Results: The sensitivity and specificity of automated segmentation for each segment (segments 1-16) were high (85.5-100.0%). The DSC was 88.3 ± 6.2%. Among 100 randomly selected cases, all manual segmentation and deep learning masks assessed visually were classified as very accurate to mostly accurate, and there were no inaccurate cases (manual vs. deep learning: very accurate, 31 vs. 53; accurate, 64 vs. 39; mostly accurate, 15 vs. 8). The number of very accurate cases was greater for the deep learning masks than for the manually segmented masks. Conclusion: We present deep learning-based automatic segmentation of the LV myocardium; the results are comparable to manual segmentation data, with high sensitivity, specificity, and similarity scores.
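
The voxel-overlap metrics reported above (sensitivity, specificity, and the Dice similarity coefficient) can be computed directly from two binary 3D masks. The NumPy sketch below is a generic illustration, not the authors' implementation.

```python
import numpy as np

def overlap_metrics(manual: np.ndarray, predicted: np.ndarray):
    """Voxel-wise sensitivity, specificity, and Dice for two binary 3D masks."""
    m, p = manual.astype(bool), predicted.astype(bool)
    tp = np.logical_and(m, p).sum()
    fn = np.logical_and(m, ~p).sum()
    fp = np.logical_and(~m, p).sum()
    tn = np.logical_and(~m, ~p).sum()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, dice
```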

Automatically Diagnosing Skull Fractures Using an Object Detection Method and Deep Learning Algorithm in Plain Radiography Images

  • Tae Seok, Jeong;Gi Taek, Yee; Kwang Gi, Kim;Young Jae, Kim;Sang Gu, Lee;Woo Kyung, Kim
    • Journal of Korean Neurosurgical Society / v.66 no.1 / pp.53-62 / 2023
  • Objective: Deep learning is a machine learning approach based on artificial neural network training, and object detection algorithms using deep learning are among the most powerful tools in image analysis. We analyzed and evaluated the diagnostic performance of a deep learning algorithm for identifying skull fractures in plain radiographic images and investigated its clinical applicability. Methods: A total of 2026 plain radiographic images of the skull (fracture, 991; normal, 1035) were obtained from 741 patients. The RetinaNet architecture was used as the deep learning model. Precision, recall, and average precision were measured to evaluate the deep learning algorithm's diagnostic performance. Results: With ResNet-152, the average precision values at intersection over union (IoU) thresholds of 0.1, 0.3, and 0.5 were 0.7240, 0.6698, and 0.3687, respectively. When the IoU and confidence thresholds were both 0.1, the precision was 0.7292 and the recall was 0.7650. When the IoU threshold was 0.1 and the confidence threshold was 0.6, the true and false rates were 82.9% and 17.1%, respectively. There were significant differences in the true/false and false-positive/false-negative ratios between the anterior-posterior, Towne, and both-lateral views (p=0.032 and p=0.003). Objects detected as false positives included vascular grooves and suture lines. Among false negatives, detection performance was poor for diastatic fractures, fractures crossing the suture line, and fractures around the vascular grooves and orbit. Conclusion: The object detection algorithm applied with deep learning is expected to be a valuable tool in diagnosing skull fractures.
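
The precision, recall, and average precision values above are computed at fixed intersection-over-union (IoU) thresholds. For reference, a minimal sketch of IoU between a predicted and a ground-truth bounding box is shown below; the [x1, y1, x2, y2] box format is an assumption for illustration.

```python
def box_iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive when box_iou(pred, gt) exceeds the
# chosen IoU threshold (0.1, 0.3, or 0.5 above) and its confidence score
# exceeds the confidence threshold.
```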

Learning strategies and deep learning (학습전략과 심층학습)

  • Shin, Hong-Im
    • Korean Medical Education Review / v.11 no.1 / pp.35-43 / 2009
  • Learning strategies are defined as the behaviors and thoughts that a learner engages in during learning and that are intended to influence the learner's encoding process. Today, the demand for teaching students how to learn is increasing, because a large amount of complex material is delivered to them. However, learning strategies should not be reduced to tricks students use to achieve high exam scores. Cognitive researchers and theorists assume that learning strategies are related to two types of learning processing, described as 'surface learning' and 'deep learning'. In addition, learning strategies are associated with learning motivation. Students with a 'meaning orientation', who strive for deep learning, are intrinsically motivated, whereas students with a 'reproduction orientation' or 'achieving orientation' are extrinsically motivated. Therefore, to foster active learning and intrinsic motivation in students, it is not enough simply to teach how to learn. Changes to the curriculum and assessment methods that stimulate deep learning and students' curiosity are needed, with educators and learners working cooperatively.

Developing and Evaluating Deep Learning Algorithms for Object Detection: Key Points for Achieving Superior Model Performance

  • Jang-Hoon Oh;Hyug-Gi Kim;Kyung Mi Lee
    • Korean Journal of Radiology / v.24 no.7 / pp.698-714 / 2023
  • In recent years, artificial intelligence, especially object detection-based deep learning in computer vision, has made significant advancements, driven by the development of computing power and the widespread use of graphics processing units. Object detection-based deep learning techniques have been applied in various fields, including the medical imaging domain, where remarkable achievements have been reported in disease detection. However, the application of deep learning does not always guarantee satisfactory performance, and researchers have been employing trial-and-error to identify the factors contributing to performance degradation and enhance their models. Moreover, due to the black-box problem, the intermediate processes of a deep learning network cannot be comprehended by humans; as a result, identifying problems in a deep learning model that exhibits poor performance can be challenging. This article highlights potential issues that may cause performance degradation at each deep learning step in the medical imaging domain and discusses factors that must be considered to improve the performance of deep learning models. Researchers who wish to begin deep learning research can reduce the required amount of trial-and-error by understanding the issues discussed in this study.

Comparison and analysis of chest X-ray-based deep learning loss function performance (흉부 X-ray 기반 딥 러닝 손실함수 성능 비교·분석)

  • Seo, Jin-Beom;Cho, Young-Bok
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.8 / pp.1046-1052 / 2021
  • With the advance of the fourth industrial revolution and the construction of high-performance computing environments, artificial intelligence is being applied in various industrial fields. In the medical field, deep learning has been applied to tasks such as cancer detection, COVID-19 diagnosis, and bone age measurement using medical images such as X-ray, MRI, and PET together with clinical data. In addition, ICT-medical convergence technology is being researched by applying smart medical devices, IoT devices, and deep learning algorithms. Among these techniques, medical image-based deep learning requires accurate identification of imaging biomarkers, a minimal loss rate, and high accuracy. Therefore, in this paper, we compare and analyze the performance of the cross-entropy loss function used in image classification algorithms, which derives the loss rate during deep learning training on chest X-ray images.
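
The cross-entropy loss compared in the paper above is the negative log-likelihood of the true class under the predicted class probabilities. A minimal NumPy sketch (the class counts and probability values are illustrative only):

```python
import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray, eps: float = 1e-12) -> float:
    """Mean categorical cross-entropy.

    probs  : (N, C) predicted class probabilities (rows sum to 1)
    labels : (N,)   integer ground-truth class indices
    """
    picked = probs[np.arange(len(labels)), labels]  # probability assigned to the true class
    return float(-np.mean(np.log(picked + eps)))    # average negative log-likelihood

# Illustrative example: two chest X-rays, two classes (e.g., normal vs. abnormal).
probs = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
labels = np.array([0, 1])
print(cross_entropy(probs, labels))  # ~0.231
```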

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology / v.22 no.2 / pp.168-178 / 2021
  • Objective: To provide an automatic method for the segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which performs automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for the EA, TL, and FL, respectively. There was a linear relationship between the reference standard and the measurements made by the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was smaller than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for the EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements made with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: Segmentation and diameter measurement of TBAD based on the 3D deep CNN was both accurate and stable. This method is promising for evaluating aortic morphology automatically and alleviating the workload of radiologists in the near future.
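
The Bland-Altman figures above (a mean bias with 95% limits of agreement) can be reproduced from paired diameter measurements. The NumPy sketch below is a generic illustration, independent of the authors' code.

```python
import numpy as np

def bland_altman(measured: np.ndarray, reference: np.ndarray):
    """Mean bias and 95% limits of agreement between paired measurements (mm)."""
    diff = measured - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```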