• Title/Summary/Keyword: Deep learning segmentation

Clinical Implementation of Deep Learning in Thoracic Radiology: Potential Applications and Challenges

  • Eui Jin Hwang;Chang Min Park
    • Korean Journal of Radiology / v.21 no.5 / pp.511-525 / 2020
  • Chest X-ray radiography and computed tomography, the two mainstay modalities in thoracic radiology, are under active investigation with deep learning technology. Deep learning has shown promising performance in tasks including detection, classification, segmentation, and image synthesis, outperforming conventional methods and suggesting its potential for clinical implementation. However, its implementation in daily clinical practice is still in its infancy and faces several challenges, such as the limited ability to explain output results, uncertain benefits for patient outcomes, and incomplete integration into the daily workflow. In this review article, we introduce potential clinical applications of deep learning technology in thoracic radiology and discuss several challenges for its implementation in daily clinical practice.

Revolutionizing Brain Tumor Segmentation in MRI with Dynamic Fusion of Handcrafted Features and Global Pathway-based Deep Learning

  • Faizan Ullah;Muhammad Nadeem;Mohammad Abrar
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.105-125 / 2024
  • Gliomas are the most common malignant brain tumors and cause the most deaths. Manual brain tumor segmentation is expensive, time-consuming, error-prone, and dependent on the radiologist's expertise and experience, and segmentation outcomes for the same patient may differ between radiologists. Thus, more robust and dependable methods are needed. Medical imaging researchers have produced numerous semi-automatic and fully automatic brain tumor segmentation algorithms using ML pipelines with either handcrafted-feature-based or data-driven strategies. Current methods use CNNs or handcrafted features such as symmetry analysis, alignment-based feature analysis, or textural qualities. CNN approaches learn features from the data, while handcrafted features encode domain knowledge. Cascaded algorithms may outperform purely feature-based or purely data-driven (CNN) methods. A cascaded strategy is presented that supplies the CNN with prior information from handcrafted-feature-based ML algorithms. Each patient has a manual ground truth and four MRI modalities (T1, T1c, T2, and FLAIR). Handcrafted features and deep learning are combined to segment brain tumors in a Global Convolutional Neural Network (GCNN). The proposed GCNN architecture, with two parallel CNNs, the CSPathways CNN (CSPCNN) and the MRI Pathways CNN (MRIPCNN), segmented BraTS brain tumors with high accuracy, achieving a Dice score of 87%, higher than the state of the art. This research could improve brain tumor segmentation, helping clinicians diagnose and treat patients.
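
A minimal sketch of the cascaded idea described above, under the simplest interpretation: a handcrafted-feature stage produces a tumor prior map that is stacked with the four MRI modalities as an extra CNN input channel, and the Dice score is used for evaluation. The thresholded-FLAIR prior, the shapes, and all names are illustrative placeholders, not the paper's implementation.

```python
# Sketch only: a handcrafted prior map stacked with the MRI modalities as a
# fifth CNN input channel, plus the Dice score used for evaluation.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P & T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

# Four registered MRI modalities for one axial slice (dummy data).
H, W = 240, 240
t1, t1c, t2, flair = (np.random.rand(H, W).astype(np.float32) for _ in range(4))

# Stage 1 (handcrafted): any classical ML pipeline that outputs a tumor
# probability map from symmetry/texture features; a thresholded FLAIR map
# stands in for it here.
prior = (flair > 0.8).astype(np.float32)

# Stage 2 (deep): the prior becomes an extra input channel, so the CNN sees
# both the raw modalities and the handcrafted prediction.
cnn_input = np.stack([t1, t1c, t2, flair, prior], axis=0)  # (5, H, W)

ground_truth = np.zeros((H, W), dtype=bool)                # dummy manual mask
print(cnn_input.shape, dice_score(prior > 0.5, ground_truth))
```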

Sea Ice Type Classification with Optical Remote Sensing Data (광학영상에서의 해빙종류 분류 연구)

  • Chi, Junhwa;Kim, Hyun-cheol
    • Korean Journal of Remote Sensing / v.34 no.6_2 / pp.1239-1249 / 2018
  • Optical remote sensing sensors provide visually more familiar images than radar images. However, it is difficult to discriminate sea ice types in optical images using machine learning algorithms based on spectral information alone. This study addresses two topics. First, we propose a semantic segmentation approach, drawn from state-of-the-art deep learning algorithms, to identify ice types by learning hierarchical and spatial features of sea ice. Second, we propose a new approach that combines semi-supervised and active learning to obtain accurate and meaningful labels from unlabeled or unseen images, improving the performance of supervised classification across multiple images. With this approach, we successfully added new labels from unlabeled data to automatically update the semantic segmentation model. It should be noted that an operational system for generating ice type products from optical remote sensing data may be possible in the near future.
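
The combination of semi-supervised and active learning described above can be sketched as a simple selection loop: scenes the current model is confident about are pseudo-labeled automatically, while the most uncertain scenes are queried for expert labels. The thresholds, shapes, and function names below are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a pseudo-labeling + active-learning selection step.
import numpy as np

def split_unlabeled(prob_maps, conf_thr=0.95, uncert_thr=0.60):
    """prob_maps: list of (C, H, W) softmax outputs from the current model.
    Returns scene indices to pseudo-label automatically and scene indices to
    send to a human annotator (active-learning query)."""
    auto_label, to_annotate = [], []
    for i, p in enumerate(prob_maps):
        confidence = p.max(axis=0)        # per-pixel max class probability
        mean_conf = confidence.mean()
        if mean_conf >= conf_thr:         # confident scene -> pseudo-label
            auto_label.append(i)
        elif mean_conf <= uncert_thr:     # uncertain scene -> ask an expert
            to_annotate.append(i)
    return auto_label, to_annotate

# Dummy softmax outputs for 3 unlabeled scenes with 4 ice classes.
probs = [np.random.dirichlet(np.ones(4), size=(64, 64)).transpose(2, 0, 1)
         for _ in range(3)]
print(split_unlabeled(probs))
```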

Tongue Image Segmentation Using CNN and Various Image Augmentation Techniques (콘볼루션 신경망(CNN)과 다양한 이미지 증강기법을 이용한 혀 영역 분할)

  • Ahn, Ilkoo;Bae, Kwang-Ho;Lee, Siwoo
    • Journal of Biomedical Engineering Research / v.42 no.5 / pp.201-210 / 2021
  • In Korean medicine, tongue diagnosis is one of the important methods for diagnosing abnormalities in the body. Representative features used in tongue diagnosis include color, shape, texture, cracks, and tooth marks. When diagnosing a patient through these features, the criteria may differ between oriental medical doctors, and even the same practitioner may reach different results depending on time and work environment. To overcome this problem, recent studies have sought to automate and standardize tongue diagnosis using machine learning, and the basic first step of such a machine learning-based tongue diagnosis system is tongue segmentation. In this paper, image data are augmented based on the main tongue features, and the backbones of several well-known deep learning architectures are used for automatic tongue segmentation. The experimental results show that the proposed augmentation technique improves the accuracy of tongue segmentation and that automatic tongue segmentation can be performed with a high accuracy of 99.12%.
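
A minimal sketch of joint image/mask augmentation for tongue segmentation, using the albumentations library. The specific transforms and parameters are assumptions chosen to mimic pose, lighting, and color variation; the paper's exact augmentation set is not reproduced here.

```python
# Sketch: the same geometric transforms are applied to image and mask.
import numpy as np
import albumentations as A

augment = A.Compose([
    A.HorizontalFlip(p=0.5),                          # tongue photos are roughly symmetric
    A.Rotate(limit=15, p=0.5),                        # small head-pose variation
    A.RandomBrightnessContrast(p=0.5),                # lighting differences between clinics
    A.HueSaturationValue(hue_shift_limit=10, p=0.3),  # tongue-color variation
])

image = np.zeros((256, 256, 3), dtype=np.uint8)       # dummy photo
mask = np.zeros((256, 256), dtype=np.uint8)           # dummy tongue mask

out = augment(image=image, mask=mask)                 # mask follows the same geometry
aug_image, aug_mask = out["image"], out["mask"]
print(aug_image.shape, aug_mask.shape)
```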

2-Step Structural Damage Analysis Based on Foundation Model for Structural Condition Assessment (시설물 상태평가를 위한 파운데이션 모델 기반 2-Step 시설물 손상 분석)

  • Hyunsoo Park;Hwiyoung Kim ;Dongki Chung
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.621-635 / 2023
  • The assessment of structural condition is a crucial process for evaluating a structure's usability and determining its diagnostic cycle. The currently employed manpower-based methods suffer from issues related to safety, efficiency, and objectivity. To address these concerns, research based on deep learning using images is being conducted. However, acquiring structural damage data is challenging, making it difficult to construct a substantial amount of training data and thus limiting the effectiveness of deep learning-based condition assessment. In this study, we propose a foundation model-based 2-step structural damage analysis to overcome the lack of training data in image-based structural condition assessment. We subdivided the elements of structural condition assessment into instantiation and quantification. In the quantification step, we applied a foundation model for image segmentation. Our method demonstrated a 10-percentage-point increase in mean intersection over union compared to conventional image segmentation techniques, with a notable 40-percentage-point improvement in the case of rebar exposure. We anticipate that the proposed approach will enhance performance in domains where acquiring training data is challenging.
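
The 2-step idea (instantiation, then quantification with a segmentation foundation model) can be sketched as below. The abstract does not name the foundation model, so this sketch assumes a SAM-style promptable segmenter from the segment-anything package purely for illustration; the instance detector and the checkpoint path are placeholders.

```python
# Sketch of an "instantiate then quantify" workflow with a promptable
# foundation segmenter (segment-anything assumed, not confirmed by the paper).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def detect_damage_instances(image: np.ndarray) -> list:
    """Step 1 (instantiation): any detector returning damage bounding boxes
    as [x0, y0, x1, y1]; a fixed dummy box stands in here."""
    return [np.array([50, 60, 200, 180])]

def quantify_damage(image: np.ndarray, checkpoint: str = "sam_vit_h.pth") -> list:
    """Step 2 (quantification): prompt the foundation model with each box to
    obtain a pixel-accurate mask whose area can then be measured."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)  # hypothetical local checkpoint
    predictor = SamPredictor(sam)
    predictor.set_image(image)                 # image: RGB uint8 (H, W, 3)
    masks = []
    for box in detect_damage_instances(image):
        m, scores, _ = predictor.predict(box=box, multimask_output=False)
        masks.append(m[0])                     # (H, W) boolean mask per damage instance
    return masks
```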

Pyramidal Deep Neural Networks for the Accurate Segmentation and Counting of Cells in Microscopy Data

  • Vununu, Caleb;Kang, Kyung-Won;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.3 / pp.335-348 / 2019
  • Cell segmentation and counting are among the most important tasks required to provide an exhaustive understanding of biological images. Conventional features lack spatial consistency, causing neighboring cells to merge and thus complicating the counting task. In this work, we propose a cascade of networks that take as inputs different versions of the original image. After constructing a Gaussian pyramid representation of the microscopy data, inputs of different size and spatial resolution are given to a cascade of deep convolutional autoencoders whose task is to reconstruct the segmentation mask. The coarse masks obtained from the different networks are summed to produce the final mask. The main contribution of this work is a novel method for cell counting: unlike most methods, which use the obtained segmentation mask as the prior information for counting, we utilize the hidden latent representations, often called high-level features, as the inputs of a neural network-based regressor. While the segmentation part of our method performs as well as conventional deep learning methods, the proposed cell counting approach outperforms state-of-the-art methods.
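
A minimal PyTorch sketch of the pyramidal idea, under simplifying assumptions: one small convolutional autoencoder per pyramid level, coarse masks summed into the final mask, and a regressor that predicts the cell count from the concatenated latent codes. Layer sizes, the avg-pool pyramid, and all names are illustrative, not the authors' architecture.

```python
# Sketch: pyramid of inputs -> autoencoders -> summed masks + latent-code regressor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),
        )
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.Unflatten(1, (32, 8, 8)),
            nn.Upsample(scale_factor=4), nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.enc(x)                                     # latent "high-level" features
        mask = self.dec(z)
        return z, F.interpolate(mask, size=x.shape[-2:])    # coarse mask at input size

levels = 3
aes = nn.ModuleList(ConvAE() for _ in range(levels))
regressor = nn.Linear(64 * levels, 1)          # cell count predicted from latent codes

x = torch.rand(1, 1, 256, 256)                 # dummy microscopy image
latents, masks = [], []
for ae in aes:
    z, m = ae(x)
    latents.append(z)
    masks.append(F.interpolate(m, size=(256, 256)))
    x = F.avg_pool2d(x, 2)                     # next coarser level (stand-in for Gaussian blur + downsample)

final_mask = torch.stack(masks).sum(dim=0)     # summed coarse masks
count = regressor(torch.cat(latents, dim=1))   # counting from latent features
print(final_mask.shape, count.shape)
```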

Deep learning-based apical lesion segmentation from panoramic radiographs

  • Il-Seok, Song;Hak-Kyun, Shin;Ju-Hee, Kang;Jo-Eun, Kim;Kyung-Hoe, Huh;Won-Jin, Yi;Sam-Sun, Lee;Min-Suk, Heo
    • Imaging Science in Dentistry / v.52 no.4 / pp.351-357 / 2022
  • Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in medical and dental research. CNNs can provide an effective diagnostic methodology that allows the detection of early-stage diseases. Therefore, this study aimed to evaluate the performance of a deep CNN algorithm for apical lesion segmentation from panoramic radiographs. Materials and Methods: A total of 1,000 panoramic images showing apical lesions were separated into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. The performance in identifying apical lesions was evaluated by calculating the precision, recall, and F1-score. Results: In the test group of 180 apical lesions, 147 lesions were segmented from panoramic radiographs at an intersection over union (IoU) threshold of 0.3. The F1-score values, as a measure of performance, were 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively. Conclusion: This study showed the potential utility of a deep learning-guided approach for the segmentation of apical lesions. The deep CNN algorithm using U-Net demonstrated considerably high performance in detecting apical lesions.
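
Lesion-level precision, recall, and F1 at an IoU threshold, as reported above, are typically computed by matching each predicted lesion to an unmatched ground-truth lesion whose overlap reaches the threshold. The sketch below shows that general evaluation logic; it is not the paper's evaluation script.

```python
# Sketch: lesion-level precision/recall/F1 at an IoU threshold.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def lesion_f1(pred_masks, gt_masks, iou_thr=0.3):
    matched_gt, tp = set(), 0
    for p in pred_masks:
        for j, g in enumerate(gt_masks):
            if j not in matched_gt and iou(p, g) >= iou_thr:
                matched_gt.add(j)
                tp += 1
                break
    precision = tp / len(pred_masks) if pred_masks else 0.0
    recall = tp / len(gt_masks) if gt_masks else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Dummy example: one overlapping pair of lesion masks.
gt = np.zeros((64, 64), bool); gt[10:30, 10:30] = True
pr = np.zeros((64, 64), bool); pr[12:32, 12:32] = True
print(lesion_f1([pr], [gt], iou_thr=0.3))
```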

Analysis of Trends of Medical Image Processing based on Deep Learning

  • Seokjin Im
    • International Journal of Advanced Culture Technology / v.11 no.1 / pp.283-289 / 2023
  • AI is bringing about drastic changes not only in technology but also in society and culture. Medical AI based on deep learning has developed rapidly. In particular, the field of medical image analysis has shown that AI can identify the characteristics of medical images more accurately and quickly than clinicians. Evaluating the latest results of AI-based medical image processing is important for understanding the direction in which medical AI will develop. In this paper, we analyze and evaluate the latest trends in AI-based medical image analysis, which is showing great achievements in the medical AI field of the healthcare industry. We analyze deep learning models for medical image analysis and AI-based medical image segmentation for quantitative analysis. We also evaluate the future development direction in terms of marketability, including the size and characteristics of the medical AI market and the restrictions on market growth. To assess the latest trends in deep learning-based medical image processing, we analyze recent research results on deep learning-based medical image processing and data on the medical AI market. The analyzed trends provide an overall view of, and implications for, the development of deep learning in medical fields.

Development of Deep Learning-Based Damage Detection Prototype for Concrete Bridge Condition Evaluation (콘크리트 교량 상태평가를 위한 딥러닝 기반 손상 탐지 프로토타입 개발)

  • Nam, Woo-Suk;Jung, Hyunjun;Park, Kyung-Han;Kim, Cheol-Min;Kim, Gyu-Seon
    • KSCE Journal of Civil and Environmental Engineering Research / v.42 no.1 / pp.107-116 / 2022
  • Recently, research has been actively conducted on technologies for inspecting facilities that are difficult for humans to access through image-based analysis and assessment. This research was conducted to study the conditions for deep learning-based image data on bridges and to develop a prototype program for bridge condition evaluation. To develop the deep learning-based bridge damage detection prototype, Mask R-CNN was applied as the segmentation model, which enables damage detection and quantification, and a training dataset of 5,140 images (including open data) was constructed with labeling suited to the damage types. Performance verification of the model, with precision and recall analyzed for concrete cracks, stripping/spalling, rebar exposure, and paint stripping, showed a precision of 95.2 % and a recall of 93.8 %. A second performance verification was performed on on-site concrete crack data using the damage rate of bridge members.
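
Setting up Mask R-CNN for a custom set of damage classes usually follows the standard torchvision fine-tuning recipe sketched below; the class count (background plus the four damage types named above) and the training details are assumptions, not the authors' configuration.

```python
# Sketch: torchvision Mask R-CNN configured for bridge-damage classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 5  # background + crack, stripping/spalling, rebar exposure, paint stripping

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head for the damage classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head so masks are produced per damage class.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

model.train()  # then train on the labeled bridge-damage dataset
```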

A Study on the Performance of Enhanced Deep Fully Convolutional Neural Network Algorithm for Image Object Segmentation in Autonomous Driving Environment (자율주행 환경에서 이미지 객체 분할을 위한 강화된 DFCN 알고리즘 성능연구)

  • Kim, Yeonggwang;Kim, Jinsul
    • Smart Media Journal / v.9 no.4 / pp.9-16 / 2020
  • Recently, various studies have been conducted to integrate image segmentation into smart factory and autonomous driving applications. In particular, image segmentation systems using deep learning algorithms have been researched and developed to the point of learning from large volumes of data with high accuracy. To use image segmentation in the autonomous driving sector, sufficient training on large amounts of data is needed, and a streaming environment that processes drivers' data in real time is important for the accuracy of safe operation on highways and in child protection zones. Therefore, we propose a novel DFCN algorithm that enhances existing FCN algorithms and can be applied to various road environments, and we demonstrate that the DFCN algorithm improves the loss value by 1.3% compared to previous FCN algorithms. Moreover, the proposed DFCN approach was applied to the existing U-Net algorithm to preserve frequency information in the image, resulting in better performance than the classical FCN algorithm in the autonomous driving environment.
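
The abstract does not specify how DFCN modifies the underlying FCN, so only the baseline can be sketched: a torchvision FCN adapted to a road-scene label set. The Cityscapes-style class count and the input size are assumptions for illustration.

```python
# Sketch: baseline FCN for road-scene segmentation (not the proposed DFCN).
import torch
from torchvision.models.segmentation import fcn_resnet50

num_classes = 19                         # assumption: Cityscapes-style label set
model = fcn_resnet50(weights=None, num_classes=num_classes)
model.eval()

frame = torch.rand(1, 3, 512, 1024)      # dummy driving-camera frame
with torch.no_grad():
    logits = model(frame)["out"]         # (1, num_classes, 512, 1024)
pred = logits.argmax(dim=1)              # per-pixel class map
print(pred.shape)
```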