• Title/Summary/Keyword: Deep learning (DL)

Search results: 115

Bone Suppression on Chest Radiographs for Pulmonary Nodule Detection: Comparison between a Generative Adversarial Network and Dual-Energy Subtraction

  • Kyungsoo Bae;Dong Yul Oh;Il Dong Yun;Kyung Nyeo Jeon
    • Korean Journal of Radiology / v.23 no.1 / pp.139-149 / 2022
  • Objective: To compare the effects of bone suppression imaging using deep learning (BSp-DL) based on a generative adversarial network (GAN) and bone subtraction imaging using a dual energy technique (BSt-DE) on radiologists' performance for pulmonary nodule detection on chest radiographs (CXRs). Materials and Methods: A total of 111 adults, including 49 patients with 83 pulmonary nodules, who underwent both CXR using the dual energy technique and chest CT, were enrolled. Using CT as a reference, two independent radiologists evaluated CXR images for the presence or absence of pulmonary nodules in three reading sessions (standard CXR, BSt-DE CXR, and BSp-DL CXR). Person-wise and nodule-wise performances were assessed using receiver-operating characteristic (ROC) and alternative free-response ROC (AFROC) curve analyses, respectively. Subgroup analyses based on nodule size, location, and the presence of overlapping bones were performed. Results: BSt-DE with an area under the AFROC curve (AUAFROC) of 0.996 and 0.976 for readers 1 and 2, respectively, and BSp-DL with AUAFROC of 0.981 and 0.958, respectively, showed better nodule-wise performance than standard CXR (AUAFROC of 0.907 and 0.808, respectively; p ≤ 0.005). In the person-wise analysis, BSp-DL with an area under the ROC curve (AUROC) of 0.984 and 0.931 for readers 1 and 2, respectively, showed better performance than standard CXR (AUROC of 0.915 and 0.798, respectively; p ≤ 0.011) and comparable performance to BSt-DE (AUROC of 0.988 and 0.974; p ≥ 0.064). BSt-DE and BSp-DL were superior to standard CXR for detecting nodules overlapping with bones (p < 0.017) or in the upper/middle lung zone (p < 0.017). BSt-DE was superior (p < 0.017) to BSp-DL in detecting peripheral and sub-centimeter nodules. Conclusion: BSp-DL (GAN-based bone suppression) showed comparable performance to BSt-DE and can improve radiologists' performance in detecting pulmonary nodules on CXRs. Nevertheless, for better delineation of small and peripheral nodules, further technical improvements are required.
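
As an informal illustration of the person-wise ROC analysis described above, the area under the ROC curve per reader and reading session could be computed along the following lines; the labels, confidence scores, and array sizes below are hypothetical stand-ins, not the study's data.

```python
# Illustrative sketch only: person-wise AUROC per reader and session,
# assuming binary ground truth from CT and reader confidence scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=111)            # hypothetical: 1 = nodule present on CT

# Hypothetical confidence scores (0-1) for one reader in each reading session.
sessions = {
    "standard_CXR": rng.random(111),
    "BSt_DE": rng.random(111),
    "BSp_DL": rng.random(111),
}

for name, scores in sessions.items():
    auc = roc_auc_score(y_true, scores)          # area under the ROC curve
    print(f"{name}: AUROC = {auc:.3f}")
```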

Study on data preprocessing methods for considering snow accumulation and snow melt in dam inflow prediction using machine learning & deep learning models (머신러닝&딥러닝 모델을 활용한 댐 일유입량 예측시 융적설을 고려하기 위한 데이터 전처리에 대한 방법 연구)

  • Jo, Youngsik;Jung, Kwansue
    • Journal of Korea Water Resources Association / v.57 no.1 / pp.35-44 / 2024
  • Research on dam inflow prediction has actively explored data-driven machine learning and deep learning (ML&DL) tools across diverse domains. Precise dam inflow prediction requires not only strong inherent model performance but also attention to model characteristics and to data preprocessing. In particular, existing rainfall data, derived from snowfall through heating facilities, distort the correlation between snow accumulation and rainfall in dam basins influenced by snow accumulation, such as the Soyang Dam basin. This study focuses on the preprocessing of rainfall data needed to apply ML&DL models to dam inflow prediction in such basins, so that physical phenomena such as reduced outflow during winter due to snow accumulation and increased outflow during spring despite little or no rain can be represented. Three machine learning models (SVM, RF, LGBM) and two deep learning models (LSTM, TCN) were built from rainfall and inflow series. With optimal hyperparameter tuning, the selected models achieved a high level of predictive performance, with NSE ranging from 0.842 to 0.894. In addition, a simulated snow-accumulation algorithm was developed to generate rainfall data corrected for snow accumulation. Applying this correction to the ML&DL models yielded NSE values of 0.841 to 0.896, a similarly high level of predictive performance. Notably, during the snow-accumulation period, correcting rainfall in the training data led to more accurate simulation of the observed inflow. This underscores the importance of data preprocessing that accounts for physical factors such as snowfall and snowmelt when constructing data-driven models.
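
The abstract does not specify the simulated snow-accumulation algorithm; a minimal degree-day-style sketch of the kind of rainfall correction it describes (precipitation stored as snowpack in cold periods and released as melt in warm periods) might look as follows, with the temperature threshold and melt factor as assumed parameters.

```python
import numpy as np

def correct_rainfall_for_snow(precip_mm, temp_c, t_snow=0.0, melt_factor=3.0):
    """Degree-day style correction (illustrative only, not the paper's algorithm).

    Precipitation falling at or below t_snow (deg C) is stored as snowpack instead
    of counted as rainfall; when temperature exceeds t_snow, snowpack melts at
    melt_factor (mm per deg C per day) and is added back to effective rainfall.
    """
    snowpack = 0.0
    effective = np.zeros_like(precip_mm, dtype=float)
    for i, (p, t) in enumerate(zip(precip_mm, temp_c)):
        if t <= t_snow:
            snowpack += p                       # store precipitation as snow
            rain = 0.0
        else:
            rain = p
            melt = min(snowpack, melt_factor * (t - t_snow))
            snowpack -= melt
            rain += melt                        # snowmelt contributes to inflow-forming rain
        effective[i] = rain
    return effective

# Hypothetical daily series
precip = np.array([5.0, 10.0, 0.0, 0.0, 2.0])
temp = np.array([-3.0, -1.0, 2.0, 5.0, 6.0])
print(correct_rainfall_for_snow(precip, temp))
```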

Machine Learning Approaches to Corn Yield Estimation Using Satellite Images and Climate Data: A Case of Iowa State

  • Kim, Nari;Lee, Yang-Won
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.4 / pp.383-390 / 2016
  • Remote sensing data have been widely used to estimate crop yields with statistical methods such as regression models. Machine learning, an efficient empirical method for classification and prediction, is another approach to crop yield estimation. This paper describes corn yield estimation in Iowa State using four machine learning approaches: SVM (Support Vector Machine), RF (Random Forest), ERT (Extremely Randomized Trees), and DL (Deep Learning), and compares their validation statistics. To examine the seasonal sensitivity of corn yields, three period groups were set up: (1) MJJAS (May to September), (2) JA (July and August), and (3) OC (optimal combination of months). Overall, the DL method showed the highest accuracy in terms of the correlation coefficient for all three period groups. Accuracies were most favorable in the OC group, indicating that an optimal combination of months can be significant in statistical modeling of crop yields. The differences between our predictions and USDA (United States Department of Agriculture) statistics were about 6-8%, which shows that machine learning approaches can be a viable option for crop yield modeling. In particular, DL produced more stable results by mitigating the overfitting problem of generic machine learning methods.
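
As a rough sketch of the tree-ensemble side of such a comparison, RF and ERT could be validated with a correlation coefficient as shown below; the feature matrix of satellite and climate variables is synthetic, and the models are generic scikit-learn regressors rather than the paper's tuned configurations.

```python
# Illustrative sketch: tree-ensemble yield regression with a correlation-based check.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((300, 10))                            # e.g., monthly vegetation indices + climate features
y = X @ rng.random(10) + rng.normal(0, 0.1, 300)     # synthetic yield target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestRegressor(n_estimators=200, random_state=0),
              ExtraTreesRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    r = np.corrcoef(y_te, pred)[0, 1]                # correlation coefficient, as in the paper's validation
    print(type(model).__name__, f"r = {r:.3f}")
```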

Deep Convolution Neural Networks in Computer Vision: a Review

  • Yoo, Hyeon-Joong
    • IEIE Transactions on Smart Processing and Computing / v.4 no.1 / pp.35-43 / 2015
  • Over the past couple of years, tremendous progress has been made in applying deep learning (DL) techniques to computer vision. In particular, deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance on standard recognition datasets and tasks such as the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Among them, the GoogLeNet network, a radically redesigned DCNN based on the Hebbian principle and scale invariance, set a new state of the art for classification and detection in ILSVRC 2014. Since there are many deep learning techniques, this review focuses on those directly related to DCNNs, especially the architecture and techniques employed in the GoogLeNet network.
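
GoogLeNet is built from Inception modules that run 1x1, 3x3, and 5x5 convolutions and a pooled branch in parallel and concatenate the results along the channel axis. A minimal PyTorch sketch of that idea follows; the channel counts are illustrative and do not match GoogLeNet's actual configuration.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Minimal Inception-style module: parallel 1x1, 3x3, 5x5 and pooled branches
    concatenated along the channel dimension (channel counts are illustrative)."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, 1),            # 1x1 reduction
                                nn.Conv2d(16, 24, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, 1),
                                nn.Conv2d(8, 16, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 16, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

x = torch.randn(1, 64, 28, 28)
print(InceptionBlock(64)(x).shape)   # torch.Size([1, 72, 28, 28])
```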

Artificial Intelligence in Neuroimaging: Clinical Applications

  • Choi, Kyu Sung;Sunwoo, Leonard
    • Investigative Magnetic Resonance Imaging / v.26 no.1 / pp.1-9 / 2022
  • Artificial intelligence (AI) powered by deep learning (DL) has shown remarkable progress in image recognition tasks. Over the past decade, AI has proven its feasibility for applications in medical imaging. Various aspects of clinical practice in neuroimaging can be improved with the help of AI. For example, AI can aid in detecting brain metastases, predicting treatment response of brain tumors, generating a parametric map of dynamic contrast-enhanced MRI, and enhancing radiomics research by extracting salient features from input images. In addition, image quality can be improved via AI-based image reconstruction or motion artifact reduction. In this review, we summarize recent clinical applications of DL in various aspects of neuroimaging.

Automated Measurement of Native T1 and Extracellular Volume Fraction in Cardiac Magnetic Resonance Imaging Using a Commercially Available Deep Learning Algorithm

  • Suyon Chang;Kyunghwa Han;Suji Lee;Young Joong Yang;Pan Ki Kim;Byoung Wook Choi;Young Joo Suh
    • Korean Journal of Radiology / v.23 no.12 / pp.1251-1259 / 2022
  • Objective: T1 mapping provides valuable information regarding cardiomyopathies. Manual drawing is time-consuming and prone to subjective errors. Therefore, this study aimed to test a deep learning (DL) algorithm for the automated measurement of native T1 and extracellular volume (ECV) fractions in cardiac magnetic resonance (CMR) imaging with a temporally separated dataset. Materials and Methods: CMR images obtained for 95 participants (mean age ± standard deviation, 54.5 ± 15.2 years), including 36 with left ventricular hypertrophy (12 hypertrophic cardiomyopathy, 12 Fabry disease, and 12 amyloidosis), 32 with dilated cardiomyopathy, and 27 healthy volunteers, were included. A commercial DL algorithm based on 2D U-net (Myomics-T1 software, version 1.0.0) was used for the automated analysis of T1 maps. Four radiologists, as study readers, performed manual analysis. The reference standard was the consensus result of the manual analysis by two additional expert readers. The segmentation performance of the DL algorithm and the correlation and agreement between the automated measurement and the reference standard were assessed. Interobserver agreement among the four radiologists was analyzed. Results: DL successfully segmented the myocardium in 99.3% of slices in the native T1 map and 89.8% of slices in the post-T1 map, with Dice similarity coefficients of 0.86 ± 0.05 and 0.74 ± 0.17, respectively. Native T1 and ECV showed strong correlation and agreement between DL and the reference: for T1, r = 0.967 (95% confidence interval [CI], 0.951-0.978) and a bias of 9.5 msec (95% limits of agreement [LOA], -23.6 to 42.6 msec); for ECV, r = 0.987 (95% CI, 0.980-0.991) and a bias of 0.7% (95% LOA, -2.8% to 4.2%) on a per-subject basis. Agreements between DL and each of the four radiologists were excellent (intraclass correlation coefficient [ICC] of 0.98-0.99 for both native T1 and ECV), comparable to the pairwise agreement between the radiologists (ICC of 0.97-1.00 and 0.99-1.00 for native T1 and ECV, respectively). Conclusion: The DL algorithm allowed automated T1 and ECV measurements comparable to those of radiologists.
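
The reported metrics are standard: a Dice similarity coefficient for segmentation overlap and a Bland-Altman bias with 95% limits of agreement for the measurements. A small sketch of how both could be computed is shown below; the masks and measurement values are made-up examples, not study data.

```python
# Illustrative metrics matching those reported above: Dice similarity for the
# segmentation and Bland-Altman bias / limits of agreement for the measurements.
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def bland_altman(auto_vals, ref_vals):
    diff = np.asarray(auto_vals, float) - np.asarray(ref_vals, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa      # bias and 95% limits of agreement

# Hypothetical example values
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[12:42, 12:42] = True
print("Dice:", round(dice(a, b), 3))
print("Bland-Altman (bias, lower LOA, upper LOA):",
      bland_altman([1210, 1250, 1190], [1200, 1245, 1185]))
```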

Deep Neural Network-Based Critical Packet Inspection for Improving Traffic Steering in Software-Defined IoT

  • Tam, Prohim;Math, Sa;Kim, Seokhoon
    • Journal of Internet Computing and Services / v.22 no.6 / pp.1-8 / 2021
  • With the rapid growth of intelligent devices and communication technologies, the 5G network environment has become more heterogeneous and complex in terms of service management and orchestration. The 5G architecture requires supporting technologies to address existing challenges and improve Quality of Service (QoS) and Quality of Experience (QoE). Among these challenges, traffic steering is a key element that requires an optimal solution for smart guidance, control, and system reliability. Mobile edge computing (MEC), software-defined networking (SDN), network functions virtualization (NFV), and deep learning (DL) play complementary roles in developing flexible computation and extensible flow-rule management. The proposed system provides accurate flow recommendation, centralized control, and reliable distributed connectivity based on inspection of packet conditions. With the system deployed, each packet is classified and directed to the optimal destination matching its preferences and conditions. To evaluate the proposed scheme, a network simulator was used to capture end-to-end QoS performance metrics, and SDN flow-rule installation was tested to illustrate the control function corresponding to the DL-based output. The intelligent steering of network traffic is configured cooperatively in the SDN controller and NFV orchestrator, yielding a variety of benefits for massive real-time Internet of Things (IoT) performance.
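
The paper does not detail its DNN architecture here; as a heavily simplified, hypothetical sketch, a small fully connected classifier could map packet-level features to a traffic class that an SDN controller then uses to pick a flow rule. Feature dimensions and class names below are assumptions.

```python
# Toy sketch (not the paper's model): a small fully connected classifier that maps
# packet-level features to a traffic class, which an SDN controller could use to
# choose a flow rule. Feature set and classes are hypothetical.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 8, 3        # e.g., size, inter-arrival time, protocol flags -> {critical, normal, bulk}

model = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, N_CLASSES),
)

packets = torch.randn(4, N_FEATURES)             # hypothetical packet feature vectors
classes = model(packets).argmax(dim=1)           # predicted traffic class per packet
print(classes)                                   # controller would install matching flow rules
```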

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.635-644 / 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) are being used in various applications, and optical images are mainly used as training data for DL models. The main objective of this paper is object segmentation and building detection by utilizing multimodal datasets, in addition to optical images, to train the Detectron2 model, which is one of the improved R-CNN (Region-based Convolutional Neural Network) frameworks. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features representing statistical texture information derived from the LiDAR data were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a combination of early fusion and late fusion, improved the building detection rate by 32.65% compared with training on optical images only. The experiments demonstrated the complementary effect of training on multimodal data with distinct characteristics and of the chosen fusion strategy.
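
As a conceptual sketch of hybrid fusion (not the actual Detectron2 pipeline), early fusion can stack modalities as input channels while late fusion averages per-modality predictions, and the two can be mixed; the tiny networks and tensors below are placeholders.

```python
# Conceptual sketch of hybrid fusion: early fusion stacks modalities as input
# channels; late fusion averages per-modality predictions; hybrid mixes both.
import torch
import torch.nn as nn

def small_cnn(in_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

infrared = torch.randn(1, 1, 128, 128)   # infrared aerial image
lidar    = torch.randn(1, 1, 128, 128)   # rasterized LiDAR (e.g., height)
texture  = torch.randn(1, 1, 128, 128)   # Haralick texture features from LiDAR

# Early fusion: concatenate modalities along the channel axis.
early_logits = small_cnn(3)(torch.cat([infrared, lidar, texture], dim=1))

# Late fusion: independent branches, predictions averaged.
branches = [small_cnn(1) for _ in range(3)]
late_logits = torch.stack([b(m) for b, m in
                           zip(branches, [infrared, lidar, texture])]).mean(dim=0)

hybrid_logits = (early_logits + late_logits) / 2   # one simple way to mix the two
print(hybrid_logits.shape)
```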

Harnessing Deep Learning for Abnormal Respiratory Sound Detection (이상 호흡음 탐지를 위한 딥러닝 활용)

  • Gyurin Byun;Huigyu Yang;Hyunseung Choo
    • Annual Conference of KIPS / 2023.11a / pp.641-643 / 2023
  • Automated analysis of respiratory sounds using Deep Learning (DL) plays a pivotal role in the early diagnosis of lung disease. However, current DL methods are limited because they often examine the spatial and temporal characteristics of respiratory sounds separately. This study proposes a new DL framework that captures spatial features through convolution operations and exploits the spatio-temporal correlations of those features using a temporal convolutional network. The proposed framework integrates convolutional networks within an ensemble learning approach, substantially improving the accuracy of detecting respiratory anomalies and diseases in lung sound recordings. Experiments on the well-known ICBHI 2017 challenge dataset show that the proposed framework outperforms comparison models on the 4-class task for respiratory anomaly and disease detection. In particular, in terms of the score metric reflecting sensitivity and specificity, improvements of up to 45.91% and 14.1% are obtained on the binary and multi-class respiratory anomaly detection tasks, respectively. These results highlight the clear advantages of our method over existing techniques and show its potential to drive future innovation in respiratory healthcare technology.
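
A minimal sketch of the kind of architecture described, a 2D CNN for spatial features over a spectrogram followed by dilated temporal convolutions (TCN-style) over the resulting frame features, is given below; layer sizes and the 4-class head are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: a 2D CNN extracts spatial features from a spectrogram and a
# dilated 1D (temporal) convolution stack models their temporal correlations.
import torch
import torch.nn as nn

class SpatioTemporalNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.spatial = nn.Sequential(                  # spectrogram -> per-frame features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),           # pool frequency axis, keep time axis
        )
        self.temporal = nn.Sequential(                 # dilated temporal convolutions (TCN-style)
            nn.Conv1d(32, 32, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, spec):                           # spec: (batch, 1, freq, time)
        f = self.spatial(spec).squeeze(2)              # (batch, 32, time)
        t = self.temporal(f).mean(dim=2)               # (batch, 32)
        return self.head(t)

print(SpatioTemporalNet()(torch.randn(2, 1, 64, 256)).shape)   # torch.Size([2, 4])
```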

Optimization of Action Recognition based on Slowfast Deep Learning Model using RGB Video Data (RGB 비디오 데이터를 이용한 Slowfast 모델 기반 이상 행동 인식 최적화)

  • Jeong, Jae-Hyeok;Kim, Min-Suk
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1049-1058 / 2022
  • HAR (Human Action Recognition), including anomaly and object detection, has become a trend in research fields that focus on utilizing Artificial Intelligence (AI) methods to analyze patterns of human action in crime-ridden areas, media services, and industrial facilities. Especially in real-time systems using video streaming data, HAR has become an even more important AI-based research field for application development, and many research areas using HAR are currently being developed and improved. In this paper, we propose and analyze a deep-learning-based HAR scheme that can be applied to media services using RGB video streaming data directly, without feature-extraction pre-processing. For the method, we adopt Slowfast, a Deep Neural Network (DNN)-based model, trained on an open dataset (HMDB-51 or UCF101) to improve prediction accuracy.
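
As a conceptual sketch of the SlowFast idea rather than the reference implementation, one pathway processes a temporally subsampled version of the RGB clip while the other sees the full frame rate, and their features are fused before classification; channel counts, the subsampling factor, and the 51-class head (matching HMDB-51) below are illustrative.

```python
# Toy two-pathway sketch of the SlowFast idea (not the actual SlowFast network):
# the slow path sees few frames, the fast path sees all frames, features are fused.
import torch
import torch.nn as nn

class TinySlowFast(nn.Module):
    def __init__(self, n_classes=51, alpha=4):
        super().__init__()
        self.alpha = alpha                                   # temporal subsampling for the slow path
        self.slow = nn.Sequential(nn.Conv3d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.fast = nn.Sequential(nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Linear(32 + 8, n_classes)

    def forward(self, clip):                                 # clip: (batch, 3, T, H, W)
        slow_in = clip[:, :, ::self.alpha]                   # every alpha-th frame
        feats = torch.cat([self.slow(slow_in), self.fast(clip)], dim=1)
        return self.head(feats)

clip = torch.randn(1, 3, 32, 112, 112)                       # a short RGB clip
print(TinySlowFast()(clip).shape)                            # torch.Size([1, 51])
```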