• Title/Summary/Keyword: interpretable deep learning

Deep Interpretable Learning for a Rapid Response System (긴급대응 시스템을 위한 심층 해석 가능 학습)

  • Nguyen, Trong-Nghia;Vo, Thanh-Hung;Kho, Bo-Gun;Lee, Guee-Sang;Yang, Hyung-Jeong;Kim, Soo-Hyung
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.805-807 / 2021
  • In-hospital cardiac arrest is a significant problem for medical systems. Although traditional early warning systems have been widely applied, they still have many drawbacks, such as high false-alarm rates and low sensitivity. This paper proposes a deep learning strategy for rapid response systems based on TabNet, a novel interpretable architecture for tabular data. The approach was validated on a dataset collected over more than 10 years from two hospitals of Chonnam National University, Korea. The evaluation metrics are the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Experiments on a large real-time dataset show that the proposed method outperforms other machine learning-based approaches.
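
To make the setup concrete, here is a minimal sketch of TabNet-style training scored with AUROC/AUPRC, using the open-source pytorch-tabnet package; the synthetic data and hyperparameters are placeholders, not the paper's actual clinical setup.

```python
# Hypothetical sketch: TabNet on tabular clinical data, scored with AUROC/AUPRC.
# Uses the open-source pytorch-tabnet package; data and settings are placeholders.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20)).astype(np.float32)   # stand-in for vital signs/labs
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = TabNetClassifier(n_d=16, n_a=16, n_steps=4, seed=0)
clf.fit(X_tr, y_tr, eval_set=[(X_te, y_te)], eval_metric=["auc"],
        max_epochs=50, patience=10)

proba = clf.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, proba))
print("AUPRC:", average_precision_score(y_te, proba))

# TabNet's built-in feature attributions are what make predictions interpretable:
explain_matrix, masks = clf.explain(X_te)
```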

Collective Navigation Through a Narrow Gap for a Swarm of UAVs Using Curriculum-Based Deep Reinforcement Learning (커리큘럼 기반 심층 강화학습을 이용한 좁은 틈을 통과하는 무인기 군집 내비게이션)

  • Myong-Yol Choi;Woojae Shin;Minwoo Kim;Hwi-Sung Park;Youngbin You;Min Lee;Hyondong Oh
    • The Journal of Korea Robotics Society / v.19 no.1 / pp.117-129 / 2024
  • This paper introduces collective navigation through a narrow gap using a curriculum-based deep reinforcement learning algorithm for a swarm of unmanned aerial vehicles (UAVs). Collective navigation in complex environments is essential for applications such as search and rescue, environmental monitoring, and military operations. Conventional methods, which are easily interpretable from an engineering perspective, divide the navigation task into mapping, planning, and control; however, they struggle with increased latency and unmodeled environmental factors. Recently, learning-based methods have addressed these problems with end-to-end neural network frameworks. Nonetheless, most existing learning-based approaches face challenges in complex scenarios, particularly when navigating through a narrow gap or when a leader or informed UAV is unavailable. Our approach uses information from a fixed number of nearest neighboring UAVs and incorporates a task-specific curriculum to reduce learning time and train a robust model. The effectiveness of the proposed algorithm is verified through an ablation study and quantitative metrics. Simulation results demonstrate that our approach outperforms existing methods.
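
The task-specific curriculum can be pictured as a schedule that narrows the gap as the policy's success rate improves. The sketch below is purely illustrative: the gap widths, success threshold, and the train_one_stage stub are assumptions standing in for the paper's actual RL training loop.

```python
# Illustrative curriculum schedule: train on progressively narrower gaps,
# advancing only when the current stage's success rate clears a threshold.
# Gap widths, threshold, and the training stub are assumptions for this sketch.
import random

GAP_WIDTHS = [8.0, 6.0, 4.0, 2.5, 1.5]   # meters, wide -> narrow (hypothetical)
SUCCESS_THRESHOLD = 0.9

def train_one_stage(gap_width: float, episodes: int = 100) -> float:
    """Stand-in for RL training in a simulator; returns the success rate.
    Real code would roll out the swarm policy with observations of the
    k nearest neighbors and update it; here we fake a plausible rate."""
    return min(1.0, random.uniform(0.7, 1.0) * (4.0 / gap_width + 0.5))

stage = 0
while stage < len(GAP_WIDTHS):
    rate = train_one_stage(GAP_WIDTHS[stage])
    print(f"gap={GAP_WIDTHS[stage]:.1f} m, success={rate:.2f}")
    if rate >= SUCCESS_THRESHOLD:
        stage += 1          # stage mastered: narrow the gap
    # else: repeat the same stage with further training
```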

Interpretable Visual Question Answering via Explain Sentence Generation (설명 문장 생성을 통한 해석 가능한 시각적 질의응답 모델 분석)

  • Kim, Danil;Han, Bohyung
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.359-362 / 2020
  • In this study, we design an interpretable visual question answering (VQA) model that generates explanation sentences and present methods for training it. An explanation sentence is expected to entail the combination of the image and question information needed for answer prediction, the relevant logical information, and the answer-inference process. Based on a basic VQA model that includes an explanation-generation step, we analyze the interaction between explanation generation and answer prediction under several training schemes. We then propose an improved VQA model that actively exploits this interaction. Based on the training results, we also discuss directions for advancing VQA models by using the properties of explanation sentences to improve the VQA reasoning process. Through our experiments, we provide an interpretable VQA model that presents explanation sentences appropriate to its answer predictions.
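
A minimal PyTorch sketch of the joint objective such a model implies: one fused image-question feature drives both an answer classifier and an explanation decoder, and the two losses are combined. All module sizes, the decoder choice, and the weighting factor are assumptions, not the paper's design.

```python
# Hypothetical sketch of a VQA model with an explanation-generation head.
# A shared fused feature drives both answer prediction and explanation
# decoding; training combines the two losses. Sizes/weights are assumptions.
import torch
import torch.nn as nn

class ExplainableVQA(nn.Module):
    def __init__(self, feat_dim=512, n_answers=1000, vocab=5000):
        super().__init__()
        self.fuse = nn.Linear(feat_dim * 2, feat_dim)      # image+question fusion
        self.answer_head = nn.Linear(feat_dim, n_answers)  # answer classifier
        self.embed = nn.Embedding(vocab, feat_dim)
        self.decoder = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.word_head = nn.Linear(feat_dim, vocab)        # explanation words

    def forward(self, img_feat, q_feat, expl_tokens):
        h = torch.relu(self.fuse(torch.cat([img_feat, q_feat], dim=-1)))
        answer_logits = self.answer_head(h)
        # Teacher-forced explanation decoding, initialized from the fused state.
        emb = self.embed(expl_tokens[:, :-1])
        out, _ = self.decoder(emb, h.unsqueeze(0))
        word_logits = self.word_head(out)
        return answer_logits, word_logits

model = ExplainableVQA()
img, q = torch.randn(4, 512), torch.randn(4, 512)
expl = torch.randint(0, 5000, (4, 12))
ans = torch.randint(0, 1000, (4,))

a_logits, w_logits = model(img, q, expl)
ce = nn.CrossEntropyLoss()
# Joint loss: answer classification + next-word explanation generation.
loss = ce(a_logits, ans) + 0.5 * ce(w_logits.reshape(-1, 5000), expl[:, 1:].reshape(-1))
loss.backward()
```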

Interpretable Deep Learning Based On Prototype Generation (프로토타입 생성 기반 딥 러닝 모델 설명 방법)

  • Park, Jae-hun;Kim, Kwang-su
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.23-26 / 2022
  • Deep learning models are black boxes that cannot provide the rationale behind their predictions, which undermines their trustworthiness. To address this, explainable AI (XAI), which aims to endow deep learning models with explanatory power, is an active area of research. This paper presents a deep learning model that explains its predictions through prototypes, i.e., in the form "the given image is a T-shirt because it resembles the prototype representing the typical shape of a T-shirt." The model consists of an Encoder, a Prototype Layer, and a Classifier. The Encoder extracts features, and the Classifier performs the classification task. To explain a classification result, the model finds the most similar prototype in the Prototype Layer and presents it as the explanation. Experiments show that the prototype-generation-based explanation model achieves prediction accuracy comparable to existing image classification models while also providing explanations for its predictions.
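
The Encoder → Prototype Layer → Classifier pipeline the abstract describes can be sketched as below. The distance-to-similarity transform and all dimensions follow the general prototype-network idea (as in ProtoPNet) and are assumptions, not the paper's exact design.

```python
# Hypothetical Encoder -> Prototype Layer -> Classifier sketch in PyTorch.
# A prediction is explained by the learned prototype most similar to the input.
import torch
import torch.nn as nn

class PrototypeNet(nn.Module):
    def __init__(self, in_dim=784, feat_dim=64, n_protos=10, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        # Learnable prototype vectors living in the feature space.
        self.prototypes = nn.Parameter(torch.randn(n_protos, feat_dim))
        self.classifier = nn.Linear(n_protos, n_classes)

    def forward(self, x):
        z = self.encoder(x)                                  # (B, feat_dim)
        d = torch.cdist(z, self.prototypes) ** 2             # squared distances
        sim = torch.log((d + 1.0) / (d + 1e-4))              # close -> high score
        return self.classifier(sim), d

model = PrototypeNet()
x = torch.randn(8, 784)
logits, dist = model(x)
# The nearest prototype serves as the explanation for each prediction:
nearest = dist.argmin(dim=1)
print("predicted class:", logits.argmax(dim=1).tolist())
print("explaining prototype index:", nearest.tolist())
```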

Industrial Process Monitoring and Fault Diagnosis Based on Temporal Attention Augmented Deep Network

  • Mu, Ke;Luo, Lin;Wang, Qiao;Mao, Fushun
    • Journal of Information Processing Systems / v.17 no.2 / pp.242-252 / 2021
  • Following the intuition that local information at individual time instances is hardly incorporated into the posterior sequence in long short-term memory (LSTM), this paper proposes an attention-augmented mechanism for fault diagnosis of complex chemical process data. Unlike conventional fault diagnosis and classification methods, an attention mechanism layer is introduced to detect and focus on local temporal information. The augmented deep network preserves the importance and contribution of each local instance and enables interpretable feature representation and classification simultaneously. Comprehensive comparative analyses demonstrate that the developed model achieves a high-quality fault classification rate of 95.49% on average, comparable to results obtained using various other techniques on the Tennessee Eastman benchmark process.
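
A minimal sketch of an attention layer placed over LSTM outputs, so that each time instance's contribution to the classification is explicit; the scoring function and dimensions (which loosely echo the Tennessee Eastman setting of 52 variables and 21 fault classes) are assumptions.

```python
# Hypothetical temporal-attention-augmented LSTM for fault classification.
# Attention weights over time steps expose each instance's contribution,
# which is what makes the representation interpretable. Sizes are assumptions.
import torch
import torch.nn as nn

class AttentiveLSTMClassifier(nn.Module):
    def __init__(self, n_vars=52, hidden=64, n_faults=21):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # per-time-step relevance score
        self.head = nn.Linear(hidden, n_faults)

    def forward(self, x):                        # x: (B, T, n_vars)
        h, _ = self.lstm(x)                      # (B, T, hidden)
        w = torch.softmax(self.score(h), dim=1)  # (B, T, 1), sums to 1 over T
        context = (w * h).sum(dim=1)             # attention-weighted summary
        return self.head(context), w.squeeze(-1)

model = AttentiveLSTMClassifier()
x = torch.randn(4, 100, 52)                      # e.g., 100 time steps, 52 sensors
logits, attn = model(x)
print(attn[0].topk(3))                           # most influential time steps
```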

A review of Explainable AI Techniques in Medical Imaging (의료영상 분야를 위한 설명가능한 인공지능 기술 리뷰)

  • Lee, DongEon;Park, ChunSu;Kang, Jeong-Woon;Kim, MinWoo
    • Journal of Biomedical Engineering Research / v.43 no.4 / pp.259-270 / 2022
  • Artificial intelligence (AI) has been studied in various fields of medical imaging. Currently, state-of-the-art deep learning (DL) techniques have led to high diagnostic accuracy and fast computation. However, they are rarely used in real clinical practice because of a lack of reliability in their results. Most DL models achieve high performance by extracting features from large volumes of data, but increasing model complexity and nonlinearity turn such models into black boxes that are seldom accessible, interpretable, or transparent. As a result, scientific interest in the field of explainable artificial intelligence (XAI) is gradually growing. This study reviews diverse XAI approaches currently exploited in medical imaging. We identify the concepts behind the methods, introduce studies applying them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and finally discuss the limitations and challenges XAI faces for future studies.
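
As one concrete instance of the attribution methods such reviews cover, here is a minimal Grad-CAM sketch (our illustration, not taken from the paper). It uses an untrained torchvision ResNet and a random input purely to show the mechanics; a real medical-imaging use would substitute a trained model and an actual CT/MRI slice.

```python
# Minimal Grad-CAM sketch (illustrative; not from the reviewed paper).
# Weights the last conv feature maps by pooled gradients of the class score.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # untrained; mechanics only
feats, grads = {}, {}

def fwd_hook(_, __, out): feats["a"] = out
def bwd_hook(_, gin, gout): grads["a"] = gout[0]

layer = model.layer4[-1]
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)         # stand-in for a CT/MRI slice
score = model(x)[0].max()               # top class score
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = torch.relu((w * feats["a"]).sum(dim=1))   # (1, H, W) class activation map
cam = cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
print(cam.shape)
```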

Temporal Fusion Transformers and Deep Learning Methods for Multi-Horizon Time Series Forecasting (Temporal Fusion Transformers와 심층 학습 방법을 사용한 다층 수평 시계열 데이터 분석)

  • Kim, InKyung;Kim, DaeHee;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering / v.11 no.2 / pp.81-86 / 2022
  • Given that time series are used in various fields such as finance, IoT, and manufacturing, analytical methods for accurate time-series forecasting can increase operational efficiency. Among time-series analysis methods, multi-horizon forecasting provides a better understanding of the data because it can extract meaningful statistics and other characteristics of the entire series. Furthermore, time-series data with exogenous information can be accurately predicted with multi-horizon forecasting methods. However, traditional deep learning-based models for time series do not account for the heterogeneity of inputs. We propose an improved time-series forecasting method, the temporal fusion transformer (TFT), which combines multi-horizon forecasting with interpretable insights into temporal dynamics. Various real-world data, such as stock prices, fine dust concentrations, and electricity consumption, were considered in the experiments. Experimental results show that the temporal fusion transformer method outperforms existing models in time-series forecasting.
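
The TFT family trains against multiple quantiles across the forecast horizon; below is a minimal sketch of that quantile (pinball) loss, with the batch/horizon shapes and quantile choices as assumptions.

```python
# Hypothetical sketch of the quantile (pinball) loss used to train
# multi-horizon forecasters such as the Temporal Fusion Transformer.
import torch

def quantile_loss(y, y_hat, quantiles=(0.1, 0.5, 0.9)):
    """y: (B, H) targets; y_hat: (B, H, Q) one forecast per quantile."""
    losses = []
    for i, q in enumerate(quantiles):
        err = y - y_hat[..., i]
        # Penalize under-prediction by q and over-prediction by (1 - q).
        losses.append(torch.max(q * err, (q - 1) * err))
    return torch.mean(torch.stack(losses))

y = torch.randn(8, 24)            # e.g., a 24-step forecast horizon
y_hat = torch.randn(8, 24, 3)     # model outputs for the three quantiles
print(quantile_loss(y, y_hat))
```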

Unleashing the Potential of Vision Transformer for Automated Bone Age Assessment in Hand X-rays (자동 뼈 연령 평가를 위한 비전 트랜스포머와 손 X 선 영상 분석)

  • Kyunghee Jung;Sammy Yap Xiang Bang;Nguyen Duc Toan;Hyunseung Choo
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.687-688 / 2023
  • Bone age assessment is a crucial task in pediatric radiology for assessing growth and development in children. In this paper, we explore the potential of the Vision Transformer, a state-of-the-art deep learning model, for bone age assessment using X-ray images. We generate heatmap outputs with a pre-trained Vision Transformer on a publicly available dataset of hand X-ray images and show that the model tends to focus on the hand as a whole and specifically on the bone regions of the image, indicating its potential for accurately identifying regions of interest for bone age assessment without pre-processing to remove background noise. We also suggest two methods for extracting the region of interest from the heatmap output. Our study suggests that the Vision Transformer holds great potential for bone age assessment using X-ray images, as it can provide accurate and interpretable output that may assist radiologists in identifying potential abnormalities or areas of interest in the X-ray image.
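
One way the heatmap-then-ROI step could be realized is attention rollout over the per-layer attention maps, followed by a simple threshold. This is our illustration, not the paper's method: the 14×14 patch grid, 12 layers/heads, and random stand-in attention tensors are all assumptions.

```python
# Illustrative attention rollout + threshold ROI extraction for a ViT heatmap.
# Assumes you can collect per-layer attention maps (B, heads, N, N) from the
# model; random tensors stand in for them here. A 14x14 patch grid is assumed.
import torch

def attention_rollout(attns):
    """attns: list of (B, heads, N, N) softmax attention maps, one per layer."""
    B, _, N, _ = attns[0].shape
    rollout = torch.eye(N).expand(B, N, N)
    for a in attns:
        a = a.mean(dim=1)                        # average over heads
        a = a + torch.eye(N)                     # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)      # re-normalize rows
        rollout = a @ rollout
    return rollout

layers = [torch.softmax(torch.randn(1, 12, 197, 197), dim=-1) for _ in range(12)]
roll = attention_rollout(layers)
heat = roll[0, 0, 1:].reshape(14, 14)            # CLS token -> patch heatmap
roi_mask = heat > heat.mean() + heat.std()       # crude ROI: high-attention cells
print(roi_mask.nonzero().tolist())
```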

An Interpretable Log Anomaly System Using Bayesian Probability and Closed Sequence Pattern Mining (베이지안 확률 및 폐쇄 순차패턴 마이닝 방식을 이용한 설명가능한 로그 이상탐지 시스템)

  • Yun, Jiyoung;Shin, Gun-Yoon;Kim, Dong-Wook;Kim, Sang-Soo;Han, Myung-Mook
    • Journal of Internet Computing and Services / v.22 no.2 / pp.77-87 / 2021
  • With the development of the Internet and personal computers, increasingly varied and complex attacks have emerged. As attacks become more complex, signature-based detection becomes difficult, which has led to research on behavior-based log anomaly detection. Recent work utilizes deep learning to learn the order of log events and shows good performance. Despite its good performance, it does not provide any explanation for its predictions. This lack of explanation makes it difficult to find contamination in the data or vulnerabilities in the model itself, and as a result users lose trust in the model. To address this problem, this work proposes an explainable log anomaly detection system. In this study, log parsing proceeds first. Afterward, sequential rules are extracted via Bayesian posterior probability, yielding a rule set of the form "if condition, then result, with posterior probability." If a sample matches the rule set it is normal; otherwise it is an anomaly. We use HDFS datasets for the experiment, achieving an F1 score of 92.7% on the test dataset.
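
A minimal sketch of the rule-matching idea: estimate the posterior probability of each "if previous event, then next event" rule from normal logs and flag sequences containing unseen or low-probability transitions. The log keys, Laplace smoothing, and threshold are assumptions; the actual system additionally uses closed sequential pattern mining over longer patterns.

```python
# Hypothetical sketch of "if condition then result, posterior probability"
# rules for log anomaly detection. Log keys, smoothing, and the threshold
# are assumptions standing in for the paper's mined closed sequential patterns.
from collections import defaultdict

def build_rules(sequences, alpha=1.0):
    """Posterior P(next | prev) with Laplace smoothing, from normal logs."""
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for seq in sequences:
        vocab.update(seq)
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    rules = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values()) + alpha * len(vocab)
        for nxt in vocab:
            rules[(prev, nxt)] = (nxts.get(nxt, 0) + alpha) / total
    return rules

def is_anomaly(seq, rules, threshold=0.05):
    """Anomalous if any transition is unseen or has a low posterior probability."""
    return any(rules.get((p, n), 0.0) < threshold for p, n in zip(seq, seq[1:]))

normal_logs = [["open", "read", "close"], ["open", "write", "close"]] * 50
rules = build_rules(normal_logs)
print(is_anomaly(["open", "read", "close"], rules))    # False: matches the rules
print(is_anomaly(["open", "delete", "close"], rules))  # True: unseen transition
```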