Title/Summary/Keyword: Explainable

Understanding Interactive and Explainable Feedback for Supporting Non-Experts with Data Preparation for Building a Deep Learning Model

  • Kim, Yeonji; Lee, Kyungyeon; Oh, Uran
    • International Journal of Advanced Smart Convergence, v.9 no.2, pp.90-104, 2020
  • It is difficult for non-experts to build machine learning (ML) models at a level that satisfies their needs. Deep learning models are even more challenging because it is unclear how to improve them, and a trial-and-error approach is not feasible since training these models is time-consuming. To assist such novice users, we examined how interactive and explainable feedback during the training of a deep learning network contributes to model performance and user satisfaction, focusing on the data preparation process. We conducted a user study with 31 participants without ML expertise, who were asked to improve the accuracy of a deep learning model under varying feedback conditions. While no significant performance gain was observed, we identified potential barriers during the process and found that interactive and explainable feedback provide complementary benefits for improving users' understanding of ML. We conclude with implications for designing an interface for building ML models for novice users.

Explainable Machine Learning-Based Packed Red Blood Cell Transfusion Prediction and Evaluation for Major Internal Medical Conditions

  • Lee, Seongbin; Lee, Seunghee; Chang, Duhyeuk; Song, Mi-Hwa; Kim, Jong-Yeup; Lee, Suehyun
    • Journal of Information Processing Systems, v.18 no.3, pp.302-310, 2022
  • Efficient use of limited blood products is becoming very important in terms of socioeconomic cost and patient recovery. To predict the appropriateness of patient-specific transfusions for intensive care unit (ICU) patients who require real-time monitoring, we evaluated a model that dynamically predicts the possibility of transfusion using the Medical Information Mart for Intensive Care III (MIMIC-III), a database of ICU admission records at Harvard Medical School. In this study, we developed an explainable machine learning model to predict the possibility of red blood cell transfusion for major medical diseases in the ICU. Target disease groups that received packed red blood cell transfusions at high frequency were selected, and 16,222 patients were ultimately extracted. The prediction model achieved an area under the ROC curve of 0.9070 and an F1-score of 0.8166 (LightGBM). To explain the model's predictions, feature importance analysis and partial dependence plots were used. The results of our study can serve as basic data for recommendations on the adequacy of blood transfusions and are expected to ultimately contribute to patient recovery and the prevention of excessive consumption of blood products.
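
To make the described pipeline concrete, the following Python sketch mirrors its steps: a LightGBM classifier evaluated with AUROC and F1, then feature importance and a partial dependence plot as the explanation tools. The data and feature names are hypothetical stand-ins for the MIMIC-III variables, not the paper's actual features.

```python
# Minimal sketch of the paper's steps; features and data are synthetic
# placeholders, not the MIMIC-III variables used in the study.
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.inspection import PartialDependenceDisplay
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "hemoglobin": rng.normal(10.5, 2.0, n),   # g/dL
    "hematocrit": rng.normal(32.0, 5.0, n),   # %
    "heart_rate": rng.normal(90.0, 15.0, n),  # bpm
    "age": rng.integers(18, 90, n).astype(float),
})
# Synthetic label: lower hemoglobin raises transfusion probability.
p = 1.0 / (1.0 + np.exp(2.0 * (X["hemoglobin"] - 9.0)))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, proba))
print("F1   :", f1_score(y_te, proba > 0.5))

# Explanation step 1: split-based feature importance.
for name, imp in zip(X.columns, model.feature_importances_):
    print(name, imp)

# Explanation step 2: partial dependence of the prediction on one feature.
PartialDependenceDisplay.from_estimator(model, X_te, features=["hemoglobin"])
```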

A Review of Explainable AI Techniques in Medical Imaging

  • Lee, DongEon; Park, ChunSu; Kang, Jeong-Woon; Kim, MinWoo
    • Journal of Biomedical Engineering Research, v.43 no.4, pp.259-270, 2022
  • Artificial intelligence (AI) has been studied in various fields of medical imaging. Currently, state-of-the-art deep learning (DL) techniques achieve high diagnostic accuracy and fast computation. However, they are rarely used in real clinical practice because of a lack of reliability in their results. Most DL models achieve high performance by extracting features from large volumes of data, but increasing model complexity and nonlinearity turn such models into black boxes that offer little accessibility, interpretability, or transparency. As a result, scientific interest in the field of explainable artificial intelligence (XAI) is gradually emerging. This study reviews diverse XAI approaches currently exploited in medical imaging. We identify the concepts behind the methods, introduce studies applying them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and lastly discuss limitations and challenges XAI faces in future studies.
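
To ground one family of methods such reviews cover, here is a minimal Grad-CAM-style saliency sketch in PyTorch; class activation mapping is among the most common XAI techniques for imaging. The backbone (resnet18), target layer, and random input standing in for a CT/MRI slice are illustrative assumptions, not choices made by the review.

```python
# Grad-CAM-style heatmap: weight the last conv block's feature maps by
# their average gradients for the predicted class, then ReLU and upsample.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(feat=o))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(feat=go[0]))

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image slice
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top class score

w = gradients["feat"].mean(dim=(2, 3), keepdim=True)      # channel weights
cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```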

A Study on Human-AI Collaboration Process to Support Evidence-Based National Innovation Monitoring: Case Study on Ministry of Oceans and Fisheries

  • Jung Sun Lim; Seoung Hun Bae; Kil-Ho Ryu; Sang-Gook Kim
    • Journal of Korean Society of Industrial and Systems Engineering, v.46 no.2, pp.22-31, 2023
  • Governments around the world are enacting laws mandating explainable traceability when using AI (Artificial Intelligence) to solve real-world problems. HAI (Human-Centric Artificial Intelligence) is an approach that supports human decision-making through Human-AI collaboration. This research presents a case study that implements Human-AI collaboration to achieve explainable traceability in governmental data analysis. The Human-AI collaboration explored in this study performs AI inference to generate labels, followed by AI interpretation to make the results more explainable and traceable. The study used an example dataset from the Ministry of Oceans and Fisheries to reproduce the Human-AI collaboration process used in actual policy-making, in which the Ministry of Science and ICT employed R&D PIE (R&D Platform for Investment and Evaluation) to build a government investment portfolio.

Study on Predictive Models and Mechanism Analysis for Martensite Transformation Temperatures through Explainable Artificial Intelligence

  • Junhyub Jeon; Seung Bae Son; Jae-Gil Jung; Seok-Jae Lee
    • Journal of the Korean Society for Heat Treatment, v.37 no.3, pp.103-113, 2024
  • The martensite volume fraction significantly affects the mechanical properties of alloy steels. The martensite start temperature (Ms), the transformation temperature for 50 vol.% martensite (M50), and the transformation temperature for 90 vol.% martensite (M90) are important transformation temperatures for controlling the martensite phase fraction. Several researchers have proposed empirical equations and machine learning models to predict the Ms temperature. These numerical approaches can predict the Ms temperature easily, without additional experiments or cost. However, to control the martensite phase fraction more precisely, we need to reduce the prediction error of the Ms model and propose prediction models for the other martensite transformation temperatures (M50, M90). In the present study, machine learning models were applied to develop predictive models for the Ms, M50, and M90 temperatures. To explain the prediction mechanisms and assess feature importance for the martensite transformation temperatures, explainable artificial intelligence (XAI) was employed. Among the machine learning models compared, random forest regression (RFR) showed the best performance for predicting the Ms, M50, and M90 temperatures. Feature importances were derived and the prediction mechanisms discussed using XAI.
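
As a sketch of this workflow, the snippet below fits a random forest regressor on hypothetical alloy compositions and explains it with SHAP values. The synthetic Ms target is shaped loosely like Andrews-type empirical equations; the actual feature set, data, and XAI method of the paper (which the abstract does not name) may differ.

```python
# Random forest regression for Ms with SHAP explanations; the data are
# synthetic and the feature set (wt.% compositions) is an assumption.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = pd.DataFrame({
    "C": rng.uniform(0.1, 1.0, n),   # carbon, wt.%
    "Mn": rng.uniform(0.3, 2.0, n),  # manganese, wt.%
    "Ni": rng.uniform(0.0, 4.0, n),
    "Cr": rng.uniform(0.0, 2.0, n),
})
# Synthetic Ms (deg C) with an Andrews-like linear composition dependence.
y = (539 - 423 * X["C"] - 30.4 * X["Mn"] - 17.7 * X["Ni"]
     - 12.1 * X["Cr"] + rng.normal(0, 10, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rfr = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", rfr.score(X_te, y_te))

# SHAP attributes each prediction to per-feature contributions, which is
# how feature importance on the transformation temperature can be read off.
shap_values = shap.TreeExplainer(rfr).shap_values(X_te)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```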

A Comparative Analysis of Ensemble Learning-Based Classification Models for Explainable Term Deposit Subscription Forecasting

  • Shin, Zian; Moon, Jihoon; Rho, Seungmin
    • The Journal of Society for e-Business Studies, v.26 no.3, pp.97-117, 2021
  • Predicting term deposit subscriptions is one of the representative financial marketing tasks in banks, and banks can build a prediction model using various customer information. Many studies based on machine learning techniques have been conducted to improve classification accuracy for term deposit subscriptions. However, even if such models achieve satisfactory performance, using them in industry is not easy when their decision-making process is not adequately explained. To address this issue, this paper proposes an explainable scheme for term deposit subscription forecasting. We first construct several classification models using decision tree-based ensemble learning methods, which yield excellent performance on tabular data, such as random forest, gradient boosting machine (GBM), extreme gradient boosting (XGB), and light gradient boosting machine (LightGBM). We then analyze their classification performance in depth through 10-fold cross-validation. After that, we provide a rationale for interpreting the influence of customer information and the decision-making process by applying Shapley additive explanations (SHAP), an explainable artificial intelligence technique, to the best classification model. To verify the practicality and validity of our scheme, experiments were conducted with the bank marketing dataset provided by Kaggle; we applied SHAP to the GBM and LightGBM models, respectively, under different dataset configurations and then performed analysis and visualization for explainable term deposit subscriptions.
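
A minimal sketch of the proposed scheme follows: a LightGBM classifier scored with 10-fold cross-validation, then SHAP applied to the fitted model. The file path and the "deposit" target column are assumptions about the Kaggle bank marketing dataset's local layout.

```python
# Sketch of the scheme: ensemble classifier + 10-fold CV + SHAP.
import pandas as pd
import shap
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("bank-marketing.csv")            # hypothetical local path
y = (df["deposit"] == "yes").astype(int)          # assumed target column
X = pd.get_dummies(df.drop(columns=["deposit"]))  # one-hot categoricals

clf = LGBMClassifier(n_estimators=400, learning_rate=0.05, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print("10-fold AUROC: %.4f +/- %.4f" % (scores.mean(), scores.std()))

clf.fit(X, y)
sv = shap.TreeExplainer(clf).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv  # older SHAP returns per-class lists
# Ranks customer attributes by their influence on the subscription decision.
shap.summary_plot(sv, X)
```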

A machine learning framework for performance anomaly detection

  • Hasnain, Muhammad; Pasha, Muhammad Fermi; Ghani, Imran; Jeong, Seung Ryul; Ali, Aitizaz
    • Journal of Internet Computing and Services, v.23 no.2, pp.97-105, 2022
  • Web services evolve and integrate rapidly to meet increasing user requirements. They therefore undergo frequent updates and may suffer performance degradation due to undetected faults in the updated versions. Because of these faults, many performance and regression anomalies can occur in real-world scenarios. This paper proposes applying a deep learning model and an explainable framework to detect performance and regression anomalies in web services. The study shows that upper-bound and lower-bound values on performance metrics provide a simple means to detect performance and regression anomalies in updated versions of web services. The explainable deep learning method enabled us to verify precisely how deep learning detects these anomalies. The evaluation results showed that the proposed approach detects unusual web service behavior and is efficient and straightforward in detecting regression anomalies compared with existing approaches.
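
The bound-based detection idea reads directly as code. The sketch below learns an acceptance band for a latency metric from a pre-update baseline and flags post-update observations outside it; the 3-sigma band is an assumed rule, since the abstract does not specify how the bounds are set.

```python
# Flag performance anomalies as metric values outside bounds learned
# from a stable baseline window (3-sigma rule assumed for illustration).
import numpy as np

def fit_bounds(baseline: np.ndarray, k: float = 3.0) -> tuple[float, float]:
    """Lower/upper bounds from baseline response times."""
    mu, sigma = baseline.mean(), baseline.std()
    return mu - k * sigma, mu + k * sigma

def flag_anomalies(observed: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Boolean mask of observations outside the learned band."""
    return (observed < lo) | (observed > hi)

rng = np.random.default_rng(1)
baseline = rng.normal(120.0, 10.0, 1000)       # pre-update latency (ms)
observed = np.r_[rng.normal(120.0, 10.0, 95),  # normal post-update traffic
                 rng.normal(200.0, 10.0, 5)]   # injected regression
lo, hi = fit_bounds(baseline)
print("anomalous samples:", flag_anomalies(observed, lo, hi).sum())
```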

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication, v.14 no.1, pp.142-151, 2022
  • In this paper, a model combined with explainable artificial intelligence (XAI) techniques is presented to secure the reliability of machine learning-based sentiment analysis and prediction. The applicability of the proposed model is tested and described using the IMDB dataset. This approach has the advantage of explaining, from various perspectives, how the data affect the model's prediction results. In various applications of sentiment analysis, such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, it is possible to gain users' trust by presenting more specific, evidence-based analysis results.
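
One common way to pair a sentiment classifier with an explanation layer is shown below: a TF-IDF plus logistic regression pipeline explained per review with LIME. The abstract does not name the paper's specific XAI technique, so LIME and the toy corpus here are illustrative assumptions.

```python
# Word-level explanations for a sentiment prediction via LIME; the tiny
# corpus stands in for the IMDB dataset used in the paper.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["a wonderful, moving film", "dull plot and terrible acting",
         "great cast, great script", "boring and far too long"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("a terrible, boring film",
                                 clf.predict_proba, num_features=4)
print(exp.as_list())  # (word, weight) pairs behind the prediction
```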

Bankruptcy Prediction with Explainable Artificial Intelligence for Early-Stage Business Models

  • Tuguldur Enkhtuya; Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication, v.15 no.3, pp.58-65, 2023
  • Bankruptcy is a significant risk for start-up companies, but with the help of cutting-edge artificial intelligence technology, we can now predict bankruptcy with detailed explanations. In this paper, we implemented the Category Boosting (CatBoost) algorithm after data cleaning and editing with OpenRefine. We further explained our model using the Shapash library, incorporating domain knowledge. By leveraging the five C's of credit domain knowledge, financial analysts in banks or investors can use the detailed results provided by our model to enhance their decision-making, even without extensive knowledge of AI. This empowers investors to identify potential bankruptcy risks in their business models, enabling them to make necessary improvements or reconsider their ventures before proceeding. As a result, our model serves as a "glass-box" model, allowing end users to understand which specific financial indicators contribute to the prediction of bankruptcy. This transparency enhances trust and provides valuable insights for decision-makers in mitigating bankruptcy risks.
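
A sketch of the modeling step under stated assumptions: CatBoost trained on hypothetical financial-ratio features, with the model's built-in feature importances standing in for the Shapash-based explanation described in the abstract.

```python
# CatBoost bankruptcy classifier on synthetic, hypothetical indicators;
# built-in importances illustrate the "glass-box" reading of the model.
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
X = pd.DataFrame({
    "debt_to_equity": rng.uniform(0.0, 3.0, n),
    "current_ratio": rng.uniform(0.2, 3.0, n),
    "cash_flow_margin": rng.normal(0.05, 0.1, n),
    "years_operating": rng.integers(0, 15, n).astype(float),
})
# Synthetic label: leverage up, cash flow and track record down -> risk up.
risk = (0.8 * X["debt_to_equity"] - 5.0 * X["cash_flow_margin"]
        - 0.1 * X["years_operating"])
y = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = CatBoostClassifier(iterations=300, verbose=False)
model.fit(X_tr, y_tr, eval_set=(X_te, y_te))
print("accuracy:", model.score(X_te, y_te))

# Which indicators drive the predictions.
for name, imp in zip(X.columns, model.get_feature_importance()):
    print(f"{name}: {imp:.1f}")
```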

A Gradient-Based Explanation Method for Node Classification Using Graph Convolutional Networks

  • Chaehyeon Kim; Hyewon Ryu; Ki Yong Lee
    • Journal of Information Processing Systems, v.19 no.6, pp.803-816, 2023
  • Explainable artificial intelligence refers to methods that explain how a complex model (e.g., a deep neural network) yields its output from a given input. Recently, graph-type data have been widely used in various fields, and diverse graph neural networks (GNNs) have been developed for them. However, methods to explain the behavior of GNNs have received little study, and only a limited understanding of GNNs is currently available. Therefore, in this paper, we propose an explanation method for node classification using graph convolutional networks (GCNs), a representative type of GNN. The proposed method identifies which features of each node have the greatest influence on the GCN's classification of that node. It does so by backtracking through the layers of the GCN from the output layer to the input layer using gradients. Experimental results on both synthetic and real datasets demonstrate that the proposed method accurately identifies the features of each node that most influence its classification.
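
The core of the method reads naturally as a gradient computation. The sketch below builds a two-layer GCN on a toy graph in plain PyTorch and takes the gradient of one node's class score back to the input features; the architecture and the simple one-shot gradient are illustrative assumptions, as the paper's layer-by-layer backtracking may differ in detail.

```python
# Gradient-based feature attribution for GCN node classification:
# d(class score of node v) / d(input features) ranks influential features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        # Each layer mixes neighbor features via the normalized adjacency.
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

# Toy graph: 4 nodes, symmetric adjacency with self-loops.
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
deg = adj.sum(1)
a_hat = adj / torch.sqrt(deg[:, None] * deg[None, :])  # D^-1/2 A D^-1/2

x = torch.randn(4, 5, requires_grad=True)  # 5 input features per node
model = GCN(5, 8, 3)

logits = model(a_hat, x)
node = 2
logits[node, logits[node].argmax()].backward()

# Gradient magnitudes at the target node's input features.
print(x.grad[node].abs())
```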