• Title/Summary/Keyword: explainable artificial intelligence

Classification of Whole Body Bone Scan Image with Bone Metastasis using CNN-based Transfer Learning (CNN 기반 전이학습을 이용한 뼈 전이가 존재하는 뼈 스캔 영상 분류)

  • Yim, Ji Yeong;Do, Thanh Cong;Kim, Soo Hyung;Lee, Guee Sang;Lee, Min Hee;Min, Jung Joon;Bom, Hee Seung;Kim, Hyeon Sik;Kang, Sae Ryung;Yang, Hyung Jeong
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.8
    • /
    • pp.1224-1232
    • /
    • 2022
  • Whole-body bone scan is the most frequently performed nuclear medicine imaging study for evaluating bone metastasis in cancer patients. We evaluated the performance of a VGG16-based transfer learning classifier for bone scan images in which metastatic bone lesions were present. A total of 1,000 bone scans from 1,000 cancer patients (500 with bone metastasis, 500 without) were evaluated. Bone scans were labeled as abnormal or normal for bone metastasis using medical reports and image review. Subsequently, gradient-weighted class activation maps (Grad-CAMs) were generated for explainable AI. The proposed model achieved an AUROC of 0.96 and an F1-score of 0.90, outperforming VGG16, ResNet50, Xception, DenseNet121, and InceptionV3. Grad-CAM visualizations showed that, when classifying whole-body bone scan images with bone metastases, the proposed model focuses on hot uptakes, which indicate active bone lesions.
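
As a rough illustration of the Grad-CAM step described above, the sketch below derives a class activation map from a VGG16 transfer-learning classifier with a two-class head; the hooked layer, input size, and head are assumptions for the sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone reused via transfer learning, with a 2-class head
# (metastasis / no metastasis); weights tag and head size are assumptions.
model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)
model.eval()

activations, gradients = {}, {}
target_layer = model.features[28]                 # last conv layer of VGG16
target_layer.register_forward_hook(
    lambda m, i, o: activations.update(feat=o))
target_layer.register_full_backward_hook(
    lambda m, gi, go: gradients.update(feat=go[0]))

def grad_cam(image):
    """image: (1, 3, 224, 224) tensor -> normalized heatmap over hot uptakes."""
    logits = model(image)
    cls = logits.argmax(dim=1).item()             # predicted class
    model.zero_grad()
    logits[0, cls].backward()
    # Weight each feature map by its average gradient, then ReLU the weighted sum.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = torch.relu((weights * activations["feat"]).sum(dim=1)).squeeze(0)
    return (cam / (cam.max() + 1e-8)).detach()

heatmap = grad_cam(torch.rand(1, 3, 224, 224))    # upsample and overlay on the scan
```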

Explainable Credit Default Prediction Using SHAP (SHAP을 이용한 설명 가능한 신용카드 연체 예측)

  • Minjoong Kim;Seungwoo Kim;Jihoon Moon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.39-40
    • /
    • 2024
  • This study proposes a method that uses SHAP (SHapley Additive exPlanations) to strengthen the interpretability of a machine learning model that predicts the likelihood of delinquency among credit card users. By analyzing large-scale credit card data, it aims to clarify how factors such as a customer's age, gender, marital status, and payment history affect the occurrence of delinquency. Building on this work, financial institutions can perform more accurate risk management and lay the groundwork for offering customized services to their customers.
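
A minimal sketch of the kind of SHAP workflow the abstract describes, assuming a gradient-boosted classifier and illustrative file/column names (the paper does not publish its code or schema):

```python
import pandas as pd
import shap
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

# Hypothetical file/column names; features assumed already numerically encoded.
df = pd.read_csv("credit_card.csv")
X = df[["age", "gender", "marital_status", "pay_history_1", "pay_history_2"]]
y = df["default_next_month"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LGBMClassifier(n_estimators=300).fit(X_tr, y_tr)

# Per-customer Shapley values: how much each attribute pushed the prediction
# toward or away from delinquency.
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv   # older SHAP returns one array per class
shap.summary_plot(sv, X_te)                  # global view of feature influence
```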

Study on Heat Energy Consumption Forecast and Efficiency Mediated Explainable Artificial Intelligence (XAI) (설명 가능한 인공지능 매개 에너지 수요 예측 및 효율성 연구)

  • Shin, Jihye;Kim, Yunjae;Lee, Sujin;Moon, Hyeonjoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.1218-1221
    • /
    • 2022
  • In response to worldwide demands for carbon neutrality, research on improving energy efficiency to reduce energy consumption has recently been expanding. Energy efficiency is all the more urgent in the broadcasting and media fields. Accordingly, to build an efficient energy system, this study selects a demand forecasting model based on heating energy time-series data and proposes a framework that introduces an explainable artificial intelligence model to identify the factors that influence the demand forecast.
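
The framework pairs a demand forecasting model with an explanation step. Below is a minimal sketch under assumptions the paper does not fix: a gradient-boosting forecaster on lagged heating-load features, explained with permutation importance as a stand-in for the XAI component.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# Hypothetical heating-energy time series with a load and outdoor temperature column.
ts = pd.read_csv("heating_energy.csv", parse_dates=["timestamp"])
ts["hour"] = ts["timestamp"].dt.hour
for lag in (1, 24, 168):                          # hour, day, week lags
    ts[f"load_lag_{lag}"] = ts["load"].shift(lag)
ts = ts.dropna()

X = ts[["hour", "outdoor_temp", "load_lag_1", "load_lag_24", "load_lag_168"]]
y = ts["load"]
split = int(len(ts) * 0.8)                        # time-ordered train/test split
model = GradientBoostingRegressor().fit(X[:split], y[:split])

# Which inputs drive the demand forecast? Shuffle each feature and measure
# how much the held-out error degrades.
result = permutation_importance(model, X[split:], y[split:],
                                n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>14s}: {imp:.3f}")
```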

Explainable Photovoltaic Power Forecasting Scheme Using BiLSTM (BiLSTM 기반의 설명 가능한 태양광 발전량 예측 기법)

  • Park, Sungwoo;Jung, Seungmin;Moon, Jaeuk;Hwang, Eenjun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.8
    • /
    • pp.339-346
    • /
    • 2022
  • Recently, the resource depletion and climate change problems caused by the massive use of fossil fuels for electric power generation have become critical issues worldwide. Accordingly, interest in renewable energy resources that can replace fossil fuels is increasing. In particular, photovoltaic power has been gaining much attention because there is no risk of resource exhaustion compared with other energy resources and there are few restrictions on installing photovoltaic systems. To use the power generated by a photovoltaic system efficiently, a more accurate photovoltaic power forecasting model is required. Although many machine learning and deep learning-based photovoltaic power forecasting models have been proposed, they have shown limited success in terms of interpretability, since deep learning-based forecasting models make it difficult to explain how the forecasting results are derived. To solve this problem, many studies are being conducted on explainable artificial intelligence techniques. If it is possible to interpret how a model derives its results, the reliability of the model can be secured, and the model can be improved to increase forecasting accuracy based on the analysis results. Therefore, in this paper, we propose an explainable photovoltaic power forecasting scheme based on BiLSTM (Bidirectional Long Short-Term Memory) and SHAP (SHapley Additive exPlanations).
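
A minimal sketch of the two ingredients named in the abstract, a BiLSTM forecaster and SHAP attributions; the data shapes and layer sizes are fabricated for the sketch, and GradientExplainer compatibility depends on the installed SHAP/TensorFlow versions.

```python
import numpy as np
import shap
from tensorflow.keras import layers, models

# X: (samples, timesteps, features) windows of past weather/generation values,
# y: next-hour PV output. Shapes are fabricated here for illustration.
timesteps, n_features = 24, 5
X_train = np.random.rand(500, timesteps, n_features).astype("float32")
y_train = np.random.rand(500, 1).astype("float32")

model = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.Bidirectional(layers.LSTM(64)),        # BiLSTM encoder of the window
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                              # forecast PV power
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)

# SHAP values indicate which time steps/features drove each forecast
# (GradientExplainer support varies across SHAP/TensorFlow versions).
background = X_train[np.random.choice(len(X_train), 50, replace=False)]
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(X_train[:10])
print(np.shape(shap_values))                      # per-sample attributions
```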

Autonomous Factory: Future Shape Realized by Manufacturing + AI (제조+AI로 실현되는 미래상: 자율공장)

  • Son, J.Y.;Kim, H.;Lee, E.S.;Park, J.H.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.1
    • /
    • pp.64-70
    • /
    • 2021
  • Future society will be transformed by an artificial intelligence (AI)-based intelligent revolution. To prepare for this future and strengthen industrial competitiveness, countries around the world are implementing policies and strategies to apply AI in the manufacturing industry, which underpins the national economy. Unlike general-purpose AI that targets human intelligence, manufacturing AI technology must ensure industrial accuracy and reliability and must be explainable. This paper presents the future shape of the "autonomous factory" realized through the convergence of manufacturing and AI, and examines the technological issues and research status involved in realizing it across the recognition, planning, execution, and control stages of manufacturing work.

Fault diagnosis of linear transfer robot using XAI

  • Taekyung Kim;Arum Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.121-138
    • /
    • 2024
  • Artificial intelligence is crucial to manufacturing productivity. Understanding the causes of production disruptions, especially in linear feed robot systems, is essential for efficient operations. These mechanical tools, essential for linear movements within systems, are prone to damage and degradation, especially in the LM guide, due to repetitive motions. We examine how explainable artificial intelligence (XAI) can diagnose linear rail clearance and ball screw clearance anomalies in wafer transfer linear robots. XAI helps diagnose problems and explain anomalies, enriching management and operational strategies. By interpreting the reasons for anomaly detection through visualizations such as Class Activation Maps (CAMs) using techniques like Grad-CAM, FG-CAM, and FFT-CAM, and by comparing a 1D-CNN with a 2D-CNN, we illustrate the potential of XAI to enhance diagnostic accuracy. The use of datasets from accelerometer and torque sensors in our experiments validates the high accuracy of the proposed method in binary and ternary classification. This study exemplifies how XAI can elucidate deep learning models trained on industrial signals, offering a practical approach to understanding and applying AI in maintaining the integrity of critical components such as LM guides in linear feed robots.
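
As a hedged illustration of applying Grad-CAM to a 1D-CNN over accelerometer windows (the network, window length, and class count below are assumptions, not the paper's model), the ReLU-weighted sum of feature maps localizes the part of the vibration signal behind a predicted fault class.

```python
import torch
import torch.nn as nn

class Conv1DNet(nn.Module):
    def __init__(self, n_classes=3):              # e.g. ternary fault classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

model = Conv1DNet().eval()
acts, grads = {}, {}
model.features[3].register_forward_hook(lambda m, i, o: acts.update(a=o))
model.features[3].register_full_backward_hook(
    lambda m, gi, go: grads.update(g=go[0]))

signal = torch.randn(1, 1, 2048)                  # one accelerometer window
logits = model(signal)
logits[0, logits.argmax(dim=1).item()].backward()

# Weight each feature map by its mean gradient; the ReLU'd sum highlights the
# segment of the signal responsible for the predicted fault class.
w = grads["g"].mean(dim=2, keepdim=True)
cam = torch.relu((w * acts["a"]).sum(dim=1)).squeeze().detach()
cam = cam / (cam.max() + 1e-8)
```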

Evaluation of Data-based Expansion Joint-gap for Digital Maintenance (디지털 유지관리를 위한 데이터 기반 교량 신축이음 유간 평가)

  • Jongho Park;Yooseong Shin
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.2
    • /
    • pp.1-8
    • /
    • 2024
  • An expansion joint is installed to accommodate the expansion of the superstructure and must maintain a sufficient gap throughout its service life. The detailed guidelines for bridge safety inspection and precise safety diagnosis specify damage caused by an insufficient or excessive gap, but standards for determining abnormal behavior of the superstructure are lacking. In this study, a data-based maintenance approach was proposed by continuously monitoring the gap data of the same expansion joints. A total of 2,756 measurements were collected from 689 expansion joints, taking seasonal effects into account. We developed a method for evaluating changes in the expansion joint gap that analyzes thermal movement from four or more measurements at the same location, classified the factors that affect superstructure behavior, and analyzed the influence of each factor through deep learning and explainable artificial intelligence (AI). Abnormal behavior of the superstructure was classified into narrowing and functional failure using the expansion joint-gap evaluation graph. The influence-factor analysis using deep learning and explainable AI is considered reliable because its results can be explained by the existing expansion gap calculation formula and the bridge design.
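
The abstract notes that the XAI results can be checked against the existing expansion gap calculation formula. Below is a minimal sketch of that kind of check, using the standard thermal movement relation ΔL = αLΔT; the expansion coefficient, expansion length, and tolerance are assumed values for illustration only.

```python
ALPHA_CONCRETE = 1.0e-5          # thermal expansion coefficient per deg C (typical)

def expected_gap_change(expansion_length_m, dT):
    """Expected joint-gap change (mm) when the superstructure warms by dT deg C."""
    # The superstructure elongates by alpha * L * dT, which closes the gap.
    return -ALPHA_CONCRETE * expansion_length_m * 1000.0 * dT

def is_abnormal(measured_change_mm, expansion_length_m, dT, tol_mm=5.0):
    """Flag behavior that deviates from the thermal-movement prediction."""
    expected = expected_gap_change(expansion_length_m, dT)
    return abs(measured_change_mm - expected) > tol_mm

# Example: a 40 m expansion length warming by 25 deg C should close the gap by
# about 10 mm; a measured change far from that suggests abnormal behavior.
print(expected_gap_change(40.0, 25.0))   # -> -10.0 (mm)
print(is_abnormal(-2.0, 40.0, 25.0))     # -> True (possible functional failure)
```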

A Study on Classification Models for Predicting Bankruptcy Based on XAI (XAI 기반 기업부도예측 분류모델 연구)

  • Jihong Kim;Nammee Moon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.333-340
    • /
    • 2023
  • Efficient prediction of corporate bankruptcy is important for financial institutions to make appropriate lending decisions and reduce loan default rates. Many studies have used classification models based on artificial intelligence technology. In the financial industry, however excellent the performance of a new predictive model, it should be accompanied by an intuitive explanation of the basis on which its results were determined. Recently, the US, the EU, and South Korea have all introduced the right to request explanations of algorithmic decisions, so transparency in the use of AI in the financial sector must be secured. In this paper, an interpretable artificial intelligence-based classification model was proposed using publicly available corporate bankruptcy data. First, data preprocessing and 5-fold cross-validation were performed, and classification performance was compared by optimizing 10 supervised learning classification models, including logistic regression, SVM, XGBoost, and LightGBM. LightGBM was confirmed as the best-performing model, and SHAP, an explainable artificial intelligence technique, was applied to provide a post-hoc explanation of the bankruptcy prediction process.
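
A sketch of the reported pipeline shape, a 5-fold comparison of candidate classifiers followed by SHAP on the selected LightGBM model; the file and column names are hypothetical, and only four of the ten compared models are shown.

```python
import pandas as pd
import shap
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC
from xgboost import XGBClassifier

df = pd.read_csv("bankruptcy.csv")                # hypothetical open dataset file
X, y = df.drop(columns=["bankrupt"]), df["bankrupt"]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "svm":      SVC(),
    "xgboost":  XGBClassifier(eval_metric="logloss"),
    "lightgbm": LGBMClassifier(),
}
for name, clf in candidates.items():
    score = cross_val_score(clf, X, y, cv=cv, scoring="f1").mean()
    print(f"{name}: F1 = {score:.3f}")

# Post-hoc explanation of the best model (LightGBM in the paper) with SHAP.
best = candidates["lightgbm"].fit(X, y)
sv = shap.TreeExplainer(best).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv        # class-1 (bankrupt) attributions
shap.summary_plot(sv, X)
```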

Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning

  • Gil-Sun Hong;Miso Jang;Sunggu Kyung;Kyungjin Cho;Jiheon Jeong;Grace Yoojin Lee;Keewon Shin;Ki Duk Kim;Seung Min Ryu;Joon Beom Seo;Sang Min Lee;Namkug Kim
    • Korean Journal of Radiology
    • /
    • v.24 no.11
    • /
    • pp.1061-1080
    • /
    • 2023
  • Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.

Performance improvement of artificial neural network based water quality prediction model using explainable artificial intelligence technology (설명가능한 인공지능 기술을 이용한 인공신경망 기반 수질예측 모델의 성능향상)

  • Lee, Won Jin;Lee, Eui Hoon
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.11
    • /
    • pp.801-813
    • /
    • 2023
  • Recently, as research on Artificial Neural Networks (ANNs) has progressed actively, studies predicting river water quality with ANNs have been conducted. However, it is difficult to analyze the internal operation of an ANN because it is a black-box model. Although eXplainable Artificial Intelligence (XAI) is used to analyze the computational process of ANNs, research applying XAI technology in the field of water resources is insufficient. This study analyzed a Multi-Layer Perceptron (MLP) for predicting Water Temperature (WT), Dissolved Oxygen (DO), hydrogen ion concentration (pH), and Chlorophyll-a (Chl-a) at the Dasan water quality observatory on the Nakdong River using Layer-wise Relevance Propagation (LRP), one of the XAI techniques. The MLP trained on water quality data was analyzed with LRP to select the optimal input data for predicting water quality, and the prediction results of the MLP trained on the optimal input data were then analyzed. When the optimal input data were selected using LRP, the MLP trained on the input data excluding daily precipitation in the surrounding area showed the highest prediction accuracy. The analysis of the MLP's DO predictions indicated that pH and DO had a large influence at the highest point, while WT had a large influence at the lowest point.
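
A minimal sketch of attributing an MLP's water-quality prediction to its inputs with LRP via Captum; the feature set, layer sizes, and target index below are assumptions for illustration, not the study's configuration.

```python
import torch
import torch.nn as nn
from captum.attr import LRP

# Assumed inputs: e.g. upstream WT, DO, pH, Chl-a, flow, precipitation (6 features);
# outputs: WT, DO, pH, Chl-a at the observatory (4 targets).
model = nn.Sequential(
    nn.Linear(6, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 4),
)
model.eval()

x = torch.rand(16, 6, requires_grad=True)         # a batch of input vectors

# Relevance of each input feature to the DO output (index 1): features with
# consistently small relevance are candidates for exclusion from the input set.
lrp = LRP(model)
relevance = lrp.attribute(x, target=1)
print(relevance.mean(dim=0))
```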