• Title/Abstract/Keyword: Explainable

Search results: 154 items

Damage Detection and Damage Quantification of Temporary Works Equipment Based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
    • 인터넷정보학회논문지 / Vol. 25, No. 2 / pp.11-19 / 2024
  • This paper studied a technology for detecting damage to temporary works equipment used at construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly made of steel or aluminum and, because of the nature of these materials, is reused many times. However, since regulations restricting its reuse are not strict, low-quality or degraded equipment is sometimes used, causing accidents at construction sites. Safety rules such as government laws, standards, and regulations for the quality control of temporary works equipment have not yet been established, and inspection results often differ according to the inspector's level of training. To overcome these limitations, a method based on AI and image processing technology was developed. In addition, explainable artificial intelligence (XAI) was applied so that inspectors can make more accurate decisions from the damage-detection results produced by the developed AI model's image analysis. In the experiments, temporary works equipment was photographed with a 4K camera, the artificial intelligence model was trained with 610 labeled images, and its accuracy was tested by analyzing recorded images of temporary works equipment. As a result, the damage-detection accuracy of the XAI model was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the XAI model needs to be trained on more real-world data, and its detection ability must be maintained or improved when such data are applied.
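
The abstract above describes highlighting damage for inspectors but does not name the explanation method; below is a minimal sketch of one common choice, Grad-CAM, on a generic PyTorch classifier. The two-class head ("intact" vs. "damaged") and the random input tensor are illustrative assumptions, not the paper's model or data.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical 2-class damage classifier; in practice the trained
# checkpoint for the equipment model would be loaded here.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # intact / damaged
model.eval()

# Capture activations and gradients of the last conv block.
feats, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

# Random tensor standing in for a preprocessed photo of the equipment.
x = torch.rand(1, 3, 224, 224)

logits = model(x)
logits[0, 1].backward()  # gradient of the "damaged" logit

# Grad-CAM: channel-wise gradient means weight the activation maps.
w = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print("heatmap:", cam.shape)  # overlaid on the photo, shows where the model "looked"
```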

Understanding Interactive and Explainable Feedback for Supporting Non-Experts with Data Preparation for Building a Deep Learning Model

  • Kim, Yeonji;Lee, Kyungyeon;Oh, Uran
    • International Journal of Advanced Smart Convergence / Vol. 9, No. 2 / pp.90-104 / 2020
  • It is difficult for non-experts to build machine learning (ML) models at a level that satisfies their needs. Deep learning models are even more challenging because it is unclear how to improve them, and a trial-and-error approach is not feasible since training these models is time-consuming. To assist such novice users, we examined how interactive and explainable feedback during the training of a deep learning network can contribute to model performance and user satisfaction, focusing on the data preparation process. We conducted a user study with 31 participants without ML expertise, who were asked to improve the accuracy of a deep learning model under varying feedback conditions. While no significant performance gain was observed, we identified potential barriers during the process and found that interactive and explainable feedback provide complementary benefits for improving users' understanding of ML. We conclude with implications for designing an interface for building ML models for novice users.

Explainable Machine Learning-Based Packed Red Blood Cell Transfusion Prediction and Evaluation for Major Internal Medical Conditions

  • Lee, Seongbin;Lee, Seunghee;Chang, Duhyeuk;Song, Mi-Hwa;Kim, Jong-Yeup;Lee, Suehyun
    • Journal of Information Processing Systems / Vol. 18, No. 3 / pp.302-310 / 2022
  • Efficient use of limited blood products is becoming very important for both socioeconomic reasons and patient recovery. To predict the appropriateness of patient-specific transfusions for intensive care unit (ICU) patients who require real-time monitoring, we evaluated a model that dynamically predicts the possibility of transfusion using the Medical Information Mart for Intensive Care III (MIMIC-III), a database of ICU admission records at Harvard Medical School. In this study, we developed an explainable machine learning model to predict the possibility of red blood cell transfusion for major medical diseases in the ICU. Target disease groups that received packed red blood cell transfusions at high frequency were selected, and 16,222 patients were finally extracted. The prediction model achieved an area under the ROC curve of 0.9070 and an F1-score of 0.8166 (LightGBM). To explain the behavior of the machine learning model, feature importance analysis and a partial dependence plot were used. The results of our study can serve as basic data for recommendations on the adequacy of blood transfusions and are expected to ultimately contribute to patient recovery and the prevention of excessive consumption of blood products.
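
As a rough illustration of the modeling setup this abstract describes (a LightGBM classifier evaluated with AUROC and F1, explained via feature importance and partial dependence), here is a minimal sketch on synthetic data; the features and labels are placeholders, not the MIMIC-III extraction.

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

# Synthetic stand-in for the extracted ICU cohort.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))  # e.g. hemoglobin, lactate, heart rate, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) < -0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = lgb.LGBMClassifier(n_estimators=200).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, proba))
print("F1:   ", f1_score(y_te, (proba > 0.5).astype(int)))

# Global explanation: which inputs drive the transfusion prediction.
print("feature importance:", model.feature_importances_)

# Manual partial dependence: average prediction as feature 0 is varied.
for v in np.linspace(X_te[:, 0].min(), X_te[:, 0].max(), 5):
    Xv = X_te.copy()
    Xv[:, 0] = v
    print(f"x0={v:+.2f}  mean P(transfusion)={model.predict_proba(Xv)[:, 1].mean():.3f}")
```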

의료영상 분야를 위한 설명가능한 인공지능 기술 리뷰 (A review of Explainable AI Techniques in Medical Imaging)

  • 이동언;박춘수;강정운;김민우
    • 대한의용생체공학회:의공학회지 / Vol. 43, No. 4 / pp.259-270 / 2022
  • Artificial intelligence (AI) has been studied in various fields of medical imaging. Currently, state-of-the-art deep learning (DL) techniques achieve high diagnostic accuracy and fast computation. However, they are rarely used in real clinical practice because of a lack of reliability in their results. Most DL models achieve high performance by extracting features from large volumes of data, but increasing model complexity and nonlinearity turn such models into black boxes that offer little accessibility, interpretability, or transparency. As a result, scientific interest in the field of explainable artificial intelligence (XAI) is growing. This study reviews diverse XAI approaches currently exploited in medical imaging. We describe the concepts behind the methods, introduce studies applying them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and finally discuss the limitations and challenges XAI faces in future studies.
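
Among the attribution methods such a review typically covers, a vanilla gradient saliency map is the simplest; the sketch below illustrates the idea on an untrained ResNet, with a random tensor standing in for a CT/MRI/endoscopy image.

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in for a diagnostic model
x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "scan"

score = model(x)[0].max()  # score of the top predicted class
score.backward()

# Pixel-level attribution: gradient magnitude w.r.t. the input image.
saliency = x.grad.abs().max(dim=1).values  # (1, 224, 224)
print("saliency map:", saliency.shape, "max:", saliency.max().item())
```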

Human-AI 협력 프로세스 기반의 증거기반 국가혁신 모니터링 연구: 해양수산부 사례 (A Study on Human-AI Collaboration Process to Support Evidence-Based National Innovation Monitoring: Case Study on Ministry of Oceans and Fisheries)

  • 임정선;배성훈;류길호;김상국
    • 산업경영시스템학회지 / Vol. 46, No. 2 / pp.22-31 / 2023
  • Governments around the world are enacting laws mandating explainable traceability when using AI (artificial intelligence) to solve real-world problems. HAI (Human-Centric Artificial Intelligence) is an approach that supports human decision-making through Human-AI collaboration. This research presents a case study that implements Human-AI collaboration to achieve explainable traceability in governmental data analysis. The collaboration explored in this study performs AI inference to generate labels, followed by AI interpretation to make the results more explainable and traceable. The study used an example dataset from the Ministry of Oceans and Fisheries to reproduce the Human-AI collaboration process used in actual policy-making, in which the Ministry of Science and ICT used R&D PIE (R&D Platform for Investment and Evaluation) to build a government investment portfolio.

설명가능한 인공지능을 통한 마르텐사이트 변태 온도 예측 모델 및 거동 분석 연구 (Study on predictive model and mechanism analysis for martensite transformation temperatures through explainable artificial intelligence)

  • 전준협;손승배;정재길;이석재
    • 열처리공학회지 / Vol. 37, No. 3 / pp.103-113 / 2024
  • The martensite volume fraction significantly affects the mechanical properties of alloy steels. The martensite start temperature (Ms), the transformation temperature for 50 vol.% martensite (M50), and the transformation temperature for 90 vol.% martensite (M90) are important transformation temperatures for controlling the martensite phase fraction. Several researchers have proposed empirical equations and machine learning models to predict the Ms temperature; such numerical approaches can predict the Ms temperature easily, without additional experiments or cost. However, to control the martensite phase fraction more precisely, the prediction error of the Ms model must be reduced, and prediction models for the other martensite transformation temperatures (M50, M90) are needed. In the present study, machine learning models were applied to develop predictive models for the Ms, M50, and M90 temperatures. To explain the prediction mechanisms of the machine learning models and identify the feature importance for the martensite transformation temperatures, explainable artificial intelligence (XAI) was employed. Among the models compared, random forest regression (RFR) showed the best performance for predicting the Ms, M50, and M90 temperatures. Feature importances were derived and the prediction mechanisms were discussed using XAI.
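
As an illustration of the pipeline this abstract describes (random forest regression plus XAI-style feature importance), here is a minimal sketch using permutation importance on synthetic composition data; the feature set and the toy Ms relation are assumptions for demonstration only, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["C", "Mn", "Si", "Ni", "Cr"]  # wt.% alloying elements
X = rng.uniform(0, 2, size=(500, 5))
# Toy Ms relation (illustrative only): carbon depresses Ms most strongly.
y = 550 - 350 * X[:, 0] - 40 * X[:, 1] + rng.normal(0, 10, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rfr = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", rfr.score(X_te, y_te))

# XAI step: permutation importance ranks inputs by their effect on error.
imp = permutation_importance(rfr, X_te, y_te, n_repeats=10, random_state=0)
for name, mean in sorted(zip(feature_names, imp.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")
```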

설명 가능한 정기예금 가입 여부 예측을 위한 앙상블 학습 기반 분류 모델들의 비교 분석 (A Comparative Analysis of Ensemble Learning-Based Classification Models for Explainable Term Deposit Subscription Forecasting)

  • 신지안;문지훈;노승민
    • 한국전자거래학회지 / Vol. 26, No. 3 / pp.97-117 / 2021
  • Predicting term deposit subscription is one of a bank's representative financial marketing tasks, and banks can build prediction models using a variety of customer information. To improve the classification accuracy of term deposit subscription, many studies have developed classification models using machine learning techniques. However, even if these models perform satisfactorily, they are difficult to apply in industry unless the rationale behind their decision-making process is adequately explained. To address this problem, this paper proposes an explainable method for predicting term deposit subscription. First, classification models are built with random forest, GBM, XGBoost, and LightGBM, decision tree-based ensemble learning techniques that perform well on tabular data, and their classification performance is analyzed in depth through 10-fold cross-validation. Next, SHAP, an explainable artificial intelligence technique, is applied to the best-performing model to provide grounds for interpreting the influence of each customer attribute and the decision-making process. To demonstrate the practicality and validity of the proposed method, simulation experiments were conducted on the bank marketing dataset provided by Kaggle; depending on the dataset configuration, SHAP was applied to the GBM and LightGBM models, and analysis and visualization for explainable term deposit subscription prediction were performed.
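
A minimal sketch of the procedure described above (comparing tree ensembles with 10-fold cross-validation, then applying SHAP to the best model) might look as follows; the synthetic features stand in for the Kaggle bank marketing data.

```python
import numpy as np
import shap
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # age, balance, call duration, ...
y = (X[:, 0] + X[:, 2] > 1).astype(int)   # 1 = subscribed to a term deposit

models = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    "XGBoost": XGBClassifier(random_state=0),
    "LightGBM": LGBMClassifier(random_state=0),
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=10).mean())  # 10-fold CV

# Explain the chosen model with SHAP (TreeExplainer covers tree ensembles).
best = models["LightGBM"].fit(X, y)
shap_values = shap.TreeExplainer(best).shap_values(X)
print("SHAP values shape:", np.shape(shap_values))
```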

A machine learning framework for performance anomaly detection

  • Hasnain, Muhammad;Pasha, Muhammad Fermi;Ghani, Imran;Jeong, Seung Ryul;Ali, Aitizaz
    • 인터넷정보학회논문지 / Vol. 23, No. 2 / pp.97-105 / 2022
  • Web services evolve rapidly and are continually integrated to meet increasing user requirements. They therefore undergo updates and may suffer performance degradation due to undetected faults in the updated versions, and such faults can cause many performance and regression anomalies in real-world scenarios. This paper proposes applying a deep learning model and an explainable framework to detect performance and regression anomalies in web services. The study shows that upper- and lower-bound values of performance metrics provide a simple means of detecting performance and regression anomalies in updated versions of web services. The explainable deep learning method enabled us to determine precisely how deep learning detects performance anomalies in web services. Evaluation results show that the proposed approach detects unusual web service behavior and is efficient and straightforward in detecting regression anomalies compared with existing approaches.
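
The bound-based idea in this abstract can be illustrated with a short sketch: learn upper and lower bounds for a performance metric from a healthy baseline, then flag post-update observations that fall outside them. The 3-sigma threshold and the response-time numbers are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Healthy baseline: response times (ms) of the previous service version.
baseline = rng.normal(loc=120.0, scale=8.0, size=500)
mu, sigma = baseline.mean(), baseline.std()
lower, upper = mu - 3 * sigma, mu + 3 * sigma  # assumed 3-sigma bounds

# Post-update observations, with a few slow (anomalous) responses mixed in.
updated = np.concatenate([rng.normal(120, 8, 95), rng.normal(180, 5, 5)])
anomalies = (updated < lower) | (updated > upper)
print(f"bounds: [{lower:.1f}, {upper:.1f}] ms")
print(f"flagged {anomalies.sum()} of {updated.size} responses as anomalous")
```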

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication / Vol. 14, No. 1 / pp.142-151 / 2022
  • In this paper, a model combining explainable artificial intelligence (XAI) techniques is presented to ensure the reliability of machine learning-based sentiment analysis and prediction. The applicability of the proposed model was tested and described using the IMDB dataset. This approach has the advantage that it can explain, from various perspectives, how the data affect the prediction results of the model. In the many applications of sentiment analysis, such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, presenting more specific, evidence-based analysis results in this way can help the system gain users' trust.
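
As one simple way to make sentiment predictions explainable, in the spirit of this abstract, the sketch below uses a linear model over TF-IDF features, whose per-word weights expose the evidence behind each prediction; the tiny training set stands in for IMDB, and this is not necessarily the paper's XAI method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny stand-in corpus; the IMDB dataset would be loaded here instead.
train_texts = ["a wonderful, moving film", "dull plot and terrible acting",
               "great performances throughout", "boring and terrible"]
train_labels = [1, 0, 1, 0]  # 1 = positive

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

x = vec.transform(["terrible plot but wonderful acting"])
print("P(positive):", clf.predict_proba(x)[0, 1])

# Explanation: per-word contribution (TF-IDF weight x model coefficient).
contrib = x.toarray()[0] * clf.coef_[0]
for word, c in sorted(zip(vec.get_feature_names_out(), contrib),
                      key=lambda t: -abs(t[1]))[:5]:
    print(f"{word:10s} {c:+.3f}")
```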

Bankruptcy Prediction with Explainable Artificial Intelligence for Early-Stage Business Models

  • Tuguldur Enkhtuya;Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication / Vol. 15, No. 3 / pp.58-65 / 2023
  • Bankruptcy is a significant risk for start-up companies, but with the help of cutting-edge artificial intelligence technology, we can now predict bankruptcy with detailed explanations. In this paper, we implemented the Category Boosting (CatBoost) algorithm after data cleaning and editing with OpenRefine. We further explained our model using the Shapash library, incorporating domain knowledge. By leveraging the five C's of credit as domain knowledge, financial analysts in banks or investors can use the detailed results provided by our model to improve their decision-making, even without extensive knowledge of AI. This empowers investors to identify potential bankruptcy risks in their business models, enabling them to make necessary improvements or reconsider their ventures before proceeding. As a result, our model serves as a "glass-box" model, allowing end-users to understand which specific financial indicators contribute to the prediction of bankruptcy. This transparency enhances trust and provides valuable insights for decision-makers in mitigating bankruptcy risks.
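
A minimal sketch of a comparable pipeline follows: CatBoost on tabular financial indicators, explained with per-feature SHAP values via CatBoost's built-in ShapValues output (the paper itself layers the Shapash library on top of the model). The indicator names and data are illustrative placeholders.

```python
import numpy as np
from catboost import CatBoostClassifier, Pool

rng = np.random.default_rng(0)
cols = ["debt_ratio", "cash_flow", "net_income", "equity", "firm_age"]
X = rng.normal(size=(600, 5))
y = (X[:, 0] - X[:, 1] > 0.8).astype(int)  # 1 = went bankrupt (toy label)

model = CatBoostClassifier(iterations=200, verbose=False)
model.fit(X, y)

# Per-sample SHAP values; the last column is the expected (base) value.
pool = Pool(X, y, feature_names=cols)
shap_vals = model.get_feature_importance(pool, type="ShapValues")
print("mean |SHAP| per feature:")
for name, v in zip(cols, np.abs(shap_vals[:, :-1]).mean(axis=0)):
    print(f"  {name}: {v:.3f}")
```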