• Title/Summary/Keyword: XAI

Search results: 87

Explainable Artificial Intelligence (XAI) Surrogate Models for Chemical Process Design and Analysis (화학 공정 설계 및 분석을 위한 설명 가능한 인공지능 대안 모델)

  • Yuna Ko; Jonggeol Na
    • Korean Chemical Engineering Research / v.61 no.4 / pp.542-549 / 2023
  • With the growing interest in surrogate modeling, there has been continuous research aimed at simulating nonlinear chemical processes using data-driven machine learning. However, the opaque nature of machine learning models, which limits their interpretability, poses a challenge for their practical application in industry. Therefore, this study analyzes chemical processes using Explainable Artificial Intelligence (XAI), a concept that improves interpretability while preserving model accuracy. While conventional sensitivity analysis of chemical processes has been limited to calculating and ranking the sensitivity indices of variables, we propose a methodology that uses XAI not only to perform global and local sensitivity analysis but also to examine the interactions among variables and gain physical insights from the data. For the ammonia synthesis process, the target process of the case study, we set the temperature of the preheater leading to the first reactor and the split ratios of the cold shot to the three reactors as process variables. By integrating Matlab and Aspen Plus, we obtained data on ammonia production and the maximum temperatures of the three reactors while systematically varying the process variables. We then trained tree-based models and performed sensitivity analysis on the most accurate model using SHAP, one of the XAI techniques. The global sensitivity analysis showed that the preheater temperature had the greatest effect, and the local sensitivity analysis provided insights for defining the ranges of process variables to improve productivity and prevent overheating. By constructing surrogate models of chemical processes and using XAI for sensitivity analysis, this work contributes both quantitative and qualitative feedback for process optimization.
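The SHAP attributions this abstract relies on reduce, for small problems, to exact Shapley values, which can be brute-forced directly over feature coalitions. A minimal sketch; the toy surrogate `f` and all values are illustrative, not from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions of f(x) relative to a baseline input.

    For each feature i, average its marginal contribution f(S + {i}) - f(S)
    over all coalitions S of the remaining features, with the classic
    |S|! (n-|S|-1)! / n! weights. Absent features take baseline values.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                z_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(z_with) - f(z_without))
    return phi

# Hypothetical surrogate: "yield" from a preheater temperature and two
# split ratios (purely illustrative stand-in for the process model)
f = lambda z: z[0] * z[1] + 2.0 * z[2]
x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, baseline)
```

By the efficiency property, the attributions sum exactly to `f(x) - f(baseline)`; practical SHAP implementations approximate this computation efficiently for tree ensembles.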

XAI Research Trends Using Social Network Analysis and Topic Modeling (소셜 네트워크 분석과 토픽 모델링을 활용한 설명 가능 인공지능 연구 동향 분석)

  • Gun-doo Moon; Kyoung-jae Kim
    • Journal of Information Technology Applications and Management / v.30 no.1 / pp.53-70 / 2023
  • Artificial intelligence has become part of everyday life rather than the distant future. As artificial intelligence and machine learning have grown more advanced and complex, it has become difficult for people to grasp their structure and the basis for their decisions, because machine learning shows only results, not the whole process. As artificial intelligence developed and became more common, people wanted explanations that could give them trust in it. This study recognized the necessity and importance of explainable artificial intelligence (XAI) and examined trends in XAI research by applying social network analysis and topic modeling to IEEE publications from 2004, when the concept of explainable artificial intelligence was defined, to 2022. Through social network analysis, overall patterns among nodes can be found in a large number of documents, and the connections between keywords reveal the structure of their relationships; topic modeling can identify topics more objectively by extracting keywords from unstructured data. Both analysis methods are well suited to trend analysis. The analysis found that the application of XAI is gradually expanding into various fields beyond machine learning and deep learning.

An XAI approach based on Grad-CAM to analyze learning criteria for DCGANS (DCGAN의 학습 기준을 분석하기 위한 Grad-CAM 기반의 XAI 접근 방법)

  • Jin-Ju Ok
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.479-480 / 2023
  • Generative AI models make it difficult to identify the criteria by which they learn. We analyze a DCGAN and propose one method for judging the generator's learning criteria through the discriminator. In the process, we use the XAI technique Grad-CAM to analyze which parts the model emphasizes during training, and introduce a method for distinguishing data suitable for training from data that is not.
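The Grad-CAM technique named above boils down to gradient-weighted activation maps. A minimal NumPy sketch of that core step, assuming the discriminator's conv-layer activations and the gradients of the target score with respect to them have already been extracted (the random arrays are illustrative placeholders):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: arrays of shape (K, H, W) from one conv layer.
    Returns an (H, W) heatmap of regions that increase the target score."""
    # Global-average-pool the gradients: one importance weight per channel
    alpha = gradients.mean(axis=(1, 2))                        # shape (K,)
    # Weighted sum of activation maps; ReLU keeps positive evidence only
    cam = np.maximum((alpha[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization (guard against an all-zero map)
    return cam / cam.max() if cam.max() > 0 else cam

# Toy stand-ins for real conv-layer outputs and their gradients
rng = np.random.default_rng(0)
cam = grad_cam(rng.random((4, 8, 8)), rng.random((4, 8, 8)))
```

In practice the heatmap is upsampled to the input image size and overlaid on the sample being judged real or fake.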

XAI Technology Trends for AI Reliability (AI 신뢰성을 위한 XAI 기술 동향)

  • Sim, Hye-Jin; Choi, Chang-Woo; Kim, Ho-Won
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.405-407 / 2022
  • With the advent of the Fourth Industrial era and the rapid advancement of artificial intelligence, AI has been adopted across diverse industries, improving work efficiency and playing an important role in human progress. However, as AI's role throughout society grows, so do the problems caused by its misjudgments and malfunctions. The importance of XAI technology has therefore come to the fore as a way to secure trust in the judgments and actions of AI models. This paper surveys and analyzes trends in XAI technology.

Combining AutoML and XAI: Automating machine learning models and improving interpretability (AutoML 과 XAI 의 결합 : 기계학습 모델의 자동화와 해석력 향상을 위하여)

  • Min Hyeok Son; Nam Hun Kim; Hyeon Ji Lee; Do Yeon Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.924-925 / 2023
  • This study addresses the growing complexity of machine learning models and the interpretability problem of models perceived as "black boxes." To resolve it, we used AutoML to search efficiently for an optimal model and introduced XAI techniques to secure transparency in the model's prediction process. We confirmed that the XAI-based approach provides superior interpretability compared with conventional methods and allows users to clearly understand the basis and validity of the model's predictions.
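The AutoML-then-XAI workflow described above can be sketched in miniature: select the best of several candidate models by validation error, then explain the winner. Here both the "AutoML" search and the permutation-importance explainer are deliberately simplified stand-ins, not the paper's actual tooling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1, not on feature 2
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

def fit_linear(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
    return lambda Z: Z @ w

def fit_mean(X, y):                             # trivial baseline model
    m = y.mean()
    return lambda Z: np.full(len(Z), m)

# "AutoML" in miniature: pick the candidate with the lowest validation MSE
Xtr, Xval, ytr, yval = X[:150], X[150:], y[:150], y[150:]
models = {name: fit(Xtr, ytr) for name, fit in
          [("linear", fit_linear), ("mean", fit_mean)]}
best = min(models, key=lambda n: np.mean((models[n](Xval) - yval) ** 2))

# XAI step: permutation importance of the selected model
def permutation_importance(predict, X, y):
    base = np.mean((predict(X) - y) ** 2)
    imps = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])    # break the feature's link to y
        imps.append(np.mean((predict(Xp) - y) ** 2) - base)
    return imps

imp = permutation_importance(models[best], Xval, yval)
```

Real AutoML frameworks search over model families and hyperparameters, but the contract is the same: an automatically selected model goes in, and per-feature explanations come out.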

EC/CALS-related Projects in the Engineering Information Systems Lab

  • Fulton, Robert E.; Peak, Russell S.
    • Proceedings of the CALSEC Conference / 1999.07a / pp.147-164 / 1999
  • ㆍ Strong emphasis on X-analysis integration (XAI/DAI) ㆍ Multi-Representation Architecture (MRA): addressing fundamental XAI/DAI issues; general methodology → flexibility & broad application (omitted)


A Personal Credit Rating Using Convolutional Neural Networks with Transformation of Credit Data to Imaged Data and eXplainable Artificial Intelligence(XAI) (신용 데이터의 이미지 변환을 활용한 합성곱 신경망과 설명 가능한 인공지능(XAI)을 이용한 개인신용평가)

  • Won, Jong Gwan; Hong, Tae Ho; Bae, Kyoung Il
    • The Journal of Information Systems / v.30 no.4 / pp.203-226 / 2021
  • Purpose The purpose of this study is to improve the accuracy of personal credit scoring using convolutional neural networks and to secure the transparency of the deep learning model using an eXplainable Artificial Intelligence (XAI) technique. Design/methodology/approach This study built a classification model using convolutional neural networks (CNN) and applied a methodology that transforms numerical data into imaged data so that a CNN can be applied to personal credit data. Layer-wise relevance propagation (LRP) was then applied to the constructed model to find which variables most influence the output value. Findings According to the empirical analysis, the accuracy of the CNN model was the highest among the compared models using logistic regression, neural networks, and support vector machines. In addition, with LRP, one of the XAI techniques, the variables that strongly influence the output value for each observation could be identified.
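LRP, as used above, redistributes an output relevance score backwards through each layer while approximately conserving its total. A minimal sketch of the LRP-epsilon rule for a single dense layer (the small arrays are illustrative, not the paper's network):

```python
import numpy as np

def lrp_dense(a, W, R_out, eps=1e-9):
    """LRP-epsilon rule for one dense layer.

    a: (J,) input activations, W: (J, K) weight matrix,
    R_out: (K,) relevance of the layer's outputs.
    Returns (J,) relevance redistributed onto the inputs, in proportion to
    each input's contribution a_j * w_jk to every pre-activation z_k.
    """
    z = a @ W                               # pre-activations, shape (K,)
    s = R_out / (z + eps * np.sign(z))      # stabilized relevance per unit of z
    return a * (W @ s)                      # R_j = a_j * sum_k w_jk * s_k

# Toy layer: 3 inputs, 2 outputs
a = np.array([1.0, 2.0, -0.5])
W = np.array([[0.3, -0.2],
              [0.1,  0.4],
              [0.5,  0.2]])
R_out = np.array([1.0, 0.5])
R_in = lrp_dense(a, W, R_out)
```

Applying this rule layer by layer down to the (imaged) input yields a per-pixel relevance map, which is then mapped back to the original credit variables.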

A review of Explainable AI Techniques in Medical Imaging (의료영상 분야를 위한 설명가능한 인공지능 기술 리뷰)

  • Lee, DongEon; Park, ChunSu; Kang, Jeong-Woon; Kim, MinWoo
    • Journal of Biomedical Engineering Research / v.43 no.4 / pp.259-270 / 2022
  • Artificial intelligence (AI) has been studied in various fields of medical imaging. Currently, state-of-the-art deep learning (DL) techniques achieve high diagnostic accuracy and fast computation. However, they are rarely used in real clinical practice because of a lack of reliability in their results. Most DL models achieve high performance by extracting features from large volumes of data, but increasing model complexity and nonlinearity turn such models into black boxes that are seldom accessible, interpretable, or transparent. As a result, scientific interest in the field of explainable artificial intelligence (XAI) is gradually growing. This study reviews diverse XAI approaches currently exploited in medical imaging. We identify the concepts behind the methods, introduce studies applying them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and finally discuss limitations and challenges that XAI faces in future studies.

Injection Process Yield Improvement Methodology Based on eXplainable Artificial Intelligence (XAI) Algorithm (XAI(eXplainable Artificial Intelligence) 알고리즘 기반 사출 공정 수율 개선 방법론)

  • Ji-Soo Hong; Yong-Min Hong; Seung-Yong Oh; Tae-Ho Kang; Hyeon-Jeong Lee; Sung-Woo Kang
    • Journal of Korean Society for Quality Management / v.51 no.1 / pp.55-65 / 2023
  • Purpose: The purpose of this study is to propose an optimization process that improves product yield using process data. Recently, research on low-cost, high-efficiency production in manufacturing processes using machine learning or deep learning has continued. This study therefore derives the major variables that affect product defects in the manufacturing process using an eXplainable Artificial Intelligence (XAI) method, and then presents the optimal ranges of those variables to propose a methodology for improving product yield. Methods: This study uses the injection molding machine AI dataset released on the Korea AI Manufacturing Platform (KAMP) organized by KAIST. Using the XAI-based SHAP method, the major variables affecting product defects are extracted from the process data. XGBoost and LightGBM were used as learning algorithms, and 5-6 variables were extracted as the main process variables for the injection process. Subsequently, the optimal control range of each process variable is presented using the ICE method. Finally, the yield improvement methodology of this study is validated on test data. Results: The results are as follows. On the injection process data, XGBoost achieved an improved defect rate of 0.21% and LightGBM of 0.29%, improvements of 0.79%p and 0.71%p, respectively, over the existing defect rate of 1.00%. Conclusion: This study is a case study. A research methodology was proposed for the injection process, and the improvement in product yield was confirmed through verification.
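The ICE step described above, sweeping one process variable over a grid for every instance while holding the others fixed, can be sketched as follows. The surrogate `predict` and all values here are hypothetical stand-ins, not the KAMP dataset or the paper's models:

```python
import numpy as np

def ice_curves(predict, X, feature, grid):
    """Individual Conditional Expectation curves: one curve per instance,
    obtained by sweeping a single feature over a grid of candidate values."""
    curves = np.empty((X.shape[0], len(grid)))
    for gi, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v                  # overwrite the swept variable
        curves[:, gi] = predict(Xv)
    return curves

# Hypothetical surrogate: predicted defect rate is lowest near variable 0 = 2.0
predict = lambda Z: (Z[:, 0] - 2.0) ** 2 + 0.1 * Z[:, 1]
X = np.array([[0.0, 1.0], [5.0, 0.5], [3.0, 2.0]])
grid = np.linspace(0.0, 4.0, 9)
curves = ice_curves(predict, X, 0, grid)
best_setting = grid[np.argmin(curves.mean(axis=0))]   # PDP-style average of ICE
```

Inspecting the individual curves (rather than only their average) is what lets a practitioner choose a control range that works for every instance, not just on average.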

XAI based public facility safety evaluation system research (XAI 기반의 공공시설물 건전도 안전검사 평가시스템 연구)

  • Park, Yesul; Kyeong, Seonjae; Kim, Minjun; Oh, Chanmi; Lee, Jeasung; Lee, Jaehwan; Lee, Hyunseung; Lee, Cheolhee; Moon, Hyeonjoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.705-708 / 2020
  • Safety inspections of public facilities are increasingly required on a regular basis as the facilities age. Existing inspection methods rely mostly on visual examination, so the quality of the results varies with the inspector's skill. This paper proposes an XAI-based soundness evaluation system for public facilities that always produces the same result regardless of the inspector's skill and presents the inspection results to the user through XAI. The study was conducted on a safety evaluation system for tunnel facilities, and it can be applied without modification to other public facilities such as bridges. The method consists of five parts: 1) Generate two 448x448 datasets, one of tunnel images and one of images with masks applied to the cracks. 2) Train on the generated datasets using a hybrid model combining UNet and ResNet152. 3) Denoise the segmentation images produced by the trained hybrid model. 4) Apply skeletonization to the denoised images to obtain the skeleton of each crack image; from the skeleton image, information such as crack length, thickness, and location is obtained. 5) In the XAI part, the crack's location, thickness, and length are computed from the skeleton image and presented to the user.
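The crack-length measurement in step 4 can be approximated by counting links between adjacent pixels of the skeleton image. A rough sketch of one such estimator (it slightly overcounts at corners, and the paper's actual measurement method may differ):

```python
import numpy as np

def skeleton_length(skel, spacing=1.0):
    """Approximate length of a 1-pixel-wide crack skeleton.

    Counts links between 8-connected skeleton pixels: axis-aligned links
    contribute `spacing`, diagonal links contribute `spacing * sqrt(2)`.
    Each link is counted once by only looking right/down/down-right/down-left.
    """
    ys, xs = np.nonzero(skel)
    pts = set(zip(ys.tolist(), xs.tolist()))
    straight = diagonal = 0
    for y, x in pts:
        if (y, x + 1) in pts: straight += 1        # horizontal link
        if (y + 1, x) in pts: straight += 1        # vertical link
        if (y + 1, x + 1) in pts: diagonal += 1    # diagonal links
        if (y + 1, x - 1) in pts: diagonal += 1
    return spacing * (straight + diagonal * np.sqrt(2.0))
```

With a known camera scale, `spacing` converts the pixel-based estimate to physical units; thickness can be estimated separately, e.g. from a distance transform of the denoised crack mask.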
