• Title/Summary/Keyword: XAI


Effect Analysis of Data Imbalance for Emotion Recognition Based on Deep Learning (딥러닝기반 감정인식에서 데이터 불균형이 미치는 영향 분석)

  • Hajin Noh;Yujin Lim
    • KIPS Transactions on Computer and Communication Systems / v.12 no.8 / pp.235-242 / 2023
  • In recent years, as online counseling for infants and adolescents has increased, CNN-based deep learning models have been widely used as assistive tools for emotion recognition. However, since most emotion recognition models are trained mainly on adult data, their performance is limited when applied to infants and adolescents. In this paper, to analyze these performance constraints, the facial-expression characteristics relevant to emotion recognition in infants and adolescents are compared with those of adults using LIME, one of the XAI techniques. In addition, experiments are performed on male and female groups to analyze gender-specific facial-expression characteristics. We report age- and gender-specific experimental results in relation to the data distribution of the CNN models' pre-training dataset and highlight the importance of balanced training data.
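As context for the method, a minimal sketch of LIME applied to an image classifier. The model file emotion_cnn.h5, the 48x48 RGB input size, and the stand-in face image are hypothetical assumptions, not details from the paper:

```python
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical pre-trained Keras CNN emotion classifier (stand-in name).
model = tf.keras.models.load_model("emotion_cnn.h5")

def predict_fn(images):
    # LIME passes a batch of perturbed images; return class probabilities.
    return model.predict(images.astype("float32") / 255.0)

explainer = lime_image.LimeImageExplainer()
face = np.random.randint(0, 255, (48, 48, 3)).astype("double")  # stand-in face image
explanation = explainer.explain_instance(
    face, predict_fn, top_labels=1, hide_color=0, num_samples=1000)

# Highlight the superpixels that most support the top predicted emotion.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)
```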

Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education / v.62 no.3 / pp.435-455 / 2023
  • This study focused on developing an Explainable Artificial Intelligence (XAI) model to identify and analyze papers with significant impact in the field of mathematics education. To achieve this, meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network for mathematics education, built by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. A Random Forest machine learning model was employed to evaluate the impact of individual papers within this network, and SHAP, an XAI technique, was used to analyze the reasons behind the AI's assessment of impactful papers. Key features identified for determining impactful papers included 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal'. It became evident that papers, authors, and journals all play significant roles in evaluating individual papers. When comparing domestic and international mathematics education research, variations in these patterns were observed; notably, 'co-authorship network PageRank' was more significant in domestic research. The XAI model proposed in this study serves as a tool for determining the impact of papers using AI, providing researchers with strategic direction when writing papers: for instance, expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements enhancing a paper's impact. Based on these findings, researchers can understand how their work is perceived and evaluated in academia and identify the key factors influencing these evaluations. This study offers a novel approach to evaluating the impact of mathematics education papers with an explainable AI model, a process that has traditionally consumed significant time and resources. The approach not only presents a new paradigm applicable to evaluations in fields beyond mathematics education but is also expected to substantially enhance the efficiency and effectiveness of research activities.
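For illustration, a minimal sketch of the random-forest-plus-SHAP recipe the abstract describes. The feature names mirror those listed above, but the data, target, and model settings are placeholders, not the authors' dataset:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["paper_network_pagerank", "delta_citations_per_paper",
            "total_citations", "delta_author_h_index",
            "journal_citations_per_paper"]
X = pd.DataFrame(rng.random((200, len(features))), columns=features)
y = rng.random(200)  # stand-in impact scores

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# TreeExplainer gives exact SHAP values for tree ensembles; the summary
# plot ranks features by mean |SHAP|, i.e., global importance.
shap_values = shap.TreeExplainer(model).shap_values(X)
shap.summary_plot(shap_values, X)
```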

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research / v.61 no.4 / pp.523-541 / 2023
  • As accessibility to 3D printers increases, exposure to chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four machine learning models (decision tree, random forest, XGBoost, and SVM), an ML-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through Tree-SHAP (SHapley Additive exPlanations), one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure dataset to approximately 2.5 times its original size. On the imputed descriptor dataset, the data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, Tree-SHAP analysis showed that the model attains its high prediction performance by identifying key molecular descriptors highly correlated with the toxicity indices. The proposed data-centric XAI-based QSAR model can therefore be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
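A MissForest-style imputation can be approximated with scikit-learn's IterativeImputer driven by a random-forest estimator; this is a sketch of the idea, not the authors' implementation, and the descriptor matrix below is a random stand-in:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Stand-in molecular descriptor matrix with missing entries.
rng = np.random.default_rng(0)
X = rng.random((100, 20))
X[rng.random(X.shape) < 0.3] = np.nan  # ~30% missing descriptors

# MissForest-style imputation: iteratively regress each descriptor on the
# others with a random forest until the imputed values stabilize.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X)
```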

The Prediction of Cryptocurrency Prices Using eXplainable Artificial Intelligence based on Deep Learning (설명 가능한 인공지능과 CNN을 활용한 암호화폐 가격 등락 예측모형)

  • Taeho Hong;Jonggwan Won;Eunmi Kim;Minsu Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.129-148 / 2023
  • Bitcoin is a blockchain-based digital currency that has been recognized as a representative cryptocurrency and a financial investment asset. Due to its highly volatile nature, Bitcoin has attracted a lot of attention from investors and the public, and numerous studies have been conducted on price and trend prediction using machine learning and deep learning. This study employed LSTM (Long Short-Term Memory) and CNN (Convolutional Neural Networks), which have shown strong predictive performance in the finance domain, to improve classification accuracy in Bitcoin price trend prediction. XAI (eXplainable Artificial Intelligence) techniques were applied to the predictive model to enhance its explainability and interpretability by providing a comprehensive explanation of the model. In the empirical experiment, CNN was applied to technical indicators and Google Trends data to build a Bitcoin price trend prediction model, and the CNN model using both technical indicators and Google Trends data clearly outperformed the other models using neural networks, SVM, and LSTM. SHAP (SHapley Additive exPlanations) was then applied to the predictive model to explain its output values: important prediction drivers among the input variables were extracted through global interpretation, and the model's decision process for each instance was explained through local interpretation. The results show that the proposed research framework achieves both improved classification accuracy and explainability by combining CNN, Google Trends data, and SHAP.
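A rough sketch of the pipeline shape described above, with SHAP's GradientExplainer attributing a small 1D-CNN's up/down calls to indicator-timestep inputs. The window shape, architecture, and data are assumptions, not the paper's:

```python
import numpy as np
import tensorflow as tf
import shap

# Stand-in windows of technical indicators + Google Trends values:
# (samples, timesteps, features).
rng = np.random.default_rng(0)
X = rng.random((500, 20, 12)).astype("float32")
y = rng.integers(0, 2, 500)  # 1 = price up, 0 = price down

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(20, 12)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

# GradientExplainer approximates SHAP values for differentiable models,
# attributing each up/down prediction to indicator-timestep inputs.
explainer = shap.GradientExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:10])
```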

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun;Jo, Hye Seon;Lee, Sang Hyun;Oh, Sang Won;Na, Man Gyun
    • Nuclear Engineering and Technology / v.54 no.4 / pp.1271-1287 / 2022
  • When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. To do so, they must rapidly and accurately check the symptom requirements of more than 200 abnormal scenarios against the trends of many variables, perform diagnostic tasks, and implement mitigation actions quickly. The characteristics of these diagnostic tasks, however, increase the probability of human error. Research on AI-based diagnosis has recently been conducted to reduce the likelihood of human error, but reliability issues stemming from the black-box character of AI have been pointed out. Hence, eXplainable Artificial Intelligence (XAI), which can provide operators with evidence for AI diagnoses, is applied: XAI is incorporated into the AI-based diagnostic algorithm to address its reliability problem. A reliable intelligent diagnostic assistant based on the merged diagnostic algorithm is developed in the form of an operator support system, including an interface that informs operators efficiently.
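A compact sketch of the structure the title names (GRU-AE, LightGBM, SHAP): a GRU autoencoder flags abnormality via reconstruction error, and a LightGBM scenario classifier is explained with SHAP. All shapes, hyperparameters, and the five stand-in scenarios are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
import tensorflow as tf
import lightgbm as lgb
import shap

# --- GRU autoencoder flags abnormal operation via reconstruction error ---
T, F = 30, 8  # hypothetical window length and number of plant variables
ae = tf.keras.Sequential([
    tf.keras.layers.GRU(16, input_shape=(T, F)),          # encoder
    tf.keras.layers.RepeatVector(T),
    tf.keras.layers.GRU(16, return_sequences=True),       # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(F)),
])
ae.compile(optimizer="adam", loss="mse")

X_normal = np.random.rand(200, T, F).astype("float32")
ae.fit(X_normal, X_normal, epochs=3, verbose=0)
recon_error = np.mean((ae.predict(X_normal) - X_normal) ** 2, axis=(1, 2))
threshold = recon_error.mean() + 3 * recon_error.std()    # abnormality threshold

# --- LightGBM classifies which abnormal scenario; SHAP explains the call ---
X_abn = np.random.rand(300, T * F)
y_abn = np.random.randint(0, 5, 300)                      # 5 stand-in scenarios
clf = lgb.LGBMClassifier().fit(X_abn, y_abn)
shap_values = shap.TreeExplainer(clf).shap_values(X_abn[:20])
```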

Understanding Customer Purchasing Behavior in E-Commerce using Explainable Artificial Intelligence Techniques (XAI 기법을 이용한 전자상거래의 고객 구매 행동 이해)

  • Lee, Jaejun;Jeong, Ii Tae;Lim, Do Hyun;Kwahk, Kee-Young;Ahn, Hyunchul
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.387-390 / 2021
  • As the e-commerce market has grown rapidly in recent years, identifying customers' fast-changing needs has come to be seen as a factor directly tied to corporate profits. Companies are therefore intensifying their efforts to use accumulated customer data to identify those needs quickly and accurately. Previous work has focused mainly on predicting purchasing behavior and has had difficulty interpreting the processes surrounding customer actions. This study identifies the factors at work when a customer confirms or refunds a purchased product and proposes a new model for predicting which customers will refund. The predictive model applies XGBoost, a tree-based ensemble method with high predictive power, and SHAP, one of the representative explainable AI (XAI) techniques, is used to identify the factors influencing customer intent. This reveals not only the overall influence of each factor on a given customer behavior but also, for each individual customer, which factors affected the refund decision. Companies are thus expected to be able to identify the factors driving each customer's decisions and use them for personalized marketing.
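A minimal sketch of the XGBoost-plus-SHAP setup described above, with a global explanation (summary plot) and a local one (force plot for a single customer). The customer features and data are illustrative stand-ins, not the study's:

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

# Hypothetical customer features; names are illustrative, not the study's.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((1000, 4)),
                 columns=["price", "discount_rate", "delivery_days", "prior_refunds"])
y = rng.integers(0, 2, 1000)  # 1 = refund, 0 = purchase confirmed

model = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X)  # global: which factors drive refunds overall
# Local: why this one customer was predicted to refund.
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0])
```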

Anomaly Detection using VGGNet for safety inspection of OPGW (광섬유 복합가공 지선(OPGW) 설비 안전점검을 위한 VGGNet 기반의 이상 탐지)

  • Kang, Gun-Ha;Sohn, Jung-Mo;Son, Do-Hyun;Han, Jeong-Ho
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.3-5 / 2022
  • This study uses VGGNet to classify optical ground wire (OPGW) facilities as normal or defective. OPGW is critical equipment that protects power lines and carries communication between power facilities, so early detection of defects and maintenance before failures occur are essential. Korea Electric Power Corporation currently relies mainly on inspectors reviewing drone-captured footage for anomalies, an approach limited by the inspectors' skill and experience and by its accuracy, cost, and time. In this study, VGGNet-based normal/defect classification was performed on drone-captured images, achieving approximately 95.15% accuracy, 96% precision, 95% recall, and a 95% F1 score. To inspect the results, Grad-CAM, one of the explainable AI (XAI) algorithms, was applied. Automated normal/defect classification of OPGW facilities reduces the cost and time inspectors spend on routine work, freeing them for higher-value tasks, and because it inspects for faults objectively, it maintains consistent inspection quality.
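Grad-CAM itself can be sketched in a few lines. The version below uses an untrained Keras VGG16 with two output classes as a stand-in for the study's pass/fail model; the weights, input size, and layer choice are assumptions:

```python
import tensorflow as tf

# Stand-in for the study's VGG-based pass/fail classifier.
model = tf.keras.applications.VGG16(weights=None, classes=2)
grad_model = tf.keras.Model(
    model.inputs, [model.get_layer("block5_conv3").output, model.output])

def grad_cam(image):
    # image: one preprocessed frame, shape (1, 224, 224, 3)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        class_score = preds[:, tf.argmax(preds[0])]
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))           # GAP over H, W
    cam = tf.nn.relu(
        tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()     # normalized heatmap

heatmap = grad_cam(tf.random.uniform((1, 224, 224, 3)))    # upsample & overlay on the frame
```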

Study on predictive model and mechanism analysis for martensite transformation temperatures through explainable artificial intelligence (설명가능한 인공지능을 통한 마르텐사이트 변태 온도 예측 모델 및 거동 분석 연구)

  • Junhyub Jeon;Seung Bae Son;Jae-Gil Jung;Seok-Jae Lee
    • Journal of the Korean Society for Heat Treatment / v.37 no.3 / pp.103-113 / 2024
  • Martensite volume fraction significantly affects the mechanical properties of alloy steels. The martensite start temperature (Ms), the transformation temperature for 50 vol.% martensite (M50), and the transformation temperature for 90 vol.% martensite (M90) are important transformation temperatures for controlling the martensite phase fraction. Several researchers have proposed empirical equations and machine learning models to predict the Ms temperature; such numerical approaches can predict Ms without additional experiments or cost. However, to control the martensite phase fraction more precisely, the prediction error of the Ms model must be reduced and prediction models for the other transformation temperatures (M50, M90) are needed. In the present study, machine learning models were applied to develop predictive models for the Ms, M50, and M90 temperatures. To explain the prediction mechanisms and derive feature importance for the martensite transformation temperatures, explainable artificial intelligence (XAI) was employed. Among the machine learning models tested, random forest regression (RFR) showed the best performance for predicting the Ms, M50, and M90 temperatures. Feature importance was derived and the prediction mechanisms were discussed using XAI.
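A minimal sketch of the RFR-plus-SHAP recipe for one transformation temperature. The compositions, toy Ms trend, and feature count are placeholders, not the paper's dataset; the same recipe would apply to separate M50 and M90 models:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Stand-in alloy compositions (e.g., wt.% C, Mn, Si, Cr, Ni) and Ms values.
rng = np.random.default_rng(0)
X = rng.random((300, 5))
y_ms = 500 - 300 * X[:, 0] + 20 * rng.standard_normal(300)  # toy Ms trend on carbon

rfr = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y_ms)

# Mean |SHAP| per element serves as the feature importance discussed above.
shap_values = shap.TreeExplainer(rfr).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
```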

A Study on the Explainability of Inception Network-Derived Image Classification AI Using National Defense Data (국방 데이터를 활용한 인셉션 네트워크 파생 이미지 분류 AI의 설명 가능성 연구)

  • Kangun Cho
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.256-264 / 2024
  • Over the last 10 years, AI has made rapid progress, and image classification in particular has shown excellent performance based on deep learning. Nevertheless, because deep learning behaves as a black box, the lack of explainability of its judgments makes it difficult to use in critical decision-making domains such as national defense, autonomous driving, medical care, and finance. To overcome these limitations, this study applies a locally interpretable model explanation algorithm to Inception network-derived AI to analyze the grounds on which it classifies national defense data. Specifically, we conduct a comparative analysis of explainability based on confidence values by performing LIME analysis on the Inception v2_resnet model, and verify the similarity between human interpretations and LIME explanations. Furthermore, by comparing LIME explanations of the Top-1 outputs of the Inception v3, Inception v2_resnet, and Xception models, we confirm the feasibility of using XAI to compare the efficiency and availability of deep learning networks.
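One way to compare LIME explanations across networks, as the study does for Top-1 outputs, is to measure the overlap of their salient-region masks. The sketch below is an assumption-laden stand-in: the defense dataset and the Inception v2_resnet model are not available here, so stock ImageNet-pretrained Inception v3 and Xception and a random image take their place:

```python
import numpy as np
import tensorflow as tf
from lime import lime_image

img = (np.random.rand(299, 299, 3) * 255).astype("double")  # stand-in image

masks = {}
for name, (net, prep) in {
    "inception_v3": (tf.keras.applications.InceptionV3,
                     tf.keras.applications.inception_v3.preprocess_input),
    "xception": (tf.keras.applications.Xception,
                 tf.keras.applications.xception.preprocess_input),
}.items():
    model = net(weights="imagenet")
    predict_fn = lambda imgs, m=model, p=prep: m.predict(p(imgs.astype("float32")))
    exp = lime_image.LimeImageExplainer().explain_instance(
        img, predict_fn, top_labels=1, num_samples=500)
    _, masks[name] = exp.get_image_and_mask(
        exp.top_labels[0], positive_only=True, num_features=5)

# Jaccard overlap between the two models' salient regions for the Top-1 class.
a, b = (masks["inception_v3"] > 0), (masks["xception"] > 0)
iou = (a & b).sum() / max((a | b).sum(), 1)
print(f"LIME mask IoU between models: {iou:.2f}")
```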