• Title/Summary/Keyword: XAI

87 search results

The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI) (설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안)

  • Jung Il Ok; Choi Woo Bin; Kim Su Chul
    • Convergence Security Journal, v.22 no.3, pp.101-110, 2022
  • As the use of artificial intelligence grows across many fields, attempts to solve problems in intrusion detection with machine learning are also increasing. However, the black-box nature of machine learning, whose predicted results cannot be explained or traced, creates difficulties for the security professionals who must rely on it. To address this problem, research on explainable AI (XAI), which helps interpret and understand machine-learning decisions, is expanding in many domains. In this paper, we therefore propose an explainable-AI approach to enhance the reliability of machine-learning-based intrusion detection results. First, an intrusion detection model is implemented with XGBoost, and explanations of the model are produced with SHAP. Comparing and analyzing conventional feature importance against the SHAP results then gives security experts a trustworthy basis for decision-making. The experiments used the PKDD2007 dataset; the association between conventional feature importance and SHAP values was analyzed, and SHAP-based explainable AI was verified to be effective in giving security experts confidence in the prediction results of intrusion detection models.
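
The comparison this abstract describes, a boosted-tree detector whose built-in feature importance is checked against a model-agnostic attribution, can be sketched roughly as follows. This is a minimal illustration on synthetic data, with scikit-learn's GradientBoostingClassifier and permutation importance standing in for the paper's XGBoost and SHAP; none of the names or numbers come from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an intrusion-detection dataset.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Built-in (impurity-based) importance vs. a model-agnostic attribution.
builtin = model.feature_importances_
perm = permutation_importance(model, X_te, y_te, n_repeats=10,
                              random_state=0).importances_mean

for i, (bi, pi) in enumerate(zip(builtin, perm)):
    print(f"feature {i}: built-in={bi:.3f}  permutation={pi:.3f}")
```

Large disagreements between the two rankings are exactly the cases where an analyst would want the per-prediction explanation the paper argues for.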

Research on Understanding Churned Customer and Application of Marketing in Telco. industry Using XAI (XAI를 활용한 통신사 이탈고객의 특성 이해와 마케팅 적용방안 연구)

  • Lim, Jinhee
    • Proceedings of the Korea Information Processing Society Conference, 2022.05a, pp.21-24, 2022
  • In the telecommunications industry, there have been continuing efforts to use accumulated big data to understand customer characteristics and support personalized marketing. In this study, a CatBoost model is used to predict customers with a high probability of churn, and SHAP, one of the eXplainable Artificial Intelligence (XAI) techniques, is applied to explain the factors that influence churn. SHAP's global explanations are used to improve the understanding of particular customer segments, while its local explanations are used to explain individual customers and to suggest applications in personalized marketing. The study is significant in that it overcomes the limitations of black-box churn prediction models and, by making customer characteristics understandable, increases their applicability in real business.
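
The global-versus-local distinction drawn here can be illustrated with a linear model, for which SHAP values have a closed form: under a feature-independence assumption, the attribution of feature j for a sample x is coef_j · (x_j − mean_j) in log-odds space. The sketch below uses scikit-learn's LogisticRegression on synthetic data, not the CatBoost model of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=1)
clf = LogisticRegression().fit(X, y)
coef = clf.coef_[0]

# Global view: typical magnitude of each feature's contribution.
global_importance = np.abs(coef) * X.std(axis=0)

# Local view: for a linear model, the exact SHAP value of feature j for
# sample x is coef_j * (x_j - mean_j), in log-odds space.
x = X[0]
local_contrib = coef * (x - X.mean(axis=0))
print("global:", np.round(global_importance, 3))
print("local :", np.round(local_contrib, 3))
```

The local contributions sum to the model's log-odds output for x minus its output at the dataset mean, which is the additivity property SHAP guarantees in general.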

A Study on Classification Models for Predicting Bankruptcy using XAI (XAI 를 활용한 기업 부도예측 분류모델 연구)

  • Kim, Jihong; Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.571-573, 2022
  • Financial institutions have recently been strengthening differentiated services using accumulated financial big data, and investing in corporate customers requires more precise company analysis. This study uses 95 financial features from 6,819 Taiwanese companies and performs preprocessing such as resolving class imbalance and standardizing the data. Nine classification models, including logistic regression, SVM, K-NN, naive Bayes, decision tree, and random forest, were trained with 5-fold cross-validation, and their performance was compared. The best-performing classifier was then selected, and explainable AI (XAI) was applied to attach explanations to its predictions and analyze the reasons behind them. Through this work we propose a system that automates the full life cycle of a classification model, from data preprocessing to the explanation of prediction results.
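
The model-selection step the abstract describes, training several classifiers with 5-fold cross-validation and keeping the best one for explanation, might look roughly like this. The four scikit-learn models and the synthetic imbalanced dataset are stand-ins for the nine models and the Taiwanese financial data used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced stand-in for the financial dataset.
X, y = make_classification(n_samples=600, n_features=10, weights=[0.9, 0.1],
                           random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
}

# 5-fold cross-validation; the best scorer is the one handed to the explainer.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)
```

With imbalanced data like this, accuracy alone can be misleading, which is one reason the preprocessing step the paper mentions (rebalancing) matters before comparison.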

Transforming Patient Health Management: Insights from Explainable AI and Network Science Integration

  • Mi-Hwa Song
    • International Journal of Internet, Broadcasting and Communication, v.16 no.1, pp.307-313, 2024
  • This study explores the integration of Explainable Artificial Intelligence (XAI) and network science in healthcare, focusing on enhancing the interpretation of healthcare data and improving diagnostic and treatment methods. Key methodologies such as Graph Neural Networks, community detection, overlapping network models, and time-series network analysis are examined in depth for their potential in patient health management. The research highlights the transformative role of XAI in making complex AI models transparent and interpretable, which is essential for accurate, data-driven decision-making in healthcare. Case studies demonstrate the practical application of these methodologies in predicting diseases, understanding drug interactions, and tracking patient health over time. The study concludes that, despite existing challenges, these advances hold immense promise for healthcare, and underscores the need for ongoing research to fully realize the potential of AI in this field.

Application of XAI Models to Determine Employment Factors in the Software Field : with focus on University and Vocational College Graduates (소프트웨어 분야 취업 결정 요인에 대한 XAI 모델 적용 연구 : 일반대학교와 전문대학 졸업자를 중심으로)

  • Kwon Joonhee; Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management, v.20 no.1, pp.31-45, 2024
  • The purpose of this study is to explain the factors that determine employment in the software field. It uses the Graduates Occupational Mobility Survey from the Korea Employment Information Service. The paper builds machine-learning models of employment in the software field and then explains the models' employment factors with explainable artificial intelligence. Separate models are built for university graduates and for vocational college graduates, and both a black-box model and a glass-box model are explained and interpreted, using SHAP for the former and EBM explanations for the latter. The results show that the factors with a positive impact on employment are the student's major, vocational education and training, the semester in which employment preparation begins, and internship experience. The study offers a job-preparation guide for university and vocational college students who want to work in the software field.

A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models (AI 모델의 Robustness 향상을 위한 효율적인 Adversarial Attack 생성 방안 연구)

  • Si-on Jeong; Tae-hyun Han; Seung-bum Lim; Tae-jin Lee
    • Journal of Internet Computing and Services, v.24 no.4, pp.25-36, 2023
  • As AI (Artificial Intelligence) technology is introduced in various fields, including security, its development is accelerating. At the same time, attack techniques that cleverly evade malicious-behavior detection are also evolving. In the classification process of AI models, adversarial attacks have emerged that induce misclassification and reduce reliability through small perturbations of the input. Future attacks are likely to be not entirely new attacks but, like adversarial examples, slight modifications of existing attacks designed to evade detection systems, so a robust model that can respond to such malware variants is needed. In this paper, we propose two efficient methods of generating adversarial attacks to improve the robustness of AI models: an XAI-based attack that uses XAI techniques, and a reference-based attack that searches the model's decision boundary. A classification model was then built on a malware dataset to compare performance with PGD, one of the existing adversarial attacks. In terms of generation speed, the XAI-based and reference-based attacks take 0.35 and 0.47 seconds respectively, versus 20 minutes for the existing PGD attack; the reference-based attack also achieves a 97.7% generation success rate, higher than PGD's 75.5%. The proposed techniques therefore enable more efficient adversarial attacks and are expected to contribute to future research on building robust AI models.
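
A one-step sign-gradient perturbation (the FGSM building block that PGD iterates) can be written out exactly for logistic regression, where the input gradient of the log-loss is (p − y)·w. The sketch below is a generic illustration of this attack family on synthetic data, not the paper's XAI-based or reference-based method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def fgsm(x, label, eps=0.5):
    """One-step sign-gradient perturbation, bounded by eps in max-norm."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(y=1)
    grad = (p - label) * w                   # d(log-loss)/dx for this model
    return x + eps * np.sign(grad)

x, label = X[0], y[0]
x_adv = fgsm(x, label)
print("clean pred:", clf.predict([x])[0], " adversarial pred:", clf.predict([x_adv])[0])
```

For this linear model the perturbation provably moves the prediction away from the true label; PGD repeats the same step with projection back into the eps-ball.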

Explainable Artificial Intelligence Applied in Deep Learning for Review Helpfulness Prediction (XAI 기법을 이용한 리뷰 유용성 예측 결과 설명에 관한 연구)

  • Dongyeop Ryu; Xinzhe Li; Jaekyeong Kim
    • Journal of Intelligence and Information Systems, v.29 no.2, pp.35-56, 2023
  • With the development of information and communication technology, reviews are continuously posted on websites in such numbers that they cause information overload, and users have difficulty exploring them when making decisions. To address this, many studies on review helpfulness prediction have been conducted to provide users with helpful and reliable reviews. Existing studies predict review helpfulness mainly from features of the review itself, but they cannot explain why a predicted review is helpful. This study therefore proposes a methodology that applies eXplainable Artificial Intelligence (XAI) techniques to review helpfulness prediction. Using restaurant reviews collected from Yelp.com, we compare the prediction performance of six models widely used in previous studies, and then build an explainable review helpfulness prediction model by applying an XAI technique to the best-performing model. The proposed methodology can recommend helpful reviews in the user's purchase decision-making process and explain why those reviews are predicted to be helpful.

Performance improvement of artificial neural network based water quality prediction model using explainable artificial intelligence technology (설명가능한 인공지능 기술을 이용한 인공신경망 기반 수질예측 모델의 성능향상)

  • Lee, Won Jin; Lee, Eui Hoon
    • Journal of Korea Water Resources Association, v.56 no.11, pp.801-813, 2023
  • As research on Artificial Neural Networks (ANNs) actively progresses, studies that predict river water quality with ANNs are being conducted. However, because an ANN is a black box, it is difficult to analyze its internal computation, and although eXplainable Artificial Intelligence (XAI) can be used for such analysis, XAI research in the water-resources field is still scarce. This study analyzed a Multi-Layer Perceptron (MLP) that predicts Water Temperature (WT), Dissolved Oxygen (DO), hydrogen ion concentration (pH), and Chlorophyll-a (Chl-a) at the Dasan water quality observatory on the Nakdong River, using Layer-wise Relevance Propagation (LRP), one of the XAI techniques. The trained MLP was analyzed with LRP to select the optimal input data for predicting water quality, and the predictions of an MLP trained on that optimal input were then analyzed. The MLP trained on the input data excluding daily precipitation in the surrounding area showed the highest prediction accuracy. Analysis of the MLP's DO predictions showed that pH and DO had a large influence at the highest point, while WT had a large influence at the lowest point.
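
LRP's epsilon rule redistributes a prediction backwards through each layer in proportion to each unit's contribution to the next layer's pre-activations. A minimal NumPy sketch for a tiny ReLU network (random weights standing in for the trained water-quality MLP) shows the mechanics, including the conservation property that input relevances sum approximately to the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed 2-layer ReLU network standing in for the trained MLP.
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # 4 inputs -> 6 hidden
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)   # 6 hidden -> 1 output

def forward(x):
    a1 = np.maximum(0.0, x @ W1 + b1)           # ReLU hidden activations
    return a1, (a1 @ W2 + b2)[0]                # scalar prediction

def lrp_epsilon(x, eps=1e-6):
    """Epsilon-rule LRP: redistribute the output back to the inputs."""
    a1, out = forward(x)
    z2 = a1 @ W2[:, 0] + b2[0]
    R1 = a1 * W2[:, 0] / (z2 + eps * np.sign(z2)) * out   # hidden relevances
    z1 = x @ W1 + b1
    frac = x[:, None] * W1 / (z1 + eps * np.sign(z1))     # input shares per unit
    return (frac * R1).sum(axis=1)                        # input relevances

x = rng.normal(size=4)
_, pred = forward(x)
R = lrp_epsilon(x)
print("prediction:", round(pred, 4), "input relevances:", np.round(R, 4))
```

Inputs with consistently near-zero relevance across samples are the candidates for removal, which is how the paper arrives at dropping daily precipitation.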

Development of a Ship Engine Anomaly Detection System Using XAI (Explainable AI) Techniques (XAI(Explainable AI) 기법을 이용한 선박기관 이상탐지 시스템 개발)

  • Habtemariam Duguma Yeshitla; Agung Nugraha; Antariksa Gian
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 2022.11a, pp.289-290, 2022
  • This study introduces a system that detects anomalies in a ship's main engine, one of the ship's most important components, using sensor data collected from the engine. A distinctive feature of the system is that, beyond detecting anomalies, it quantifies each sensor's contribution to an anomaly, which makes it possible to categorize anomaly occurrences and carry out further analysis. A convenient web-interface UI was also developed so that users can examine the detected anomalies more easily.
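
The per-sensor contribution idea can be sketched with a PCA reconstruction in place of the system's model: channels that are poorly reconstructed from the learned normal subspace receive large per-sensor errors, which localizes the fault. Everything below (data, channel count, injected fault) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "engine sensor" data: five correlated channels.
normal = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
faulty = normal[-1].copy()
faulty[2] += 10.0                      # inject a drift into sensor 2

# PCA reconstruction as a simple stand-in for an autoencoder.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:2]                    # keep the top 2 principal components

def per_sensor_error(x):
    """Squared reconstruction error per channel; large values localize a fault."""
    recon = mean + (x - mean) @ components.T @ components
    return (x - recon) ** 2

err = per_sensor_error(faulty)
print("per-sensor error:", np.round(err, 2),
      "-> most anomalous sensor:", int(err.argmax()))
```

A trained autoencoder replaces the PCA step in practice, but the per-channel residual it produces is quantified and ranked in exactly the same way.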
Why Should I Ban You! : X-FDS (Explainable FDS) Model Based on Online Game Payment Log (X-FDS : 게임 결제 로그 기반 XAI적용 이상 거래탐지 모델 연구)

  • Lee, Young Hun; Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.1, pp.25-38, 2022
  • With the diversification of payment methods and games, related financial incidents are causing serious problems for users and game companies. Game companies have recently introduced Fraud Detection Systems (FDS) for their payment systems to prevent such incidents, but an FDS requires constant updates to its detection patterns and cannot provide substantive evidence for its judgments. In this paper, we analyze abnormal transactions in payment log data from real game companies and engineer related features. An Autoencoder, one of the unsupervised learning models, was used to build a detector for abnormal transactions, achieving over 85% accuracy. Using X-FDS (Explainable FDS) with SHAP, an XAI technique, we found that the variables most explanatory for anomaly detection were the transaction amount, the transaction medium, and the user's age. Based on X-FDS, an improved detection model with an accuracy of 94% was finally derived by fine-tuning the importance of features that adversely affected the proposed model.