• Title/Summary/Keyword: Explainable Artificial Intelligence

Research on Mining Technology for Explainable Decision Making (설명가능한 의사결정을 위한 마이닝 기술)

  • Kyungyong Chung
    • Journal of the Institute of Convergence Signal Processing, v.24 no.4, pp.186-191, 2023
  • Data processing techniques, including the handling of missing and outlier data, prediction, and recommendation models, play a critical role in decision-making. This requires a clear explanation of the validity, reliability, and accuracy of all processes and results. It is also necessary to solve data problems through explainable models such as decision trees and inference, and to lighten the models by considering various types of learning. The multi-layer mining classification method, which applies the sixth principle, discovers, after data preprocessing, multidimensional relationships between variables and attributes that occur frequently in transactions. The paper explains how to discover significant relationships by mining transactions and how to model the data through regression analysis. It develops scalable models and logistic regression models, and proposes mining techniques that generate class labels through data cleansing, relevance analysis, data transformation, and data augmentation to support explainable decision-making.
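
As a rough illustration of the frequent-pattern step this abstract describes, the sketch below mines association rules from one-hot encoded transactions using the open-source mlxtend library; the toy transaction table and the support/confidence thresholds are stand-ins, not the paper's actual data or settings.

```python
# A minimal sketch, assuming one-hot encoded transactions; the items and
# the support/confidence thresholds are illustrative only.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy transactions: each row is a transaction, each column an item/attribute.
transactions = pd.DataFrame(
    [[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1], [1, 1, 0, 0]],
    columns=["attr_a", "attr_b", "attr_c", "attr_d"],
).astype(bool)

# Frequent itemsets above a minimum support threshold.
itemsets = apriori(transactions, min_support=0.5, use_colnames=True)

# Association rules filtered by confidence; these are the "significant
# relationships between variables and attributes" the mining step surfaces.
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```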

Trustworthy AI Framework for Malware Response (악성코드 대응을 위한 신뢰할 수 있는 AI 프레임워크)

  • Shin, Kyounga;Lee, Yunho;Bae, ByeongJu;Lee, Soohang;Hong, Heeju;Choi, Youngjin;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.5, pp.1019-1034, 2022
  • Malware attacks are becoming more prevalent in the hyper-connected society of the 4th industrial revolution. To respond to such malware, automating malware detection with artificial intelligence is attracting attention as a new alternative. However, using artificial intelligence without any guarantee of its reliability poses greater risks and side effects. The EU and the United States are seeking ways to secure the reliability of artificial intelligence, and the Korean government announced its strategy for realizing trustworthy artificial intelligence in 2021. The government defines AI reliability by five attributes: safety, explainability, transparency, robustness, and fairness. We develop four of these elements, safety, explainability, transparency, and fairness, excluding robustness, in a malware detection model. In particular, we demonstrated stable generalization performance, that is, model accuracy, through verification by external agencies, and focused development on explainability, including transparency. An artificial intelligence model whose learning is determined by changing data requires life-cycle management, so demand is increasing for MLOps frameworks, which integrate data, model development, and service operation. The response services for executable (EXE) and document-based malware act simultaneously as data collectors and service operations, and connect to data pipelines that obtain information for labeling and cleansing through external APIs. We also facilitated integration with other security services and infrastructure scaling using cloud SaaS and standard APIs.
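
The labeling step of such a data pipeline might look like the sketch below, where a sample's hash is sent to an external reputation service and the response becomes a training label. The endpoint, response field, and token here are hypothetical placeholders, not the APIs the paper actually uses.

```python
# A minimal sketch of a pipeline labeling step; URL, field names, and token
# are hypothetical stand-ins for the external APIs described in the paper.
import requests

API_URL = "https://api.example.com/v1/files/{sha256}"  # hypothetical endpoint
API_TOKEN = "..."  # credential issued by the external service

def label_sample(sha256: str) -> int:
    """Return 1 (malicious) or 0 (benign) for a file hash via the external API."""
    resp = requests.get(
        API_URL.format(sha256=sha256),
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    verdict = resp.json().get("verdict", "benign")  # hypothetical response field
    return 1 if verdict == "malicious" else 0
```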

Analysis of Input Factors of DNN Forecasting Model Using Layer-wise Relevance Propagation of Neural Network (신경망의 계층 연관성 전파를 이용한 DNN 예보모델의 입력인자 분석)

  • Yu, SukHyun
    • Journal of Korea Multimedia Society, v.24 no.8, pp.1122-1137, 2021
  • PM2.5 concentration in Seoul can be predicted by a deep neural network model. In this paper, the contribution of input factors to the model's prediction results is analyzed using the Layer-wise Relevance Propagation (LRP) technique. LRP analysis is performed by dividing the input data by time and by PM concentration. In the analysis by time, the measurement factors contribute most to the same-day forecast, while the forecast factors contribute most to the forecasts for tomorrow and the day after tomorrow. In the analysis by PM concentration, the weather factors contribute most in the low-concentration pattern, and the air-quality factors contribute most in the high-concentration pattern. In addition, the date and temperature factors contribute significantly regardless of time and concentration.
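
For readers unfamiliar with LRP, the sketch below implements the epsilon-rule on a toy two-layer ReLU network in NumPy; the random weights and input are stand-ins for the paper's PM2.5 forecasting model, and biases are omitted for brevity.

```python
# A minimal LRP epsilon-rule sketch on a toy dense ReLU network; not the
# paper's forecasting model, just the relevance-propagation mechanics.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # input -> hidden weights (biases omitted)
W2 = rng.normal(size=(8, 1))    # hidden -> output weights
x = rng.normal(size=16)         # one toy input vector of 16 "factors"

a1 = np.maximum(0, x @ W1)      # forward pass, keeping activations
out = a1 @ W2                   # model prediction

def lrp_eps(a_prev, W, relevance, eps=1e-6):
    """Epsilon-rule: redistribute relevance from a layer to the previous one."""
    z = a_prev @ W
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer avoids division by zero
    s = relevance / z
    return a_prev * (W @ s)

r1 = lrp_eps(a1, W2, out)       # relevance of hidden units
r0 = lrp_eps(x, W1, r1)         # contribution of each input factor
print(r0)
```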

Discovering AI-enabled convergences based on BERT and topic network

  • Ji Min Kim;Seo Yeon Lee;Won Sang Lee
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.3, pp.1022-1034, 2023
  • Various aspects of artificial intelligence (AI) have recently become of significant interest to academia and industry. To satisfy these interests, it is necessary to comprehensively investigate AI-related trends across diverse areas. In this study, we identified and predicted emerging convergences using AI-related research abstracts collected from the SCOPUS database. A Bidirectional Encoder Representations from Transformers (BERT)-based topic discovery technique was deployed to identify emerging topics related to AI. The topics discovered concern edge computing, biomedical algorithms, predictive defect maintenance, medical applications, fake-news detection with blockchain, explainable AI, and COVID-19 applications. Their convergences were further analyzed based on the shortest paths between topics to predict emerging convergences. Our findings indicate emerging AI convergences toward healthcare, manufacturing, legal applications, and marketing. These findings are expected to have policy implications for facilitating convergence in diverse industries, and the study could contribute to the exploitation and adoption of AI-enabled convergences from a practical perspective.
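
A rough sketch of this two-stage pipeline, BERT-based topic discovery followed by shortest-path analysis over a topic network, is shown below using the open-source BERTopic library and networkx; the 20-newsgroups corpus and the `topic_embeddings_` attribute are assumptions standing in for the paper's SCOPUS data and similarity computation.

```python
# A minimal sketch, not the paper's exact pipeline: discover topics with
# BERTopic, then read potential convergences off shortest paths between
# topic nodes in an embedding-similarity network.
import networkx as nx
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus; the paper uses AI-related abstracts from SCOPUS.
docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Topic network weighted by embedding distance (1 - cosine similarity).
emb = topic_model.topic_embeddings_
sim = cosine_similarity(emb)
G = nx.Graph()
for i in range(len(sim)):
    for j in range(i + 1, len(sim)):
        G.add_edge(i, j, weight=1.0 - sim[i, j])

# Intermediate topics on the shortest path suggest a convergence route.
print(nx.shortest_path(G, source=1, target=5, weight="weight"))
```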

Crime Prediction and Factor Analysis of Incheon Metropolitan City Using Explainable Artificial Intelligence (설명 가능 인공지능 기술을 적용한 인천광역시 범죄 예측 및 요인 분석)

  • Kim, Da-Hyun;Kim, You-Kyung;Kim, Hyon-Hee
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.513-515, 2022
  • This study builds a crime prediction model based on various factors related to crime occurrence and applies explainable artificial intelligence to analyze the factors influencing crime in Incheon Metropolitan City. The XGBoost algorithm was used to build the crime prediction model, and SHapley Additive exPlanations (SHAP) was used as the explainable AI technique. The variables used for crime prediction were selected by referring to existing related studies, and public data were collected for each variable. The experimental results show that the status of prostitution crackdowns and of reports of missing and runaway youth are major factors with a large influence on crime occurrence. By predicting and presenting crime-prone areas and factors in advance, the proposed model can help deploy the human and material resources used for crime prevention more effectively.
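
A minimal sketch of the XGBoost-plus-SHAP setup the abstract describes might look as follows; the synthetic feature matrix and regression target stand in for the public Incheon crime data.

```python
# A minimal sketch on synthetic stand-in data; in the paper, X would hold
# district-level factors (e.g. crackdown and report counts) and y the
# observed crime occurrences.
import shap
import xgboost
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=500, n_features=10, random_state=0)

model = xgboost.XGBRegressor(n_estimators=200, max_depth=4)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; the summary
# plot ranks the factors by their impact on the predicted crime level.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```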

Efficient Gait Data Selection Using Explainable AI (해석 가능한 인공지능을 이용한 보행 데이터의 효율적인 선택)

  • Choi, Young-Chan;Tae, Min-Woo;Choi, Sang-Il
    • Proceedings of the Korean Society of Computer Information Conference, 2022.07a, pp.315-316, 2022
  • This paper proposes applying Grad-CAM, an interpretable artificial intelligence method, to convolutional neural network models that use pressure data from smart insoles. By applying Grad-CAM to each trained model, we propose a way to identify which pressure sensors play an important role in the model and which do not; we train a model on each dataset and use the trained models to examine which pressure sensors are actually important and which are not.
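
A compact Grad-CAM sketch in PyTorch is shown below; the tiny CNN and the 8x8 "pressure frame" are toy stand-ins for the smart-insole models, but the weighting of feature maps by their averaged gradients is the standard Grad-CAM computation.

```python
# A minimal Grad-CAM sketch; the network and input are toy stand-ins.
import torch
import torch.nn.functional as F
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 8 * 8, n_classes)

    def forward(self, x):
        self.feat = F.relu(self.conv(x))   # keep the last conv feature map
        return self.fc(self.feat.flatten(1))

model = TinyCNN()
x = torch.randn(1, 1, 8, 8)                # one toy 8x8 pressure frame

logits = model(x)
model.feat.retain_grad()                   # we need d(score)/d(feature map)
score = logits[0, logits[0].argmax()]      # score of the predicted class
score.backward()

# Grad-CAM: weight each feature map by its average gradient, ReLU, normalize.
weights = model.feat.grad.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * model.feat).sum(dim=1)).squeeze(0).detach()
cam = cam / (cam.max() + 1e-8)
print(cam)  # high values mark pressure-sensor locations driving the prediction
```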

Application of XAI Models to Determine Employment Factors in the Software Field : with focus on University and Vocational College Graduates (소프트웨어 분야 취업 결정 요인에 대한 XAI 모델 적용 연구 : 일반대학교와 전문대학 졸업자를 중심으로)

  • Kwon Joonhee;Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management, v.20 no.1, pp.31-45, 2024
  • The purpose of this study is to explain employment factors in the software field. To this end, the Graduates Occupational Mobility Survey by the Korea Employment Information Service is used. This paper proposes machine learning models of employment in the software field and then explains the employment factors of those models using explainable artificial intelligence. The models cover both university graduates and vocational college graduates. Our work explains and interprets both a black-box model and a glass-box model: SHAP explanations are used to interpret the black-box model, and EBM explanations the glass-box model. The results show that the positive employment factors in the models are the student's major, vocational education and training, the semester in which employment preparation began, and intern experience. This study provides a job-preparation guide for university and vocational college students who want to work in the software field.
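
The glass-box side of such a study could be sketched as below with an Explainable Boosting Machine from the `interpret` package; the synthetic data and column names are hypothetical, not the actual GOMS survey variables.

```python
# A minimal EBM sketch; synthetic stand-in data with hypothetical feature
# names, not the actual survey variables.
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["major_match", "vocational_training",
                             "prep_start_semester", "intern_experience"])

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global term scores play the role of "employment factors": each score says
# how strongly a feature (or interaction) moves the prediction overall.
exp = ebm.explain_global()
print(dict(zip(exp.data()["names"], exp.data()["scores"])))
```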

Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon;Kyoung-jae Kim
    • Journal of Intelligence and Information Systems, v.29 no.2, pp.241-265, 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies. It enables timely warnings, facilitates responsive action, and supports the formulation of effective management strategies to mitigate bankruptcy risks and enhance performance. Investors and financial institutions use default prediction models to minimize financial losses. As interest in using artificial intelligence (AI) for corporate insolvency prediction grows, extensive research has been conducted in this domain. However, there is an increasing demand for explainable AI models in corporate insolvency prediction, emphasizing interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications, but it has limitations in computational cost, processing time, and scalability with the number of variables. This study introduces a variable selection approach that reduces the number of variables by averaging SHAP values computed on bootstrapped data subsets instead of on the entire dataset, improving computational efficiency while maintaining excellent predictive performance. Using the selected, highly interpretable variables, random forest, XGBoost, and C5.0 models are trained, and the classification accuracy of an ensemble model built by soft voting for high performance is compared with that of the individual models. The study uses data from 1,698 Korean light-industry companies and employs bootstrapping to create distinct data groups; logistic regression is used to calculate SHAP values for each group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability while aiming for superior predictive performance.
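
A minimal sketch of the proposed variable-selection idea, averaging SHAP values from a logistic regression over bootstrapped subsets and then training a downstream classifier on the top-ranked variables, might look as follows on synthetic data (C5.0 has no standard Python port, so only a random forest is shown here).

```python
# A minimal sketch on synthetic data: per-bootstrap SHAP values from a
# logistic regression are averaged to rank variables cheaply, then a
# classifier is trained on the selected subset.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

n_boot = 10
rng = np.random.default_rng(0)
importance = np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.choice(len(X), size=len(X), replace=True)   # bootstrap sample
    lr = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    sv = shap.LinearExplainer(lr, X[idx]).shap_values(X[idx])
    importance += np.abs(sv).mean(axis=0) / n_boot        # mean |SHAP| per variable

top = np.argsort(importance)[::-1][:10]                   # keep the top-10 variables
clf = RandomForestClassifier(random_state=0).fit(X[:, top], y)
print("selected variables:", top)
```

The soft-voting ensemble described in the abstract would then combine this model with the other classifiers, e.g. via scikit-learn's VotingClassifier with voting="soft".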

The Latest Trends in Attention Mechanisms and Their Application in Medical Imaging (어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향)

  • Hyungseob Shin;Jeongryong Lee;Taejoon Eo;Yohan Jun;Sewon Kim;Dosik Hwang
    • Journal of the Korean Society of Radiology, v.81 no.6, pp.1305-1333, 2020
  • Deep learning has recently achieved remarkable results in the field of medical imaging. However, as a deep learning network becomes deeper to improve its performance, it becomes more difficult to interpret the processes within. This can especially be a critical problem in medical fields where diagnostic decisions are directly related to a patient's survival. In order to solve this, explainable artificial intelligence techniques are being widely studied, and an attention mechanism was developed as part of this approach. In this paper, attention techniques are divided into two types: post hoc attention, which aims to analyze a network that has already been trained, and trainable attention, which further improves network performance. Detailed comparisons of each method, examples of applications in medical imaging, and future perspectives will be covered.
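
As a concrete example of the trainable variety, the sketch below implements a simple soft spatial-attention block in PyTorch; it is a generic illustration, not any specific architecture from the reviewed literature, and the returned attention map is exactly the kind of artifact that can be visualized for explainability.

```python
# A minimal trainable (soft, spatial) attention sketch: a 1x1 convolution
# produces a spatial attention map that reweights the feature map.
import torch
from torch import nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel score

    def forward(self, feat):
        b, c, h, w = feat.shape
        attn = torch.softmax(self.score(feat).view(b, -1), dim=1).view(b, 1, h, w)
        return feat * attn, attn  # reweighted features + map for inspection

feat = torch.randn(2, 16, 32, 32)   # e.g. a CNN feature map from a medical image
att = SpatialAttention(16)
out, attn_map = att(feat)
print(out.shape, attn_map.shape)    # the attention map can be visualized
```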

A Proposal of Sensor-based Time Series Classification Model using Explainable Convolutional Neural Network

  • Jang, Youngjun;Kim, Jiho;Lee, Hongchul
    • Journal of the Korea Society of Computer and Information, v.27 no.5, pp.55-67, 2022
  • Sensor data can support fault diagnosis for equipment; however, the causes behind the diagnosed faults are rarely explained. In this study, we propose an explainable convolutional neural network framework for sensor-based time series classification. We used a sensor-based time series dataset acquired from vehicles equipped with sensors, the Wafer dataset acquired from a manufacturing process, and a Cycle Signal dataset acquired from real-world mechanical equipment. For data augmentation, scaling and jittering were used to train the deep learning models. The proposed classification models are convolutional neural network based models, FCN, 1D-CNN, and ResNet, whose evaluations are compared. Our experimental results show that ResNet provides promising results for time series classification, with accuracy and F1 score reaching 95%, a 3% improvement over the previous study. Furthermore, we apply the XAI methods Class Activation Map and layer visualization to interpret the experimental results; these methods can visualize the time series intervals that are important for sensor data classification.
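
The two augmentation methods named here, jittering and scaling, are simple enough to sketch directly; the noise levels below are illustrative defaults, not the paper's settings.

```python
# A minimal sketch of jittering and scaling for multivariate sensor time
# series (shape: timesteps x channels); sigma values are illustrative.
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add small Gaussian noise to every sample."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply each channel by a random factor near 1."""
    factors = np.random.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

series = np.random.randn(128, 3)          # toy 3-sensor signal
augmented = [jitter(series), scale(series)]
print([a.shape for a in augmented])       # same shape, perturbed values
```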