• Title/Summary/Keyword: Explainable Artificial Intelligence (설명가능한 인공지능)

Search Results: 105 (Processing Time: 0.045 seconds)

A Gradient-Based Explanation Method for Graph Convolutional Neural Networks (그래프 합성곱 신경망에 대한 기울기(Gradient) 기반 설명 기법)

  • Kim, Chaehyeon; Lee, Ki Yong
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.670-673 / 2022
  • Explainable artificial intelligence refers to techniques that make a complex model such as a deep network understandable by explaining how it arrived at a given result. Graph-structured data are now being generated in many domains, and various graph neural networks are used to classify them. This paper proposes an explanation method for the graph convolutional network (GCN), a representative graph neural network. When each node of a given graph is classified with a GCN, the proposed method quantifies which features of each node contributed most to the classification. It does so by tracing the factors that influenced the final classification result step by step through gradients, thereby identifying which node features played an important role. Experiments on synthetic data confirm that the proposed method accurately identifies the node features that most strongly influence the classification.
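The stepwise gradient tracing described in the abstract can be illustrated with a minimal sketch. For a single linear GCN layer Z = Â·X·W, the gradient of a target node's logit with respect to each input feature has a closed form; the toy graph, features, and weights below are hypothetical, not the paper's model or data.

```python
# Minimal sketch of gradient-based feature attribution for one linear GCN
# layer Z = A_hat @ X @ W. The gradient of a target node's logit with respect
# to every input feature serves as its saliency score.

def gcn_forward(a_hat, x, w):
    """One linear GCN layer: Z = A_hat @ X @ W (plain-Python matrices)."""
    n, f_in, f_out = len(x), len(x[0]), len(w[0])
    ax = [[sum(a_hat[i][k] * x[k][j] for k in range(n)) for j in range(f_in)]
          for i in range(n)]
    return [[sum(ax[i][k] * w[k][j] for k in range(f_in)) for j in range(f_out)]
            for i in range(n)]

def feature_saliency(a_hat, w, target, cls):
    """d Z[target][cls] / d X[i][f] = A_hat[target][i] * W[f][cls]."""
    n, f_in = len(a_hat), len(w)
    return [[a_hat[target][i] * w[f][cls] for f in range(f_in)]
            for i in range(n)]

# Toy 3-node graph (normalized adjacency assumed precomputed) and weights.
a_hat = [[0.5, 0.5, 0.0],
         [0.5, 0.25, 0.25],
         [0.0, 0.5, 0.5]]
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = [[2.0, -1.0], [0.5, 1.5]]  # hypothetical trained weights

z = gcn_forward(a_hat, x, w)
sal = feature_saliency(a_hat, w, target=0, cls=0)  # sal[i][f] per node feature
```

The analytic saliency agrees with a finite-difference perturbation of any input feature, which is the basic sanity check for a gradient-based explainer.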

Speed Prediction and Analysis of Nearby Road Causality Using Explainable Deep Graph Neural Network (설명 가능 그래프 심층 인공신경망 기반 속도 예측 및 인근 도로 영향력 분석 기법)

  • Kim, Yoo Jin; Yoon, Young
    • Journal of the Korea Convergence Society / v.13 no.1 / pp.51-62 / 2022
  • AI-based speed prediction has been studied quite actively. However, although the importance of explainable AI is growing, little work has interpreted or reasoned about these predictions. In this paper, we therefore devise an 'Explainable Deep Graph Neural Network (GNN)' that analyzes a speed prediction and assesses the influence of nearby roads to identify the critical contributors to a given road situation. The model's output is explained by comparing the outputs before and after masking the input values of the GNN model. Using TOPIS traffic speed data, we applied our GNN models to the major congested roads in Seoul. We verified our approach through a traffic flow simulation: adjusting the speed of the most influential nearby roads and observing the corresponding relief of congestion on the road of interest. Our approach is meaningful in that it can be applied to a transportation network, where traffic flow can be improved by controlling specific nearby roads based on the inference results.
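The mask-and-compare explanation can be sketched as follows. The predictor below is a stand-in weighted sum, not the paper's trained GNN or TOPIS data, and the weights and baseline value are illustrative assumptions.

```python
# Sketch of the mask-and-compare idea: the influence of each nearby road is
# estimated as the change in predicted speed when that road's input is masked.

def predict_speed(speeds):
    """Stand-in for a trained model: weighted sum of nearby road speeds."""
    weights = [0.5, 0.3, 0.2]  # hypothetical learned influence per road
    return sum(w * s for w, s in zip(weights, speeds))

def road_influence(speeds, baseline=0.0):
    """Score road i by |f(x) - f(x with road i masked to baseline)|."""
    base = predict_speed(speeds)
    return [abs(base - predict_speed(speeds[:i] + [baseline] + speeds[i + 1:]))
            for i in range(len(speeds))]

speeds = [40.0, 25.0, 60.0]  # km/h on three nearby roads
scores = road_influence(speeds)
most_influential = max(range(len(scores)), key=scores.__getitem__)
```

The road whose masking changes the prediction most is the one a traffic controller would adjust first in the simulation step the abstract describes.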

The Prediction of the Helpfulness of Online Review Based on Review Content Using an Explainable Graph Neural Network (설명가능한 그래프 신경망을 활용한 리뷰 콘텐츠 기반의 유용성 예측모형)

  • Eunmi Kim; Yao Ziyan; Taeho Hong
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.309-323 / 2023
  • As the role of online reviews has become increasingly crucial, numerous studies have sought to make use of helpful reviews. Various studies have verified that perceived review helpfulness is influenced by factors such as ratings, review length, and review content, with the review text emerging as the most influential factor. A review's helpfulness is generally determined by the number of 'helpful' votes from consumers, and reviews with more 'helpful' votes are considered to have a greater impact on purchasing decisions. However, recently written reviews that have not yet been exposed to many customers may have few or no 'helpful' votes simply due to a lack of participation. Therefore, rather than relying on vote counts to assess helpfulness, we aim to classify reviews based on their content. This study employs text mining techniques, including topic modeling and sentiment analysis, to analyze the diverse impacts of the content and emotions embedded in the review text. We propose a review helpfulness prediction model based on review content, using movie reviews from IMDb, a global movie information site. We construct the model with an explainable graph neural network (GNN), addressing the interpretability limitations of machine learning models. Because it can identify connections between reviews, the explainable GNN is expected to provide more reliable information about which reviews are helpful.
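The content-based graph construction can be sketched roughly: represent each review by a content feature vector (e.g. topic or sentiment scores), connect reviews whose content is similar, and run one GNN-style message pass over the resulting graph. The features, similarity threshold, and measure below are illustrative assumptions, not the paper's actual IMDb setup.

```python
# Rough sketch: build a review graph from content similarity, then aggregate
# neighbor features with one mean message pass (the basic GNN step).
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_edges(features, threshold=0.8):
    """Connect pairs of reviews whose content similarity exceeds a threshold."""
    n = len(features)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if cosine(features[i], features[j]) >= threshold]

def message_pass(features, edges):
    """One aggregation step: mean of each node's own and neighbors' features."""
    n = len(features)
    neigh = {i: [i] for i in range(n)}
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    return [[sum(features[k][d] for k in neigh[i]) / len(neigh[i])
             for d in range(len(features[0]))] for i in range(n)]

# Three toy reviews: two with similar content, one different.
feats = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]]
edges = build_edges(feats)
agg = message_pass(feats, edges)
```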

A Comparative Analysis of Ensemble Learning-Based Classification Models for Explainable Term Deposit Subscription Forecasting (설명 가능한 정기예금 가입 여부 예측을 위한 앙상블 학습 기반 분류 모델들의 비교 분석)

  • Shin, Zian; Moon, Jihoon; Rho, Seungmin
    • The Journal of Society for e-Business Studies / v.26 no.3 / pp.97-117 / 2021
  • Predicting term deposit subscriptions is a representative financial marketing task for banks, which can build prediction models from various kinds of customer information. To improve the classification accuracy of term deposit subscription prediction, many studies have applied machine learning techniques. However, even when such models achieve satisfactory performance, they are difficult to use in industry if their decision-making process is not adequately explained. To address this issue, this paper proposes an explainable scheme for term deposit subscription forecasting. We first construct several classification models using decision tree-based ensemble learning methods that perform well on tabular data: random forest, gradient boosting machine (GBM), extreme gradient boosting (XGB), and light gradient boosting machine (LightGBM). We then analyze their classification performance in depth through 10-fold cross-validation. After that, we provide a rationale for interpreting the influence of customer information and the decision-making process by applying Shapley additive explanations (SHAP), an explainable artificial intelligence technique, to the best classification model. To verify the practicality and validity of our scheme, we conducted experiments with the bank marketing dataset provided by Kaggle, applying SHAP to the GBM and LightGBM models under different dataset configurations and then analyzing and visualizing the results for explainable term deposit subscription forecasting.
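The additive-attribution idea behind SHAP can be shown with brute-force Shapley values on a toy model. The paper applies the SHAP library to trained GBM/LightGBM models; this exact enumeration (exponential in the number of features) and the stand-in scoring function are only for intuition.

```python
# Exact Shapley values by enumerating all feature coalitions: phi[i] is the
# weighted average marginal contribution of feature i across coalitions.
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    n = len(x)
    phi = [0.0] * n

    def value(subset):
        # Features in `subset` take their observed value, the rest the baseline.
        return model([x[i] if i in subset else baseline[i] for i in range(n)])

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(s) | {i}) - value(set(s)))
    return phi

def score(z):
    """Stand-in for a trained classifier's raw score (with an interaction)."""
    return 2.0 * z[0] + 1.0 * z[1] + 0.5 * z[0] * z[1]

phi = shapley_values(score, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

The attributions satisfy the efficiency property SHAP is built on: they sum exactly to the difference between the model's output at x and at the baseline.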

Artificial Intelligence and College Mathematics Education (인공지능(Artificial Intelligence)과 대학수학교육)

  • Lee, Sang-Gu; Lee, Jae Hwa; Ham, Yoonmee
    • Communications of Mathematical Education / v.34 no.1 / pp.1-15 / 2020
  • Today's healthcare, intelligent robots, smart home systems, and car sharing are already being transformed by cutting-edge information and communication technologies such as artificial intelligence (AI), the Internet of Things, the Internet of Intelligent Things, and big data, which deeply affect our lives. Robots have worked alongside humans in factories for decades (FA, OA), AI doctors work in hospitals (Dr. Watson), and AI speakers (GiGA Genie) and AI assistants (Siri, Bixby, Google Assistant) continue to improve natural language processing. To understand AI, knowledge of mathematics has now become essential, not optional, and mathematicians have been given the role of explaining the mathematics that makes AI possible. The authors therefore wrote the textbook 'Basic Mathematics for Artificial Intelligence', organizing the mathematical concepts and tools needed to understand AI and machine learning into one or two semesters, and offered lectures for undergraduate and graduate students of various majors exploring careers in artificial intelligence. In this paper, we share our experience of conducting this class; the full contents are available at http://matrix.skku.ac.kr/math4ai/.

A Service Model Development Plan for Countering Denial of Service Attacks based on Artificial Intelligence Technology (인공지능 기술기반의 서비스거부공격 대응 위한 서비스 모델 개발 방안)

  • Kim, Dong-Maeong; Jo, In-June
    • The Journal of the Korea Contents Association / v.21 no.2 / pp.587-593 / 2021
  • In this thesis, we move beyond the classic DDoS response systems for large-scale denial-of-service attacks, which evolve day by day, and propose a service model development plan that can effectively withstand intelligent denial-of-service attacks by utilizing artificial intelligence, one of the core technologies of the 4th industrial revolution. Specifically, we propose a method to detect denial-of-service attacks and minimize damage through machine learning on a large amount of data collected from multiple security devices and web servers. The model exploits the fact that, while normal traffic repeats a regular pattern of change and transmits data in a stable flow, a denial-of-service attack exhibits a different pattern of data flow. When an attack occurs, a deviation arises between the probability-based predicted traffic and the actual traffic, so the traffic can be judged to be attack data and responded to accordingly. In this paper, a denial-of-service attack detection model is explained by analyzing data based on logs generated from security equipment and servers.
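The deviation-based detection idea can be sketched minimally: predict the next traffic value from recent history and flag an attack when the observed value deviates beyond a threshold. A moving average stands in for the learned model, and the numbers and threshold are illustrative, not from real security logs.

```python
# Flag time steps where observed traffic deviates from the predicted value
# beyond a threshold, mimicking the prediction-deviation detection idea.

def detect_attacks(traffic, window=3, threshold=150.0):
    """Return time indices where |observed - predicted| exceeds the threshold."""
    alerts = []
    for t in range(window, len(traffic)):
        predicted = sum(traffic[t - window:t]) / window  # stand-in predictor
        if abs(traffic[t] - predicted) > threshold:
            alerts.append(t)
    return alerts

# Requests per second; the burst at index 5 mimics a denial-of-service spike.
traffic = [100, 105, 98, 102, 101, 400, 97]
alerts = detect_attacks(traffic)
```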

A Study on the Explainability of Inception Network-Derived Image Classification AI Using National Defense Data (국방 데이터를 활용한 인셉션 네트워크 파생 이미지 분류 AI의 설명 가능성 연구)

  • Kangun Cho
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.256-264 / 2024
  • Over the last 10 years, AI has made rapid progress, and image classification in particular shows excellent performance based on deep learning. Nevertheless, because deep learning behaves as a black box, its lack of explainability makes it difficult to use in critical decision-making situations such as national defense, autonomous driving, medical care, and finance. To overcome these limitations, this study applies a model explanation algorithm capable of local interpretation to Inception-derived networks, analyzing on what grounds they classify national defense data. Specifically, we perform LIME analysis on the Inception v2_resnet model, comparatively analyze explainability based on confidence values, and verify the similarity between human interpretations and LIME explanations. Furthermore, by comparing the LIME explanations of the Top-1 outputs of the Inception v3, Inception v2_resnet, and Xception models, we confirm the feasibility of comparing the efficiency and availability of deep learning networks using XAI.
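A much-simplified sketch of the LIME intuition for images: switch segments of the input on and off, query the black-box classifier, and score each segment by how much its presence shifts the model's confidence. Real LIME fits a locally weighted linear model over such perturbations; the classifier below is a deterministic stand-in, not an Inception network.

```python
# Score image segments by the mean confidence difference between perturbed
# inputs that include the segment and those that exclude it.
import random

def black_box(mask):
    """Stand-in classifier: confidence driven mainly by segments 0 and 2."""
    return 0.6 * mask[0] + 0.1 * mask[1] + 0.3 * mask[2]

def segment_importance(model, n_segments, n_samples=500, seed=0):
    """Mean confidence with each segment present minus with it absent."""
    rng = random.Random(seed)
    on = [[] for _ in range(n_segments)]
    off = [[] for _ in range(n_segments)]
    for _ in range(n_samples):
        mask = [rng.randint(0, 1) for _ in range(n_segments)]
        y = model(mask)
        for s in range(n_segments):
            (on[s] if mask[s] else off[s]).append(y)
    return [sum(on[s]) / len(on[s]) - sum(off[s]) / len(off[s])
            for s in range(n_segments)]

scores = segment_importance(black_box, n_segments=3)
```

With enough samples the scores recover each segment's true influence, which is what lets a LIME-style explanation be compared against human interpretation.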

Explainable Artificial Intelligence (XAI) Surrogate Models for Chemical Process Design and Analysis (화학 공정 설계 및 분석을 위한 설명 가능한 인공지능 대안 모델)

  • Yuna Ko; Jonggeol Na
    • Korean Chemical Engineering Research / v.61 no.4 / pp.542-549 / 2023
  • With the growing interest in surrogate modeling, there has been continuous research on simulating nonlinear chemical processes using data-driven machine learning. However, the opaque nature of machine learning models, which limits their interpretability, poses a challenge for practical application in industry. Therefore, this study analyzes chemical processes using explainable artificial intelligence (XAI), which improves interpretability while preserving model accuracy. While conventional sensitivity analysis of chemical processes has been limited to calculating and ranking the sensitivity indices of variables, we propose a methodology that uses XAI not only to perform global and local sensitivity analysis but also to examine the interactions among variables and extract physical insights from the data. For the ammonia synthesis process, the target of our case study, we set the temperature of the preheater feeding the first reactor and the split ratios of the cold shot to the three reactors as the process variables. By integrating Matlab and Aspen Plus, we obtained data on ammonia production and the maximum temperatures of the three reactors while systematically varying the process variables. We then trained tree-based models and performed sensitivity analysis with the SHAP technique, one of the XAI methods, on the most accurate model. The global sensitivity analysis showed that the preheater temperature had the greatest effect, and the local sensitivity analysis provided insights for defining ranges of the process variables that improve productivity and prevent overheating. By constructing surrogate models of chemical processes and using XAI for sensitivity analysis, this work provides both quantitative and qualitative feedback for process optimization.
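The surrogate-plus-sensitivity workflow can be sketched with a stand-in response surface. The study couples Aspen Plus with Matlab and applies SHAP to tree models; the analytic function, variable ranges, and one-at-a-time scan below are illustrative assumptions, not real process physics.

```python
# Rank process variables by how much each one alone can move the output,
# scanning each variable over its range while holding the others at nominal.

def ammonia_yield(temp, split1, split2):
    """Hypothetical response surface for production vs. process variables."""
    return (50.0 + 0.08 * (temp - 400.0)
            - 30.0 * (split1 - 0.3) ** 2
            - 10.0 * (split2 - 0.4) ** 2)

def oat_sensitivity(func, nominal, ranges, steps=21):
    """Output span when varying one variable at a time, others at nominal."""
    spans = {}
    for name, (lo, hi) in ranges.items():
        vals = []
        for k in range(steps):
            point = dict(nominal)
            point[name] = lo + (hi - lo) * k / (steps - 1)
            vals.append(func(**point))
        spans[name] = max(vals) - min(vals)
    return spans

nominal = {"temp": 420.0, "split1": 0.3, "split2": 0.4}
ranges = {"temp": (380.0, 460.0), "split1": (0.1, 0.5), "split2": (0.2, 0.6)}
spans = oat_sensitivity(ammonia_yield, nominal, ranges)
```

In this toy setup the preheater temperature dominates the output span, mirroring the kind of global ranking the SHAP analysis in the abstract produces.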

Content protocol as seen in Kommunikeme (코무니케메 원망법으로 본 콘텐츠 프로토콜)

  • Kim, Hyo yeun
    • Proceedings of the Korea Contents Association Conference / 2017.05a / pp.185-186 / 2017
  • The discovery of the chiral soliton in February 2017 made quaternary (base-4) devices feasible, and they are drawing attention as a new device class for the age of artificial intelligence, going beyond the limits of the binary devices of the computer era. Flusser's Kommunikologie (communication studies) offers a philosophical discourse and framework for communication. A communication-theoretic reflection on content protocols can philosophically explain the possibility of the quaternary system discovered in physics and can offer new ideas for solving problems in physics. This can serve as an interdisciplinary research model for content-based research.


Research Trends in Functional Encryption Technology (함수암호 기술 연구 동향)

  • Seo, Minhye
    • Review of KIISC / v.32 no.1 / pp.31-38 / 2022
    • 2022
  • 함수암호(functional encryption)는 프라이버시를 보호하면서 암호화된 데이터에 대한 연산을 수행할 수 있는 진화된 형태의 암호 기술이다. 비밀키를 가진 수신자에게 평문을 전부 제공하는 기존의 암호와 달리, 함수암호는 특정 연산에 대응하는 비밀키를 가진 수신자에게 평문에 대한 연산 결과만을 제공하기 때문에 데이터에 대한 유연한(fine-grained) 접근 제어가 가능하다. 인공지능과 같은 4차 산업혁명 시대의 대표 기술들은 데이터의 활용을 기반으로 하지만 이 과정에서 데이터 노출로 인한 사용자 프라이버시 침해 문제가 발생할 수 있다. 함수암호는 이러한 문제를 해결할 수 있는 기술로써, 프라이버시 보호와 데이터 경제 활성화를 위한 기반 기술로 활용될 수 있다. 본 논문에서는 함수암호 기술에 대한 개념을 설명하고 관련 연구 동향을 소개한다.