• Title/Summary/Keyword: Explainable AI

Search results: 55

AI-Enabled Business Models and Innovations: A Systematic Literature Review

  • Taoer Yang;Aqsa;Rafaqat Kazmi;Karthik Rajashekaran
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.6 / pp.1518-1539 / 2024
  • Artificial intelligence-enabled business models aim to improve decision-making, operational efficiency, innovation, and productivity. This systematic literature review elucidates the AI methods and techniques used within AI-enabled businesses, the significance and functions of AI-enabled organizational models and frameworks, and the design parameters employed in academic research in the AI-enabled business domain. We reviewed 39 empirical studies published between 2010 and 2023, classified by AI business technique, empirical research design, and SLR search protocol criteria. According to the findings, machine learning and artificial intelligence were reported as popular methods for business process modelling in 19% of the studies. Healthcare was the business domain most often used for empirical evaluation, appearing in 28% of the primary studies. The most common reason for using artificial intelligence in businesses was to improve business intelligence. 51% of the primary studies were reported as experiments, and 53% followed experimental guidelines and were repeatable. For the design of business process modelling, eighteen AI methodologies were identified, along with seven types of AI modelling goals and principles for organisations. For AI-enabled business models, safety, security, and privacy are key societal concerns. The growth of AI is shaping novel forms of business.

Explainable Machine Learning Based a Packed Red Blood Cell Transfusion Prediction and Evaluation for Major Internal Medical Condition

  • Lee, Seongbin;Lee, Seunghee;Chang, Duhyeuk;Song, Mi-Hwa;Kim, Jong-Yeup;Lee, Suehyun
    • Journal of Information Processing Systems / v.18 no.3 / pp.302-310 / 2022
  • Efficient use of limited blood products is becoming very important for both socioeconomic reasons and patient recovery. To predict the appropriateness of patient-specific transfusions for intensive care unit (ICU) patients who require real-time monitoring, we evaluated a model that dynamically predicts the possibility of transfusion using the Medical Information Mart for Intensive Care III (MIMIC-III), an ICU admission record database at Harvard Medical School. In this study, we developed an explainable machine learning model to predict the possibility of red blood cell transfusion for major medical diseases in the ICU. Target disease groups that received packed red blood cell transfusions at high frequency were selected, and 16,222 patients were finally extracted. The prediction model achieved an area under the ROC curve of 0.9070 and an F1-score of 0.8166 (LightGBM). To explain the behavior of the machine learning model, feature importance analysis and a partial dependence plot were used. The results of our study can serve as basic data for recommendations on the adequacy of blood transfusions and are ultimately expected to contribute to patient recovery and the prevention of excessive consumption of blood products.

IF2bNet: An Optimized Deep Learning Architecture for Fire Detection Based on Explainable AI (IF2bNet: 화재 감지를 위한 설명 가능 AI 기반 최적화된 딥러닝 아키텍처)

  • Won Jin;Mi-Hwa Song
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.719-720 / 2024
  • To support the role of sensor-based automatic fire detection systems, AI fire-monitoring equipment based on convolutional neural networks has been studied. Algorithms used for AI-based fire detection rely mainly on transfer learning, which may embed processes that contribute little to fire detection and can add to the complexity of deep learning models. In this study, we analyzed various deep learning and interpretation techniques to reduce this model complexity and, based on the analysis, propose "IF2bNet", an architecture optimized for fire detection. A performance comparison showed that the implemented architecture delivers the same performance while reducing the number of parameters to roughly one tenth, mitigating the complexity.

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.142-151 / 2022
  • In this paper, a model combined with explainable artificial intelligence (XAI) is presented to secure the reliability of machine learning-based sentiment analysis and prediction. The applicability of the proposed model was tested and described using the IMDB dataset. This approach has the advantage that it can explain, from various perspectives, how the data affect the model's prediction results. In various applications of sentiment analysis, such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, presenting more specific, evidence-based analysis results can earn the trust of the system's users.

Bankruptcy Prediction with Explainable Artificial Intelligence for Early-Stage Business Models

  • Tuguldur Enkhtuya;Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication / v.15 no.3 / pp.58-65 / 2023
  • Bankruptcy is a significant risk for start-up companies, but with the help of cutting-edge artificial intelligence technology, we can now predict bankruptcy with detailed explanations. In this paper, we implemented the Category Boosting algorithm following data cleaning and editing using OpenRefine. We further explained our model using the Shapash library, incorporating domain knowledge. By leveraging the 5C's credit domain knowledge, financial analysts in banks or investors can utilize the detailed results provided by our model to enhance their decision-making processes, even without extensive knowledge about AI. This empowers investors to identify potential bankruptcy risks in their business models, enabling them to make necessary improvements or reconsider their ventures before proceeding. As a result, our model serves as a "glass-box" model, allowing end-users to understand which specific financial indicators contribute to the prediction of bankruptcy. This transparency enhances trust and provides valuable insights for decision-makers in mitigating bankruptcy risks.

Efficient Gait Data Selection Using Explainable AI (해석 가능한 인공지능을 이용한 보행 데이터의 효율적인 선택)

  • Choi, Young-Chan;Tae, Min-Woo;Choi, Sang-Il
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.315-316 / 2022
  • This paper proposes applying Grad-CAM, an explainable AI method, to convolutional neural network models that use pressure data from smart insoles. We propose a method that applies Grad-CAM to each trained model to identify which pressure sensors play an important role in the model and which do not, train a model on each dataset, and use the trained models to examine which pressure sensors are actually important and which are not.
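The core of Grad-CAM, as this abstract applies it to insole-pressure CNNs, is a gradient-weighted sum of a convolutional layer's activation maps followed by a ReLU. A minimal sketch of just that computation, with random arrays standing in for a real network's activations and gradients:

```python
# Core Grad-CAM computation: weight each activation map by the global average
# of its gradient, sum the maps, then ReLU. Random arrays stand in for a real
# CNN's activations/gradients on smart-insole pressure data.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((16, 8, 8))          # K feature maps of size H x W
gradients = rng.standard_normal((16, 8, 8))   # d(class score)/d(activations)

# alpha_k: global-average-pooled gradient per feature map.
alpha = gradients.mean(axis=(1, 2))           # shape (16,)

# Weighted combination of feature maps; ReLU keeps only positive evidence.
cam = np.maximum(0.0, np.tensordot(alpha, activations, axes=1))  # shape (8, 8)

# Normalize to [0, 1] so the map reads as per-region (sensor) importance.
cam = cam / cam.max()
print("heatmap shape:", cam.shape)
```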

Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education / v.62 no.3 / pp.435-455 / 2023
  • This study primarily focused on the development of an Explainable Artificial Intelligence (XAI) model to discern and analyze papers with significant impact in the field of mathematics education. To achieve this, meta-information from 29 domestic and international mathematics education journals was utilized to construct a comprehensive academic research network in mathematics education. This academic network was built by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. The Random Forest machine learning model was employed to evaluate the impact of individual papers within the mathematics education research network. SHAP, an XAI technique, was used to analyze the reasons behind the AI's assessment of impactful papers. Key features identified through XAI for determining impactful papers in the field of mathematics education included 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal'. It became evident that papers, authors, and journals all play significant roles when evaluating individual papers. When comparing domestic and international mathematics education research, variations in these discernment patterns were observed; notably, 'co-authorship network PageRank' carried particular weight in domestic mathematics education research. The XAI model proposed in this study serves as a tool for determining the impact of papers using AI, providing researchers with strategic direction when writing papers. For instance, expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements enhancing the impact of a paper. Based on these findings, researchers can gain a clear understanding of how their work is perceived and evaluated in academia and identify the key factors influencing these evaluations. This study offers a novel approach to evaluating the impact of mathematics education papers using an explainable AI model, a process that has traditionally consumed significant time and resources. This approach not only presents a new paradigm that can be applied to evaluations in academic fields beyond mathematics education but is also expected to substantially enhance the efficiency and effectiveness of research activities.
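Several of the key features named above ('paper network PageRank', 'co-authorship network PageRank') are standard PageRank scores computed on the constructed networks. A minimal power-iteration sketch on an invented five-paper citation network, not the study's actual 29-journal network:

```python
# Power-iteration PageRank on a tiny invented citation network, illustrating
# the 'paper network PageRank' feature fed to the study's Random Forest model.
import numpy as np

# adjacency[i][j] = 1 means paper i cites paper j.
adjacency = np.array([[0, 1, 1, 0, 0],
                      [0, 0, 1, 0, 0],
                      [0, 0, 0, 1, 1],
                      [0, 0, 1, 0, 1],
                      [1, 0, 0, 0, 0]], dtype=float)

n = adjacency.shape[0]
damping = 0.85

# Column-stochastic transition matrix: each paper spreads its score
# evenly over the papers it cites.
out_degree = adjacency.sum(axis=1, keepdims=True)
transition = (adjacency / out_degree).T

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * transition @ rank

print("PageRank scores:", np.round(rank, 3))
print("most influential paper:", rank.argmax())  # paper 2, the most cited
```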

Development of a Ship Engine Anomaly Detection System Using XAI (Explainable AI) Techniques (XAI(Explainable AI) 기법을 이용한 선박기관 이상탐지 시스템 개발)

  • Habtemariam Duguma Yeshitla;Agung Nugraha;Antariksa Gian
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2022.11a / pp.289-290 / 2022
  • This study introduces a system that detects anomalies in a ship's main engine, a critical component of the ship, using sensor data collected from the engine. A distinctive strength of the system is that, beyond detecting anomalies, it quantifies each sensor's contribution to an anomaly, making it possible to categorize anomaly occurrences and enabling further analysis. In addition, a convenient web-interface UI was developed so that users can more conveniently …

Exploration of Factors on Pre-service Science Teachers' Major Satisfaction and Academic Satisfaction Using Machine Learning and Explainable AI SHAP (머신러닝과 설명가능한 인공지능 SHAP을 활용한 사범대 과학교육 전공생의 전공만족도 및 학업만족도 영향요인 탐색)

  • Jibeom Seo;Nam-Hwa Kang
    • Journal of Science Education / v.47 no.1 / pp.37-51 / 2023
  • This study explored the factors influencing the major satisfaction and academic satisfaction of science education majors at a college of education, using two machine learning models, random forest and gradient boosting, together with the explainable AI technique SHAP. The analysis showed that the gradient boosting model performed better than the random forest, though the difference was not large. Factors influencing major satisfaction included 'satisfaction with one's high school science teacher in the subject corresponding to one's major', 'motivation for the teaching profession', and 'age'. SHAP values quantified the influence of the variables, yielding results both for the group as a whole and for individual students, and the comprehensive and individual results proved complementary to each other. Based on these results, implications for ways to support pre-service science teachers' major and academic satisfaction were proposed.
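The modelling step this abstract describes (fit a random forest and a gradient boosting model, compare them, then attribute predictions to input variables) can be sketched with scikit-learn. Synthetic data replaces the survey, and permutation importance stands in for SHAP attributions, which require the separate `shap` package:

```python
# Sketch of the study's setup: compare a random forest and a gradient
# boosting model, then rank input variables by influence. Synthetic data
# replaces the survey; permutation importance stands in for SHAP.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=6, n_informative=3,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestRegressor(random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
print("R^2 per model:", {k: round(v, 3) for k, v in scores.items()})

# Rank variables by how much shuffling each one degrades the better model.
best = models[max(scores, key=scores.get)]
result = permutation_importance(best, X_te, y_te, n_repeats=10, random_state=0)
order = result.importances_mean.argsort()[::-1]
print("variables ranked by influence:", order.tolist())
```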

A Study on XAI-based Clinical Decision Support System (XAI 기반의 임상의사결정시스템에 관한 연구)

  • Ahn, Yoon-Ae;Cho, Han-Jin
    • The Journal of the Korea Contents Association / v.21 no.12 / pp.13-22 / 2021
  • A clinical decision support system applies an AI model, trained by machine learning on accumulated medical data, to patient diagnosis and treatment prediction. However, existing black-box AI applications provide no valid reason for the results the system predicts, so they lack explainability. To compensate for this, this paper proposes a system model that incorporates XAI at the development stage of the clinical decision support system. The proposed model supplements the limitations of the black box by applying a specific, explainable XAI technique on top of the existing AI model. To show how the proposed model is applied, we present an example using LIME and SHAP. Through testing, it becomes possible to explain, from various perspectives, how the data affect the model's prediction results. The proposed model has the advantage of increasing users' trust by presenting them with concrete reasons for each result. Moreover, the active use of XAI is expected to overcome the limitations of existing clinical decision support systems and enable better diagnosis and decision support.
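LIME, one of the two techniques the paper applies, explains a single prediction by fitting a proximity-weighted linear surrogate to the black-box model around that instance. A minimal tabular sketch of that idea under invented data; the real clinical model and features are not shown in the abstract, and the `lime` package itself is not used here:

```python
# Minimal LIME-style local surrogate: perturb one instance, query the
# black-box model, and fit a proximity-weighted linear model whose
# coefficients explain that single prediction. All data here is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for a trained clinical model: depends mostly on feature 0.
    return 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2

instance = np.array([1.0, -0.5, 2.0])  # the patient record to explain

# 1. Sample perturbations around the instance.
samples = instance + 0.3 * rng.standard_normal((500, 3))
# 2. Weight samples by proximity to the instance (Gaussian kernel).
dist = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-(dist ** 2) / 0.5)
# 3. Fit the interpretable surrogate to the black box's answers.
surrogate = LinearRegression().fit(samples, black_box(samples),
                                   sample_weight=weights)

print("local coefficients:", np.round(surrogate.coef_, 2))
print("feature 0 dominates locally:", np.abs(surrogate.coef_).argmax() == 0)
```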