• Title/Summary/Keyword: explainable artificial intelligence


A review of Explainable AI Techniques in Medical Imaging (의료영상 분야를 위한 설명가능한 인공지능 기술 리뷰)

  • Lee, DongEon; Park, ChunSu; Kang, Jeong-Woon; Kim, MinWoo
    • Journal of Biomedical Engineering Research, v.43 no.4, pp.259-270, 2022
  • Artificial intelligence (AI) has been studied in various fields of medical imaging. Currently, top-notch deep learning (DL) techniques have led to high diagnostic accuracy and fast computation. However, they are rarely used in real clinical practice because of a lack of reliability concerning their results. Most DL models can achieve high performance by extracting features from large volumes of data. However, increasing model complexity and nonlinearity turn such models into black boxes that are seldom accessible, interpretable, or transparent. As a result, scientific interest in the field of explainable artificial intelligence (XAI) is gradually emerging. This study aims to review diverse XAI approaches currently exploited in medical imaging. We identify the concepts of the methods, introduce studies applying them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and lastly discuss the limitations and challenges XAI faces in future studies.
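
For readers unfamiliar with the techniques such reviews cover, the sketch below shows one representative XAI method, Grad-CAM, applied to an image classifier. It is a minimal illustration, not the review's own code: the ResNet-18 backbone, the hooked layer, and the random tensor standing in for a preprocessed CT/MRI slice are all assumptions.

```python
# Minimal Grad-CAM sketch: a heat map of the image regions the CNN relied on.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # load trained weights in practice

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

model.layer4.register_forward_hook(fwd_hook)        # last conv block
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                     # stand-in image
scores = model(x)
scores[0, scores.argmax()].backward()               # gradient of top class

# Weight each feature map by its average gradient, ReLU, upsample, normalize.
w = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # (1, 1, 224, 224) saliency map over the input
```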

A Study on Human-AI Collaboration Process to Support Evidence-Based National Innovation Monitoring: Case Study on Ministry of Oceans and Fisheries (Human-AI 협력 프로세스 기반의 증거기반 국가혁신 모니터링 연구: 해양수산부 사례)

  • Jung Sun Lim; Seoung Hun Bae; Kil-Ho Ryu; Sang-Gook Kim
    • Journal of Korean Society of Industrial and Systems Engineering, v.46 no.2, pp.22-31, 2023
  • Governments around the world are enacting laws mandating explainable traceability when using AI (Artificial Intelligence) to solve real-world problems. HAI (Human-Centric Artificial Intelligence) is an approach that induces human decision-making through Human-AI collaboration. This research presents a case study that implements Human-AI collaboration to achieve explainable traceability in governmental data analysis. The Human-AI collaboration explored in this study performs AI inference to generate labels, followed by AI interpretation to make the results more explainable and traceable. The study used an example dataset from the Ministry of Oceans and Fisheries to reproduce the Human-AI collaboration process used in actual policy-making, in which the Ministry of Science and ICT used R&D PIE (R&D Platform for Investment and Evaluation) to build a government investment portfolio.
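
The label-then-explain loop described in the abstract can be illustrated with a small, hypothetical sketch: a model proposes labels, SHAP exposes the per-feature evidence behind each label, and low-confidence records are routed to a human reviewer. The dataset, model choice, and 0.1 review threshold are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

proba = model.predict_proba(X)[:, 1]
labels = (proba >= 0.5).astype(int)        # AI-generated labels

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)     # per-feature evidence behind each label

# Traceability step: route low-confidence predictions to a human reviewer.
needs_review = np.where(np.abs(proba - 0.5) < 0.1)[0]
print(f"{len(needs_review)} of {len(labels)} records flagged for human review")
```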

Development of ensemble machine learning model considering the characteristics of input variables and the interpretation of model performance using explainable artificial intelligence (수질자료의 특성을 고려한 앙상블 머신러닝 모형 구축 및 설명가능한 인공지능을 이용한 모형결과 해석에 대한 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater, v.36 no.4, pp.239-248, 2022
  • The prediction of algal blooms is an important field of study in algal bloom management, and chlorophyll-a concentration (Chl-a) is commonly used to represent the status of an algal bloom. In recent years, advanced machine learning algorithms have been increasingly used to predict algal blooms. In this study, XGBoost (XGB), an ensemble machine learning algorithm, was used to develop a model to predict Chl-a in a reservoir. Daily observations of water quality and climate data were used to train and test the model. In the first step of the study, the input variables were clustered into two groups (low- and high-value groups) based on the observed values of water temperature (TEMP), total organic carbon concentration (TOC), total nitrogen concentration (TN), and total phosphorus concentration (TP). For each of the four water quality items, two XGB models were developed using only the data in each clustered group (Model 1). The results were compared to the predictions of an XGB model developed using the entire dataset before clustering (Model 2). Model performance was evaluated using three indices, including the root mean squared error-observation standard deviation ratio (RSR). Performance improved under Model 1 for TEMP, TN, and TP, with RSR values of 0.503, 0.477, and 0.493, respectively, versus 0.521 for Model 2. On the other hand, Model 2 showed better performance than Model 1 for TOC, with an RSR of 0.532. Explainable artificial intelligence (XAI) is an ongoing field of research in machine learning. Shapley value analysis, a novel XAI algorithm, was also used for the quantitative interpretation of the XGB models developed in this study.
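
The RSR metric and the grouped-model idea (Model 1 vs. Model 2) can be sketched as follows. The synthetic data, the single clustering variable, and the XGBoost settings are assumptions for illustration only; RSR itself is simply RMSE divided by the standard deviation of the observations.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

def rsr(obs, pred):
    """RSR = RMSE / standard deviation of the observations."""
    return np.sqrt(np.mean((obs - pred) ** 2)) / np.std(obs)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))   # stand-ins for TEMP, TOC, TN, TP
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=400)  # Chl-a proxy

# "Model 1": separate models for the low and high groups of one variable.
low = X[:, 0] < np.median(X[:, 0])
for name, idx in [("low", low), ("high", ~low)]:
    Xtr, Xte, ytr, yte = train_test_split(X[idx], y[idx], random_state=0)
    m = xgb.XGBRegressor(n_estimators=200).fit(Xtr, ytr)
    print(name, "group RSR:", round(rsr(yte, m.predict(Xte)), 3))
```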

Analysis of Regional Fertility Gap Factors Using Explainable Artificial Intelligence (설명 가능한 인공지능을 이용한 지역별 출산율 차이 요인 분석)

  • Dongwoo Lee; Mi Kyung Kim; Jungyoon Yoon; Dongwon Ryu; Jae Wook Song
    • Journal of Korean Society of Industrial and Systems Engineering, v.47 no.1, pp.41-50, 2024
  • Korea is facing a significant problem with historically low fertility rates, which is becoming a major social issue affecting the economy, labor force, and national security. This study analyzes the factors contributing to the regional gap in fertility rates and derives policy implications. The government and local authorities are implementing a range of policies to address low fertility; to establish an effective strategy, it is essential to identify the primary factors behind regional disparities. This study identifies these factors and explores policy implications through machine learning and explainable artificial intelligence. It also examines the influence of media and public opinion on childbirth in Korea by incorporating news and online community sentiment, as well as sentiment fear indices, as independent variables. To establish the relationship between regional fertility rates and these factors, the study employs four machine learning models: multiple linear regression, XGBoost, Random Forest, and Support Vector Regression. Support Vector Regression, XGBoost, and Random Forest significantly outperform linear regression, highlighting the value of machine learning models in capturing non-linear relationships among numerous variables. A factor analysis using SHAP is then conducted. The unemployment rate, regional gross domestic product per capita, women's participation in economic activities, number of crimes committed, average age at first marriage, and private education expenses significantly impact regional fertility rates. However, the degree of impact may vary by region, suggesting the need for policies tailored to each region's characteristics rather than a single overall ranking of factors.
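
A hedged sketch of the workflow this abstract describes follows: compare several regressors, keep the best one, and rank factors by mean |SHAP| value. The six synthetic features merely stand in for the regional indicators named above.

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=6, noise=5.0, random_state=0)
candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=0),
    "svr": SVR(),
}
scores = {k: cross_val_score(m, X, y, cv=5).mean() for k, m in candidates.items()}
best = max(scores, key=scores.get)
model = candidates[best].fit(X, y)

explainer = shap.Explainer(model.predict, X)  # model-agnostic explainer
sv = explainer(X[:100])
importance = np.abs(sv.values).mean(axis=0)   # mean |SHAP| per factor
print(best, importance.round(2))
```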

Text Based Explainable AI for Monitoring National Innovations (텍스트 기반 Explainable AI를 적용한 국가연구개발혁신 모니터링)

  • Jung Sun Lim; Seoung Hun Bae
    • Journal of Korean Society of Industrial and Systems Engineering, v.45 no.4, pp.1-7, 2022
  • Explainable AI (XAI) is an approach that leverages artificial intelligence to support human decision-making. Recently, the governments of several countries, including Korea, have attempted objective, evidence-based analyses of R&D investments and their returns by analyzing quantitative data. Over the past decade, governments have invested in relevant research, allowing officials to gain insights that help them evaluate past performance and discuss future policy directions. However, the text information accumulated in national databases remains underutilized relative to its volume. The current study applies a text mining strategy for monitoring innovations, along with a case study of smart farms in the Honam region.
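
One minimal way to make text-based monitoring explainable, in the spirit of this abstract, is a TF-IDF representation with a linear classifier whose signed term weights trace which words drive a flag. The toy corpus and the smart-farm label below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "smart farm sensor network for greenhouse control",
    "aquaculture feed optimization in coastal farms",
    "quantum error correction for superconducting qubits",
    "smart farm irrigation scheduling with sensor data",
]
labels = [1, 1, 0, 1]  # 1 = smart-farm-related innovation (invented)

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# Terms with the largest positive weights explain the positive flag.
terms = vec.get_feature_names_out()
weights = clf.coef_[0]
for i in weights.argsort()[::-1][:5]:
    print(terms[i], round(weights[i], 3))
```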

Application of Explainable Artificial Intelligence for Predicting Hardness of AlSi10Mg Alloy Manufactured by Laser Powder Bed Fusion (레이저 분말 베드 용융법으로 제조된 AlSi10Mg 합금의 경도 예측을 위한 설명 가능한 인공지능 활용)

  • Junhyub Jeon; Namhyuk Seo; Min-Su Kim; Seung Bae Son; Jae-Gil Jung; Seok-Jae Lee
    • Journal of Powder Materials, v.30 no.3, pp.210-216, 2023
  • In this study, machine learning models are proposed to predict the Vickers hardness of AlSi10Mg alloys fabricated by laser powder bed fusion (LPBF). A total of 113 usable datasets were collected from the literature. The hyperparameters of the machine learning models were tuned to select an accurate predictive model. The random forest regression (RFR) model showed the best performance compared with support vector regression, artificial neural networks, and k-nearest neighbors. The variable importance and prediction mechanisms of the RFR model were examined using Shapley additive explanations (SHAP). Aging time had the greatest influence on Vickers hardness, followed by solution time, solution temperature, layer thickness, scan speed, power, aging temperature, average particle size, and hatching distance. The detailed prediction mechanisms of the RFR model are analyzed using SHAP dependence plots.
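
The RFR-plus-SHAP workflow reads roughly as sketched below: fit a random forest on process and heat-treatment parameters, rank variables by mean |SHAP|, and inspect dependence plots. The synthetic features and hardness values are assumptions; only the dataset size (113) comes from the abstract.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 113
X = rng.uniform(size=(n, 4))  # e.g., aging time, solution time, solution temp, layer thickness
y = 120 + 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(scale=2, size=n)  # hardness proxy

model = RandomForestRegressor(random_state=0).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)

print(np.abs(sv).mean(axis=0).round(2))  # mean |SHAP|: variable importance
# shap.dependence_plot(0, sv, X)         # dependence plot for one variable
```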

A Big Data-Driven Business Data Analysis System: Applications of Artificial Intelligence Techniques in Problem Solving

  • Donggeun Kim; Sangjin Kim; Juyong Ko; Jai Woo Lee
    • The Journal of Bigdata, v.8 no.1, pp.35-47, 2023
  • It is crucial to develop effective and efficient big data analytics methods for problem-solving in business, both to improve the performance of data analytics and to reduce the costs and risks of analyzing customer data. In this study, a big data-driven analysis system using artificial intelligence techniques is designed to increase the accuracy of big data analytics amid the rapid growth of the field of data science. We present a key direction for big data analysis systems covering missing-value imputation, outlier detection, feature extraction, the use of explainable artificial intelligence techniques, and exploratory data analysis. Our objective is not only to develop big data analysis techniques for the complex structures of business data but also to bridge the gap between theoretical ideas in artificial intelligence methods and the analysis of real-world business data.
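
The preprocessing steps the abstract names (missing-value imputation, outlier detection, feature extraction) compose naturally into a single pipeline. The sketch below uses scikit-learn components as stand-ins, with data and parameters invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[rng.integers(0, 200, 20), rng.integers(0, 8, 20)] = np.nan  # missing values

prep = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # missing-value imputation
    ("extract", PCA(n_components=3)),              # feature extraction
])
Z = prep.fit_transform(X)

flags = IsolationForest(random_state=0).fit_predict(Z)  # outlier detection
print("flagged outliers:", int((flags == -1).sum()))
```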

Transforming Patient Health Management: Insights from Explainable AI and Network Science Integration

  • Mi-Hwa Song
    • International Journal of Internet, Broadcasting and Communication, v.16 no.1, pp.307-313, 2024
  • This study explores the integration of explainable artificial intelligence (XAI) and network science in healthcare, focusing on enhancing the interpretation of healthcare data and improving diagnostic and treatment methods. Key methodologies, such as graph neural networks, community detection, overlapping network models, and time-series network analysis, are examined in depth for their potential in patient health management. The research highlights the transformative role of XAI in making complex AI models transparent and interpretable, which is essential for accurate, data-driven decision-making in healthcare. Case studies demonstrate the practical application of these methodologies in predicting diseases, understanding drug interactions, and tracking patient health over time. The study concludes that, despite existing challenges, these advancements hold immense promise for healthcare, and it underscores the need for ongoing research to fully realize the potential of AI in this field.
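
Of the methodologies listed, community detection is the easiest to show compactly. The sketch below runs greedy modularity maximization on a stock toy graph standing in for a patient-similarity network; it illustrates the technique, not the study's implementation.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()  # placeholder for a patient-similarity graph
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```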

Explainable radionuclide identification algorithm based on the convolutional neural network and class activation mapping

  • Yu Wang; Qingxu Yao; Quanhu Zhang; He Zhang; Yunfeng Lu; Qimeng Fan; Nan Jiang; Wangtao Yu
    • Nuclear Engineering and Technology, v.54 no.12, pp.4684-4692, 2022
  • Radionuclide identification is an important part of nuclear material identification systems. The development of artificial intelligence and machine learning has made nuclide identification rapid and automatic. However, many methods directly apply existing deep learning models to analyze the gamma-ray spectrum, which lacks interpretability for researchers. This study proposes an explainable radionuclide identification algorithm based on a convolutional neural network and class activation mapping. The method shows the neural network's area of interest on the gamma-ray spectrum by generating a class activation map. We analyzed class activation maps for gamma-ray spectra of different types, gross counts, and signal-to-noise ratios. The results show that the convolutional neural network attempted to learn the relationship between the input gamma-ray spectrum and the nuclide type, and could identify nuclides based on the photoelectric peak and Compton edge. Furthermore, the results explain why the network could identify gamma-ray spectra with low counts and low signal-to-noise ratios. These findings improve researchers' confidence in the ability of neural networks to identify nuclides and promote the application of artificial intelligence methods in the field of nuclide identification.
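
Class activation mapping for a 1-D spectrum can be sketched as follows: a small CNN ends in global average pooling, so the chosen class's linear weights project back onto the spectrum channels. The architecture, channel count, and random input are assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrumCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.fc = nn.Linear(32, n_classes)  # applied after global average pooling

    def forward(self, x):
        fmaps = self.features(x)             # (B, 32, L)
        logits = self.fc(fmaps.mean(dim=2))  # GAP, then linear
        return logits, fmaps

model = SpectrumCNN().eval()
spectrum = torch.randn(1, 1, 1024)           # stand-in 1024-channel spectrum
logits, fmaps = model(spectrum)
cls = logits.argmax(dim=1).item()

# CAM: weight each feature map by the class's linear weight, then normalize.
cam = F.relu(torch.einsum("c,bcl->bl", model.fc.weight[cls], fmaps))
cam = cam / (cam.max() + 1e-8)
print(cam.shape)  # (1, 1024): per-channel attention, e.g., photopeak regions
```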

An Exploratory Approach to Discovering Salary-Related Wording in Job Postings in Korea

  • Ha, Taehyun; Coh, Byoung-Youl; Lee, Mingook; Yun, Bitnari; Chun, Hong-Woo
    • Journal of Information Science Theory and Practice, v.10 no.spc, pp.86-95, 2022
  • Online recruitment websites host job postings across various fields, and these postings contain detailed job specifications. Analyzing this text can elucidate the features that determine job salaries. Text embedding models can learn contextual information in text, and explainable artificial intelligence frameworks can be used to examine in detail how text features contribute to a model's outputs. We collected 733,625 job postings using the WORKNET API and classified them into low-, mid-, and high-range salary groups. A text embedding model that predicts job salaries from the text of job postings was trained on the collected data. We then applied the SHapley Additive exPlanations (SHAP) framework to the trained model and identified the significant words that characterize each salary class. Several limitations and remaining issues are also discussed.
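
The SHAP-over-text-features idea can be illustrated minimally: a TF-IDF model predicts a salary class, and SHAP scores individual words. The four postings below are invented for illustration; the study itself used 733,625 WORKNET postings.

```python
import numpy as np
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "senior engineer stock options flexible hours",
    "entry level assistant hourly shift work",
    "lead architect equity bonus remote work",
    "junior clerk part time retail counter",
]
salary_class = [1, 0, 1, 0]  # 1 = high-range, 0 = low-range (illustrative)

vec = TfidfVectorizer()
X = vec.fit_transform(posts).toarray()
clf = LogisticRegression().fit(X, salary_class)

sv = shap.LinearExplainer(clf, X).shap_values(X)
words = vec.get_feature_names_out()
top = np.abs(sv).mean(axis=0).argsort()[::-1][:5]
print([words[i] for i in top])  # words most tied to the salary class
```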