• Title/Summary/Keyword: SHAP model


A Data-Driven Causal Analysis on Fatal Accidents in Construction Industry (건설 사고사례 데이터 기반 건설업 사망사고 요인분석)

  • Jiyoon Choi;Sihyeon Kim;Songe Lee;Kyunghun Kim;Sudong Lee
    • Journal of the Korea Safety Management & Science / v.25 no.3 / pp.63-71 / 2023
  • The construction industry stands out for its higher incidence of accidents compared to other sectors, and effective prevention requires a causal analysis of those accidents. In this study, we propose a data-driven causal analysis to identify significant factors in fatal construction accidents. We collected 14,318 cases of structured and text data on construction accidents from the Construction Safety Management Integrated Information (CSI). We first use statistical analysis to examine the patterns of the variables in the collected dataset and their correlations with fatal accidents. Machine learning algorithms are then employed to develop a classification model for fatal accidents, and integrating SHAP (SHapley Additive exPlanations) allows the root causes driving fatal incidents to be identified. The results reveal the significant factors and keywords that most strongly influence fatal accidents in construction contexts.
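
A minimal sketch of the classifier-plus-SHAP step described above, assuming the `xgboost` and `shap` packages; the data, features, and model settings are illustrative stand-ins, not the paper's CSI dataset:

```python
# Train a fatal-accident classifier on tabular features, then rank features
# by mean absolute SHAP value. Data are synthetic placeholders.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))   # stand-ins for e.g. work type, height, weather, hour
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Global importance: mean |SHAP| per feature, largest first.
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1]:
    print(f"feature_{i}: {importance[i]:.4f}")
```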

Who Gets Government SME R&D Subsidy? Application of Gradient Boosting Model (Gradient Boosting 모형을 이용한 중소기업 R&D 지원금 결정요인 분석)

  • Kang, Sung Won;Kang, HeeChan
    • The Journal of Society for e-Business Studies / v.25 no.4 / pp.77-109 / 2020
  • In this paper, we build a gradient boosting model to predict government SME R&D subsidies, select features of high importance, and measure the impact of each feature on the predicted subsidy using partial dependence plots (PDP) and SHAP values. Unlike previous empirical research, we focus on the effect of the R&D subsidy distribution pattern on the incentives of firms competing for subsidies. We used firm data constructed by KISTEP, which links government R&D subsidy records with financial statements provided by NICE, and applied a gradient boosting model to predict R&D subsidies. We found that firms with higher R&D performance and larger R&D investment tend to receive higher R&D subsidies, whereas firms with higher operating profit or total asset turnover tend to receive lower ones. Our results suggest that the current distribution pattern of government R&D subsidies provides an incentive to improve R&D project performance, but not business performance.
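
The PDP half of that analysis can be sketched with scikit-learn's `partial_dependence`; the firm-level features and data below are hypothetical, not the KISTEP/NICE dataset:

```python
# Partial dependence: how the predicted subsidy responds to one feature
# while the others are averaged out. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))   # stand-ins for R&D investment, operating profit, turnover
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.3, size=500)

model = GradientBoostingRegressor(n_estimators=300).fit(X, y)

# Dependence of the prediction on feature 0 (the "R&D investment" stand-in).
pd_result = partial_dependence(model, X, features=[0])
print(pd_result["average"])   # averaged predictions over the feature grid
```

A rising curve for the R&D-investment feature would be consistent with the paper's finding that larger R&D investment goes with higher predicted subsidies.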

Development of Tree Detection Methods for Estimating LULUCF Settlement Greenhouse Gas Inventories Using Vegetation Indices (식생지수를 활용한 LULUCF 정주지 온실가스 인벤토리 산정을 위한 수목탐지 방법 개발)

  • Joon-Woo Lee;Yu-Han Han;Jeong-Taek Lee;Jin-Hyuk Park;Geun-Han Kim
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1721-1730 / 2023
  • As awareness of global warming grows around the world, the role of carbon sinks in settlements is increasingly emphasized as a means of achieving carbon neutrality in urban areas. Managing carbon sinks in settlements requires identifying their current status, which demands considerable manpower, time, and budget. In this study, therefore, a map predicting tree locations was created for Seoul using existing tree location information and Sentinel-2 satellite imagery. After constructing a tree presence/absence dataset, structured data were generated from 16 vegetation indices derived from the satellite images. An Extreme Gradient Boosting (XGBoost) model was trained on these data to produce a tree prediction map. The relationship between the independent and dependent variables in model training was then investigated using Shapley values from SHapley Additive exPlanations (SHAP). Maps produced for selected parts of Seoul were compared against sub-categorized land cover maps. The tree prediction model developed in this study was confirmed to detect even hard-to-identify street trees along main roads.
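
The vegetation-index features can be sketched directly from band arithmetic; the band values, index choices, and labels below are illustrative, not the study's Seoul dataset:

```python
# Build two of the many possible vegetation indices from Sentinel-2 bands
# (B4 = red, B8 = near-infrared) and fit an XGBoost tree/no-tree classifier.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
red = rng.uniform(0.02, 0.4, size=2000)   # synthetic B4 reflectance
nir = rng.uniform(0.10, 0.6, size=2000)   # synthetic B8 reflectance

ndvi = (nir - red) / (nir + red + 1e-9)        # normalized difference vegetation index
savi = 1.5 * (nir - red) / (nir + red + 0.5)   # soil-adjusted vegetation index
X = np.column_stack([ndvi, savi])

# Synthetic labels: high-NDVI pixels are more likely to be trees.
y = (ndvi + rng.normal(scale=0.1, size=2000) > 0.4).astype(int)

clf = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
print(clf.predict(X[:5]))
```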

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research / v.61 no.4 / pp.523-541 / 2023
  • As access to 3D printers increases, exposure to the chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of in silico toxicity prediction is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four machine learning models (decision tree, random forest, XGBoost, and SVM), an ML-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). The reliability of the data-centric QSAR model was further validated with Tree-SHAP (SHapley Additive exPlanations), an explainable artificial intelligence (XAI) technique. The proposed MissForest-based imputation enlarged the molecular structure data roughly 2.5-fold compared to the existing data. On the imputed molecular descriptor dataset, the data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Finally, Tree-SHAP analysis showed that the model achieved this performance by identifying key molecular descriptors highly correlated with the toxicity indices. The proposed QSAR model based on the data-centric XAI approach can therefore be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
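
MissForest iteratively imputes each column with a random forest trained on the others; a close analogue can be sketched with scikit-learn's `IterativeImputer` (the descriptor matrix below is synthetic, and `IterativeImputer` is a stand-in for the MissForest implementation the paper uses):

```python
# MissForest-style imputation: iteratively fill missing descriptor values
# using a random forest fitted on the observed columns.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
X_true = rng.normal(size=(300, 6))           # 6 synthetic molecular descriptors
mask = rng.uniform(size=X_true.shape) < 0.2  # knock out ~20% of entries
X_missing = X_true.copy()
X_missing[mask] = np.nan

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=10,
    random_state=0,
)
X_imputed = imputer.fit_transform(X_missing)
print("mean abs. imputation error:", np.abs(X_imputed[mask] - X_true[mask]).mean())
```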

A Study on Classification Models for Predicting Bankruptcy Based on XAI (XAI 기반 기업부도예측 분류모델 연구)

  • Jihong Kim;Nammee Moon
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.333-340 / 2023
  • Efficient prediction of corporate bankruptcy helps financial institutions make sound lending decisions and reduce loan default rates, and many studies have applied artificial intelligence classification models to the problem. In the financial industry, however, even an excellent predictive model must be accompanied by an intuitive explanation of how its results were determined. The US, the EU, and South Korea have all recently introduced a right to request explanations of algorithmic decisions, so transparency in the use of AI in the financial sector must be secured. In this paper, an interpretable AI-based classification model was proposed using publicly available corporate bankruptcy data. After data preprocessing and 5-fold cross-validation, classification performance was compared across 10 optimized supervised learning classifiers, including logistic regression, SVM, XGBoost, and LightGBM. LightGBM was confirmed as the best-performing model, and SHAP, an explainable AI technique, was applied to provide a post-hoc explanation of the bankruptcy prediction process.
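
The model-comparison and explanation steps can be sketched as follows, assuming `lightgbm`, `shap`, and scikit-learn; the financial-ratio features are synthetic placeholders, and only two of the ten candidate classifiers are shown:

```python
# 5-fold cross-validation over candidate classifiers, then post-hoc SHAP
# explanation of the best tree model.
import lightgbm as lgb
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 5))   # stand-ins for financial ratios
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lightgbm": lgb.LGBMClassifier(n_estimators=200),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")

# Post-hoc explanation of the gradient-boosted model with SHAP.
best = models["lightgbm"].fit(X, y)
shap_values = shap.TreeExplainer(best).shap_values(X)
```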

A Machine Learning-based Popularity Prediction Model for YouTube Mukbang Content (머신러닝 기반의 유튜브 먹방 콘텐츠 인기 예측 모델)

  • Beomgeun Seo;Hanjun Lee
    • Journal of Internet Computing and Services / v.24 no.6 / pp.49-55 / 2023
  • In this study, models for predicting the popularity of mukbang content on YouTube were proposed, and the factors influencing popularity were identified through post-hoc analysis. Information on 22,223 pieces of content was collected from the top mukbang channels by subscriber count using APIs and Pretty Scale. Machine learning algorithms such as Random Forest, XGBoost, and LightGBM were used to build models predicting views and likes. SHAP analysis showed that subscriber count had the greatest impact in the view prediction model, while the creator's attractiveness emerged as the most important variable in the likes prediction model, confirming that the precursor factors for views and for likes differ. The study has academic significance in empirically analyzing a large amount of online content, and practical significance in informing mukbang creators about viewer consumption trends and guiding the production of high-quality, marketable content.
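
The finding that views and likes have different drivers can be sketched by fitting one model per target and comparing SHAP importances; the features and data below are illustrative, not the collected YouTube corpus:

```python
# Train separate regressors for views and likes, then compare mean |SHAP|
# per feature across the two targets. Data are synthetic placeholders.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 3))
views = 3.0 * X[:, 0] + rng.normal(size=2000)   # driven by the subscriber stand-in
likes = 2.0 * X[:, 1] + rng.normal(size=2000)   # driven by the attractiveness stand-in

names = ["subscribers", "attractiveness", "video_length"]
for target, y in [("views", views), ("likes", likes)]:
    model = xgb.XGBRegressor(n_estimators=200).fit(X, y)
    sv = shap.TreeExplainer(model).shap_values(X)
    importance = np.abs(sv).mean(axis=0)
    print(target, dict(zip(names, importance.round(2))))
```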

Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education / v.62 no.3 / pp.435-455 / 2023
  • This study focused on developing an Explainable Artificial Intelligence (XAI) model to identify and analyze papers with significant impact in the field of mathematics education. Meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. A Random Forest model was employed to evaluate the impact of individual papers within this network, and SHAP, an XAI technique, was used to analyze the reasons behind the model's assessment of impactful papers. The key features identified for determining impactful papers included 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal', showing that papers, authors, and journals all play significant roles in how an individual paper is evaluated. Comparing domestic and international mathematics education research revealed variations in these patterns; notably, 'co-authorship network PageRank' carried more weight in domestic research. The proposed XAI model serves as a tool for determining the impact of papers using AI and gives researchers strategic direction when writing: expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements enhancing a paper's impact. Researchers can thereby understand how their work is perceived and evaluated in academia and identify the key factors influencing those evaluations. This study offers a novel approach to evaluating the impact of mathematics education papers with an explainable AI model, a process that has traditionally consumed significant time and resources; the approach presents a paradigm applicable to evaluation in academic fields beyond mathematics education and is expected to substantially enhance the efficiency and effectiveness of research activities.
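
One of the network features named above, paper-network PageRank, can be sketched with `networkx` on a toy citation graph (the edges are invented, not the study's 29-journal corpus):

```python
# PageRank over a small directed citation network: an edge u -> v means
# "paper u cites paper v", so heavily cited papers collect rank.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("p1", "p2"), ("p1", "p3"), ("p2", "p3"),
    ("p4", "p3"), ("p5", "p3"), ("p5", "p1"),
])

pagerank = nx.pagerank(G, alpha=0.85)
for paper, score in sorted(pagerank.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")   # p3, the most-cited node, ranks highest
```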

Prediction of Disk Cutter Wear Considering Ground Conditions and TBM Operation Parameters (지반 조건과 TBM 운영 파라미터를 고려한 디스크 커터 마모 예측)

  • Yunseong Kang;Tae Young Ko
    • Tunnel and Underground Space / v.34 no.2 / pp.143-153 / 2024
  • The Tunnel Boring Machine (TBM) method is a tunnel excavation method that produces lower levels of noise and vibration than drilling and blasting and offers higher stability, and it is increasingly applied to tunnel projects worldwide. The disc cutter, an excavation tool mounted on the TBM cutterhead, constantly interacts with the ground at the tunnel face and inevitably wears. This study quantitatively predicted disc cutter wear using geological conditions, TBM operational parameters, and machine learning algorithms. Among the input variables, Uniaxial Compressive Strength (UCS) data are considerably sparser than machine and wear data, so UCS was first estimated over the entire section from TBM machine data, and the Coefficient of Wearing rate (CW) was then predicted using the completed dataset. Among the CW prediction models compared, XGBoost performed best, and SHapley Additive exPlanations (SHAP) analysis was conducted to interpret the complex prediction model.
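
The two-stage setup (estimate UCS first, then predict CW from the completed data) can be sketched as below; the machine features, measured fraction, and coefficients are synthetic assumptions, not the project's TBM records:

```python
# Stage 1: estimate UCS from TBM machine data where it was not measured.
# Stage 2: predict the wear-rate coefficient (CW) from the completed features.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(6)
machine = rng.normal(size=(800, 3))   # stand-ins for thrust, torque, RPM
ucs = 2.0 * machine[:, 0] + rng.normal(scale=0.3, size=800)
cw = 0.5 * ucs + machine[:, 1] + rng.normal(scale=0.2, size=800)

# UCS is only measured on part of the drive; train the UCS model there.
measured = rng.uniform(size=800) < 0.3
ucs_model = XGBRegressor(n_estimators=200).fit(machine[measured], ucs[measured])
ucs_full = np.where(measured, ucs, ucs_model.predict(machine))

# CW model on machine data plus the completed UCS column.
X = np.column_stack([machine, ucs_full])
cw_model = XGBRegressor(n_estimators=200).fit(X, cw)
print(cw_model.predict(X[:3]))
```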

Application of XAI Models to Determine Employment Factors in the Software Field : with focus on University and Vocational College Graduates (소프트웨어 분야 취업 결정 요인에 대한 XAI 모델 적용 연구 : 일반대학교와 전문대학 졸업자를 중심으로)

  • Kwon Joonhee;Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management / v.20 no.1 / pp.31-45 / 2024
  • The purpose of this study is to explain the factors that determine employment in the software field. To this end, the Graduates Occupational Mobility Survey by the Korea Employment Information Service is used. This paper proposes machine learning models of employment in the software field and then explains their employment factors using explainable artificial intelligence. The models cover both university graduates and vocational college graduates, and both a black-box model and a glass-box model are explained and interpreted: SHAP is used for the black-box model and EBM explanations for the glass-box model. The results show that the factors with a positive impact on employment are one's major, vocational education and training, the semester in which employment preparation begins, and internship experience. This study provides a job preparation guide for university and vocational college students who want to work in the software field.
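
The glass-box side can be sketched with an Explainable Boosting Machine from the `interpret` package; the features below are hypothetical stand-ins for the survey variables, and the exact explanation accessor may vary by package version:

```python
# Fit an EBM (glass-box model) and read its global term importances.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))   # stand-ins for major match, training hours, internships
y = (X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

ebm = ExplainableBoostingClassifier(feature_names=["major", "training", "intern"])
ebm.fit(X, y)

# Global explanation: per-term importances, directly readable.
explanation = ebm.explain_global()
print(explanation.data()["names"])
print(explanation.data()["scores"])
```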

A Study on Efficient AI Model Drift Detection Methods for MLOps (MLOps를 위한 효율적인 AI 모델 드리프트 탐지방안 연구)

  • Ye-eun Lee;Tae-jin Lee
    • Journal of Internet Computing and Services / v.24 no.5 / pp.17-27 / 2023
  • As AI (Artificial Intelligence) technology develops and its practicality increases, it is widely used across real-world application fields. An AI model is trained on the statistical properties of its training data and then deployed to a system, but unexpected changes in rapidly shifting data degrade the model's performance. In the security field in particular, detecting drift signals in deployed models is essential for responding to constantly emerging and unknown attacks, so the need for lifecycle management of the whole model is growing. Drift can in principle be detected through changes in the model's accuracy or error rate (loss), but this requires ground-truth labels for the model's predictions, which are often unavailable in the deployment environment, and it leaves the actual drift point uncertain: because the error rate is strongly influenced by external environmental factors, model selection and parameter settings, and new input data, the moment when drift actually occurs cannot be pinpointed from that value alone. Therefore, this paper proposes a method for detecting when actual drift occurs through an anomaly analysis technique based on XAI (eXplainable Artificial Intelligence). In tests on a classification model that detects DGA (Domain Generation Algorithm) domains, anomaly scores were extracted from the SHAP (SHapley Additive exPlanations) values of post-deployment data, and the results confirmed that efficient detection of the drift point is possible.
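
One way to realize the SHAP-based drift signal described above is to compare per-feature SHAP magnitudes between a reference window and post-deployment data; the shift, threshold, and features below are illustrative assumptions, not the paper's DGA setup:

```python
# Drift sketch: compare mean |SHAP| per feature between training-time data
# and production data; a large change is flagged as a drift signal.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(8)
X_ref = rng.normal(size=(1000, 4))                # reference (training-time) window
y_ref = (X_ref[:, 0] + X_ref[:, 1] > 0).astype(int)
X_prod = X_ref + np.array([0.0, 2.0, 0.0, 0.0])   # feature 1 drifts in production

model = xgb.XGBClassifier(n_estimators=100).fit(X_ref, y_ref)
explainer = shap.TreeExplainer(model)

ref_imp = np.abs(explainer.shap_values(X_ref)).mean(axis=0)
prod_imp = np.abs(explainer.shap_values(X_prod)).mean(axis=0)

drift_score = np.abs(prod_imp - ref_imp)          # per-feature anomaly score
print("drift suspected on features:", np.where(drift_score > 0.1)[0])
```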