• Title/Summary/Keyword: Predicting Default


Bayesian Inference for Predicting the Default Rate Using the Power Prior

  • Kim, Seong-W.;Son, Young-Sook;Choi, Sang-A
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.3
    • /
    • pp.685-699
    • /
    • 2006
  • Commercial banks and related institutions have developed internal models to better quantify their financial risks. Since an appropriate credit risk model plays a central role in risk management at financial institutions, a more accurate model for forecasting credit losses is needed, along with statistical inference on that model. In this paper, we propose a new method for estimating a default rate: a Bayesian approach using the power prior, which incorporates historical data into the estimate. Inference on current data can be made more reliable when similar data from previous studies exist; Ibrahim and Chen (2000) use such historical data to characterize the power prior, which weights the historical likelihood when estimating the model parameters. We demonstrate the methodology on a real SOHO data set and also perform a simulation study (the power-prior update is sketched below).
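
A minimal sketch of the power-prior update, assuming a binomial default count with a beta initial prior; all numbers are hypothetical, not from the paper:

```python
from scipy import stats

# Historical data D0: n0 loans with y0 defaults; current data D: n loans with y defaults.
n0, y0 = 500, 35         # hypothetical historical sample
n, y = 200, 18           # hypothetical current sample
a0 = 0.5                 # power-prior weight in [0, 1]; discounts the historical likelihood
alpha, beta = 1.0, 1.0   # initial Beta(1, 1) prior on the default rate p

# Power prior (Ibrahim and Chen, 2000): pi(p | D0, a0) is proportional to
# L(p | D0)^a0 * pi0(p). With a binomial likelihood and a beta initial prior
# the model stays conjugate, so the posterior is available in closed form.
posterior = stats.beta(alpha + a0 * y0 + y, beta + a0 * (n0 - y0) + (n - y))

print(f"posterior mean default rate: {posterior.mean():.4f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```

Setting a0 = 0 ignores the historical data entirely, while a0 = 1 pools it fully with the current sample.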

Semi-Supervised Learning to Predict Default Risk for P2P Lending (준지도학습 기반의 P2P 대출 부도 위험 예측에 대한 연구)

  • Kim, Hyun-jung
    • Journal of Digital Convergence
    • /
    • v.20 no.4
    • /
    • pp.185-192
    • /
    • 2022
  • This study investigates the effect of the semi-supervised learning (SSL) method on predicting the default risk of peer-to-peer (P2P) loans. Despite its proven performance, the supervised learning (SL) method requires labeled data, which can take considerable effort and resources to collect. With the rapid growth of P2P platforms, the number of loans issued annually with no clear final resolution keeps increasing, producing an abundance of unlabeled data. An SSL model is therefore needed to predict default risk using information not only from labeled loans (fully paid or defaulted) but also from unlabeled ones. The P2P loan data used in this study were collected from the LendingClub platform. The results showed that, despite using only a small number of labeled observations, the SSL method achieved much better default risk prediction performance than the SL method trained on a much larger labeled set (a self-training sketch follows below).
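
A minimal self-training sketch of the SSL setup, using scikit-learn's SelfTrainingClassifier on synthetic data standing in for the LendingClub features; the paper does not specify this particular implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in for loan features; 1 = defaulted, 0 = fully paid.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8], random_state=0)

# Pretend most loans are still unresolved: mask 90% of the labels with -1,
# the convention scikit-learn uses for unlabeled samples.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.9] = -1

# Self-training: fit on labeled loans, pseudo-label confident unlabeled ones, refit.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X, y_partial)
print(f"pseudo-labeled {np.sum(model.labeled_iter_ > 0)} previously unlabeled loans")
```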

TeGCN: Transformer-embedded Graph Neural Network for Thin-filer default prediction (TeGCN: 씬파일러 신용평가를 위한 트랜스포머 임베딩 기반 그래프 신경망 구조 개발)

  • Seongsu Kim;Junho Bae;Juhyeon Lee;Heejoo Jung;Hee-Woong Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.419-437
    • /
    • 2023
  • As the number of thin filers in Korea surpasses 12 million, there is growing interest in assessing their credit default risk more accurately to generate additional revenue. In particular, researchers are actively developing default prediction models based on machine learning and deep learning algorithms, in contrast to traditional statistical methods, which struggle to capture nonlinearity. Among these efforts, the Graph Neural Network (GNN) architecture is noteworthy for predicting default when data on thin filers are limited, because it can incorporate network information between borrowers alongside conventional credit-related data. However, prior research employing graph neural networks has had difficulty handling the diverse categorical variables present in credit information. In this study, we introduce the Transformer-embedded Graph Convolutional Network (TeGCN), which addresses these limitations and enables effective default prediction for thin filers. TeGCN combines the TabTransformer, which extracts contextual information from categorical variables, with a Graph Convolutional Network, which captures network information between borrowers (a toy rendition appears below). TeGCN surpasses the baseline model's performance on both the general borrower dataset and the thin filer dataset, with especially strong results for thin filer default prediction. By tailoring the model structure to credit information containing numerous categorical variables, particularly for thin filers with limited data, this study achieves high default prediction accuracy and can contribute to resolving the financial exclusion thin filers face while facilitating additional revenue within the financial industry.
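
A toy PyTorch rendition of the TeGCN idea; the layer sizes, mean pooling, and single graph-convolution step are illustrative guesses, not the authors' architecture:

```python
import torch
import torch.nn as nn

class TeGCNSketch(nn.Module):
    """A transformer encoder embeds each borrower's categorical credit
    attributes (as in TabTransformer), then one graph-convolution step
    mixes information between connected borrowers."""

    def __init__(self, cardinalities, d_model=32):
        super().__init__()
        # One embedding table per categorical column.
        self.embeds = nn.ModuleList(nn.Embedding(c, d_model) for c in cardinalities)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.gcn = nn.Linear(d_model, d_model)
        self.head = nn.Linear(d_model, 1)  # default / no-default logit

    def forward(self, x_cat, adj_norm):
        # x_cat: (n_borrowers, n_categorical) integer codes
        # adj_norm: (n_borrowers, n_borrowers) normalized adjacency matrix
        tokens = torch.stack([e(x_cat[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        h = self.encoder(tokens).mean(dim=1)      # contextual embedding per borrower
        h = torch.relu(adj_norm @ self.gcn(h))    # propagate over the borrower network
        return self.head(h).squeeze(-1)

# Smoke test with random categorical codes and a trivial (identity) graph.
model = TeGCNSketch(cardinalities=[5, 7, 3])
logits = model(torch.randint(0, 3, (10, 3)), torch.eye(10))
print(logits.shape)  # torch.Size([10])
```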

Predicting Default Risk among Young Adults with Random Forest Algorithm (랜덤포레스트 모델을 활용한 청년층 차입자의 채무 불이행 위험 연구)

  • Lee, Jonghee
    • Journal of Family Resource Management and Policy Review
    • /
    • v.26 no.3
    • /
    • pp.19-34
    • /
    • 2022
  • There are growing concerns about debt insolvency among young and low-income households. The deterioration in household debt quality among young people stems from a combination of sluggish employment, a rising student loan burden, and an increase in high-interest borrowing from the secondary financial sector. The purpose of this study was to explore the likelihood of household debt default among young borrowers in Korea and to identify the factors that predict it. The study used the 2021 Household Finance and Welfare Survey and a random forest algorithm to comprehensively analyze factors related to default risk among young adults, presenting the importance index and partial dependence charts of the major determinants (a pipeline sketch follows below). The debt-to-asset ratio (DTA), medical costs, the household default risk index (HDRI), communication costs, and housing costs were the focal independent variables.
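
A minimal scikit-learn sketch of this kind of analysis, with synthetic data and hypothetical feature names standing in for the survey variables:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay
from sklearn.model_selection import train_test_split

# Synthetic stand-in for survey features such as DTA, medical costs, or HDRI.
X, y = make_classification(n_samples=2000, n_features=8, random_state=42)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)
print(f"test accuracy: {rf.score(X_test, y_test):.3f}")

# Variable importance, analogous to the paper's importance index.
importance = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance.head())

# Partial dependence of predicted default risk on the two most important features.
PartialDependenceDisplay.from_estimator(rf, X_test, importance.index[:2].tolist())
```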

Optimization of SWAN Wave Model to Improve the Accuracy of Winter Storm Wave Prediction in the East Sea

  • Son, Bongkyo;Do, Kideok
    • Journal of Ocean Engineering and Technology
    • /
    • v.35 no.4
    • /
    • pp.273-286
    • /
    • 2021
  • In recent years, as human casualties and property damage caused by hazardous waves have increased in the East Sea, precise wave prediction skill has become necessary. In this study, the Simulating WAves Nearshore (SWAN) third-generation numerical wave model was calibrated and optimized to improve the accuracy of winter storm wave prediction in the East Sea. We used the Source Term 6 (ST6) physics package, based on observations from a large-scale experiment conducted in Australia, and compared its results to those of Komen's formulation, the default in SWAN. As input wind data, we used the Korea Meteorological Administration's (KMA's) operational Regional Data Assimilation and Prediction System (RDAPS), the European Centre for Medium-Range Weather Forecasts' fifth-generation reanalysis (ERA5), and the Japan Meteorological Agency's (JMA's) meso-scale forecast data. Observed wave data at six locations from KMA and the Korea Hydrographic and Oceanographic Agency (KHOA) were used for quantitative assessment, and statistical analysis was conducted to evaluate model accuracy (a skill-metric sketch follows below). The ST6 runs had a smaller root mean square error and a higher correlation coefficient than the default model for significant wave height, whereas the peak wave period results were inconsistent across models and locations. Among the wind forcings, the ERA5 simulation was the most accurate overall but underestimated wave heights during high wave events compared with the RDAPS and JMA meso-scale runs, indicating that the spatial resolution of the wind plays a significant role in predicting such events. Nevertheless, the optimized model still showed limitations in predicting waves that rise rapidly during meteorological events, suggesting that further research is needed to improve wave prediction across varied climate conditions, including extreme weather.
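
A minimal sketch of the skill metrics used in such model-observation comparisons; the wave height series here are hypothetical, not actual model output or buoy records:

```python
import numpy as np

def wave_model_skill(hs_model, hs_obs):
    """RMSE, bias, and correlation of modeled vs. observed significant wave
    height; in practice the inputs would be SWAN output and buoy records."""
    hs_model, hs_obs = np.asarray(hs_model), np.asarray(hs_obs)
    rmse = np.sqrt(np.mean((hs_model - hs_obs) ** 2))
    bias = np.mean(hs_model - hs_obs)
    corr = np.corrcoef(hs_model, hs_obs)[0, 1]
    return {"rmse": rmse, "bias": bias, "corr": corr}

# Hypothetical significant wave height series (m) at one buoy.
obs = [1.2, 2.5, 3.8, 2.1, 1.0]
st6 = [1.3, 2.4, 3.6, 2.2, 1.1]
komen = [1.0, 2.0, 3.1, 1.8, 0.8]
print("ST6:  ", wave_model_skill(st6, obs))
print("Komen:", wave_model_skill(komen, obs))
```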

Artificial Intelligence Techniques for Predicting Online Peer-to-Peer(P2P) Loan Default (인공지능기법을 이용한 온라인 P2P 대출거래의 채무불이행 예측에 관한 실증연구)

  • Bae, Jae Kwon;Lee, Seung Yeon;Seo, Hee Jin
    • The Journal of Society for e-Business Studies
    • /
    • v.23 no.3
    • /
    • pp.207-224
    • /
    • 2018
  • In this article, an empirical study was conducted using a public dataset from Lending Club Corporation, the largest online peer-to-peer (P2P) lending platform in the world. We explored predictor variables related to P2P loan default and found that housing situation, length of employment, average current balance, debt-to-income ratio, loan amount, loan purpose, interest rate, public records, number of finance trades, total credit/credit limit, number of delinquent accounts, number of mortgage accounts, and number of bank card accounts are significant factors on the Lending Club platform. We developed P2P lending default prediction models using discriminant analysis, logistic regression, neural networks, and decision trees (CART and C5.0), trained on borrower loan data and credit data to verify their feasibility and effectiveness. Empirical results indicated that neural networks consistently outperformed the other classifiers, namely discriminant analysis, logistic regression, CART, and C5.0, in P2P loan default prediction (a comparison sketch follows below).
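
A minimal comparison sketch with scikit-learn on synthetic data; CART stands in for both tree variants, since C5.0 has no scikit-learn implementation:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for borrower and credit features; defaults are the minority class.
X, y = make_classification(n_samples=10000, n_features=13, weights=[0.85], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1),
    "decision tree (CART)": DecisionTreeClassifier(random_state=1),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name:22s} accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```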

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data comprise 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance caused by the scarcity of default events, a noted limitation of the existing methodology, and captures the differences in default risk that exist even among ordinary companies. Because training used only corporate information that is also available for unlisted companies, default risk can be derived appropriately for unlisted companies without stock price information, enabling stable default risk assessment for firms whose risk is hard to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although machine learning has recently been applied actively to corporate default prediction, most studies make predictions with a single model and therefore face model-bias issues. Given how widely default risk information is used in the market and how sensitive users are to differences in it, a stable and reliable valuation methodology and strict calculation standards are required: the credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods, including verification of their adequacy, that consider past statistical data, experience with credit ratings, and changes in future market conditions. This study reduces individual-model bias by using stacking ensemble techniques that synthesize various machine learning models, capturing complex nonlinear relationships between default risk and corporate information while retaining the computational efficiency of machine-learning-based prediction (see the stacking sketch below). To generate the sub-model forecasts used as input to the stacking ensemble, the training data were divided into seven pieces and the sub-models were trained on the divided sets. For comparison, Random Forest, MLP, and CNN models were trained on the full training data and their predictive power was verified on the test set. The analysis showed that the stacking ensemble exceeded the predictive power of the Random Forest model, the best-performing single model. To check for statistically significant differences, pairs were formed between the stacking ensemble's forecasts and each individual model's; because Shapiro-Wilk tests showed that no pair followed normality, the nonparametric Wilcoxon rank-sum test was used, and the stacking ensemble's forecasts differed significantly from those of the MLP and CNN models. The approach also offers existing credit rating agencies a way to adopt machine-learning-based default risk prediction, since traditional credit rating models can be included as sub-models in calculating the final default probability, and the combination of diverse sub-models can help meet the requirements of the Financial Investment Business Regulations. We hope this research helps overcome the limitations of existing machine-learning-based models and increases their practical use.
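
A minimal stacking sketch with scikit-learn, treating the Merton-style default risk as a continuous regression target; the sub-models, sizes, and data are illustrative, and the paper's CNN sub-model is omitted for brevity (cv=7 mirrors the seven-way split of the training data):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 160 financial statement and ratio columns;
# the target plays the role of the Merton-based default risk measure.
X, y = make_regression(n_samples=2000, n_features=30, noise=10.0, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# Out-of-fold sub-model forecasts feed a meta-learner, reducing single-model bias.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=7)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000,
                                           random_state=7))),
    ],
    final_estimator=Ridge(),
    cv=7,
)
stack.fit(X_tr, y_tr)
print(f"stacking test MSE: {mean_squared_error(y_te, stack.predict(X_te)):.2f}")
```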

Default Voting using User Coefficient of Variance in Collaborative Filtering System (협력적 여과 시스템에서 사용자 변동 계수를 이용한 기본 평가간 예측)

  • Ko, Su-Jeong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1111-1120
    • /
    • 2005
  • In collaborative filtering systems most users do not rate preferences, so the user-item matrix is very sparse: it has missing values for every item a user has not rated. The systems generally predict an active user's preferences from the preferences of a group of users, but default voting methods first fill in all missing values in the user-item matrix. A common approach predicts default voting values using either the average rating of a user or the average rating of an item; however, it does not consider the characteristics of items and users or the distribution of the data set. We replace the missing values in the user-item matrix with a default voting method based on the user coefficient of variance: a threshold on the coefficient is selected automatically using equations, and it determines when to shift between user averages and item averages (a sketch follows below). Because there is no fixed relation between the averages and the thresholds of the user coefficient of variance across data sets, the distribution of user coefficients of variance in a data set affects the threshold as well as its average, so we set the threshold by combining the two. We evaluate the method on the MovieLens data set of user movie ratings and show that it outperforms previous default voting methods.
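
A minimal sketch of default voting with a user coefficient-of-variance switch; here the threshold is passed in directly rather than derived from the data distribution as in the paper:

```python
import numpy as np

def fill_default_votes(ratings, cv_threshold):
    """Fill missing entries (np.nan) of a user-item rating matrix. A user
    whose coefficient of variance exceeds the threshold rates erratically,
    so that user's missing votes default to the item averages; otherwise
    the user's own average is used."""
    filled = ratings.copy()
    user_mean = np.nanmean(ratings, axis=1)
    user_cv = np.nanstd(ratings, axis=1) / user_mean
    item_mean = np.nanmean(ratings, axis=0)
    for u in range(ratings.shape[0]):
        if user_cv[u] > cv_threshold:
            default = item_mean
        else:
            default = np.full(ratings.shape[1], user_mean[u])
        missing = np.isnan(filled[u])
        filled[u, missing] = default[missing]
    return filled

R = np.array([[5.0, np.nan, 1.0],
              [4.0, 4.0, np.nan],
              [np.nan, 3.0, 3.0]])
print(fill_default_votes(R, cv_threshold=0.5))
```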

Predicting Default of Construction Companies Using Bayesian Probabilistic Approach (베이지안 확률적 접근법을 이용한 건설업체 부도 예측에 관한 연구)

  • Hong, Sungmoon;Hwang, Jaeyeon;Kwon, Taewhan;Kim, Juhyung;Kim, Jaejun
    • Korean Journal of Construction Engineering and Management
    • /
    • v.17 no.5
    • /
    • pp.13-21
    • /
    • 2016
  • Insolvency of construction companies that act as main contractors can inflict losses on clients through unfulfilled construction contracts, and it can undermine the financial soundness of construction companies and their suppliers. The construction industry has the cash-flow characteristic of winning a project and receiving payment as construction progresses, so insolvency mid-project leads to financial losses, which is why predicting the insolvency of construction companies is so important. Insolvency of Korean construction companies is often predicted with the KMV model, developed by the KMV (Kealhofer, McQuown and Vasicek) Company in the U.S. in the early 1990s, but this model is insufficient for construction companies because it was built around the credit risk assessment of general companies and banks; moreover, the predictive performance of the KMV insolvency probability is continually questioned because of the small number of analyzed companies and limited data. To resolve these issues, this study combines a Bayesian probabilistic approach with the existing insolvency prediction model: if the prior probability can be specified appropriately, a reliable posterior probability can be obtained by conditioning on the evidence despite the lack of data (a minimal updating sketch follows below). The study therefore measures the Expected Default Frequency (EDF) using the Bayesian probabilistic approach on top of the existing model and assesses its accuracy by comparing the result with the EDF of the existing model.
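
A minimal beta-binomial sketch of the Bayesian updating step; the prior construction and all numbers are hypothetical, not the paper's calibration:

```python
from scipy import stats

# Prior: an EDF-style estimate from the existing structural model serves as the
# prior mean for a construction firm's annual default probability.
prior_edf = 0.04        # hypothetical prior expected default frequency
prior_strength = 50     # pseudo-observations encoding confidence in the prior
alpha0 = prior_edf * prior_strength
beta0 = (1 - prior_edf) * prior_strength

# Evidence: hypothetical defaults observed among comparable contractors.
n_firms, n_defaults = 120, 9

# Posterior: conditioning the beta prior on the binomial evidence.
posterior = stats.beta(alpha0 + n_defaults, beta0 + n_firms - n_defaults)
print(f"posterior EDF: {posterior.mean():.4f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```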

Chronological Changes of Soil Organic Carbon from 2003 to 2010 in Korea

  • Kim, Yoo Hak;Kang, Seong Soo;Kong, Myung Suk;Kim, Myung Sook;Sonn, Yeon Kyu;Chae, Mi Jin;Lee, Chang Hoon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.47 no.3
    • /
    • pp.205-212
    • /
    • 2014
  • Chronological changes in soil organic carbon (SOC) must be compiled under the IPCC guidelines for national greenhouse gas inventories. The IPCC suggests default reference SOC stocks for mineral soils and relative stock change factors for different management activities where country-specific factors are not available. In this study, 3.4 million records were downloaded from the agricultural soil information system and analyzed to derive chronological SOC changes for selected counties and land uses in Korea (the Tier 1 stock calculation is sketched below). The SOC content of orchard soils was higher than that of the other soils, but the chronological SOC changes showed no consistent trend for any land use, with high standard deviations. The SOC contents of counties depended on their own management activities, and the chronological SOC changes of districts likewise showed no consistent trend. Korea should therefore survey official records and relative stock change factors for management activities such as land use, tillage, and organic matter input in order to calculate SOC stocks correctly; otherwise, Korea should establish a model for predicting SOC by analyzing selected representative fields and calculating SOC differences by comparing the management activities of other lands with those of the representative fields.
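
A minimal sketch of the IPCC Tier 1 mineral-soil stock calculation the abstract refers to, with hypothetical factor values:

```python
def soc_stock(soc_ref, f_lu, f_mg, f_i, area_ha):
    """IPCC Tier 1 SOC stock: the default reference stock (t C per ha for a
    climate/soil stratum) scaled by relative stock change factors for land
    use (F_LU), management (F_MG), and organic matter input (F_I), times the
    stratum area."""
    return soc_ref * f_lu * f_mg * f_i * area_ha

# Hypothetical stratum: illustrative reference stock and factor values.
print(f"{soc_stock(soc_ref=80.0, f_lu=1.1, f_mg=1.0, f_i=1.1, area_ha=1000):.0f} t C")
```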