Title/Summary/Keyword: default

A Systematic Analysis on Default Risk Based on Delinquency Probability

  • Kim, Gyoung Sun;Shin, Seung Woo
    • Korea Real Estate Review
    • /
    • v.28 no.3
    • /
    • pp.21-35
    • /
    • 2018
  • The recent performance of residential mortgages demonstrated how default risk operates separately from prepayment risk. In this study, we investigated the determinants of borrowers' decisions to terminate early through default, using the mortgage performance data released by Freddie Mac on securitized mortgage loans from January 2011 to September 2013. We estimated a Cox-type proportional hazard model with a single risk on the fundamental factors associated with the default option for individual mortgages. We proposed a mortgage default model with two specifications of delinquency: one using a binary delinquency variable and the other a delinquency probability. We compared the goodness-of-fit of the two specifications in the spirit of Vuong (1989), in both the overlapping and the nested model cases, and found that the model with our proposed delinquency probability variable showed a statistically significant advantage over a benchmark model with delinquency dummy variables. We also performed a default prediction power test based on the method of Shumway (2001) and found much stronger performance from the proposed model.
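
As a rough illustration of the single-risk proportional hazard setup this abstract describes, the sketch below fits a Cox model with a delinquency-probability covariate using the lifelines library. The synthetic data, column names, and coefficients are assumptions for illustration, not the paper's actual Freddie Mac fields.

```python
# Hedged sketch: Cox proportional hazard model for mortgage default.
# Data and column names are invented, not the paper's Freddie Mac fields.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ltv": rng.uniform(0.5, 1.1, n),       # loan-to-value at origination
    "fico": rng.normal(720, 50, n),        # borrower credit score
    "delinq_prob": rng.beta(2, 8, n),      # estimated delinquency probability
})
# Synthetic survival times: higher LTV / delinquency probability -> earlier exit
hazard = np.exp(2.0 * df["ltv"] + 3.0 * df["delinq_prob"] - 0.005 * df["fico"])
df["duration"] = rng.exponential(1.0 / hazard)            # months until termination
df["default"] = (rng.uniform(size=n) < 0.4).astype(int)   # 1 = default, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="default")
cph.print_summary()   # hazard ratios for each covariate
```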

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indexes. Unlike most prior studies, which used the default event itself as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solved the data-imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and captured the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment for unlisted companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risk with machine learning has been studied actively in recent years, most studies make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for evaluation methods, including verification of their adequacy, to be prepared in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models, capturing the complex nonlinear relationships between default risk and corporate information while preserving the short computation time of machine-learning-based default risk models. To produce the forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences, pairs were constructed between the stacking ensemble model's forecasts and those of each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, we used the nonparametric Wilcoxon rank test to check whether the two forecasts making up each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology that lets existing credit rating agencies apply machine-learning-based default risk prediction, since traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical adoption by overcoming the limitations of existing machine-learning-based models.
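
A minimal sketch of the stacking idea described above, using scikit-learn's StackingRegressor: out-of-fold forecasts from sub-models feed a meta-learner, and paired errors against a single Random Forest are compared nonparametrically. The synthetic features, the choice of sub-models, and the Merton-style continuous target are illustrative assumptions.

```python
# Hedged sketch of a stacking ensemble for a continuous default-risk target.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1500, n_features=30, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)),
    ],
    final_estimator=Ridge(),   # meta-learner combining sub-model forecasts
    cv=5,                      # out-of-fold predictions feed the meta-learner
)
stack.fit(X_tr, y_tr)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Paired nonparametric comparison of absolute errors (signed-rank here;
# the paper reports a rank-based test on forecast pairs)
err_stack = np.abs(stack.predict(X_te) - y_te)
err_rf = np.abs(rf.predict(X_te) - y_te)
print(wilcoxon(err_stack, err_rf))
```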

Performance Evaluation and Forecasting Model for Retail Institutions (유통업체의 부실예측모형 개선에 관한 연구)

  • Kim, Jung-Uk
    • Journal of Distribution Science
    • /
    • v.12 no.11
    • /
    • pp.77-83
    • /
    • 2014
  • Purpose - The National Agricultural Cooperative Federation of Korea and the National Fisheries Cooperative Federation of Korea (NFCF) operate both financial and retail businesses. Because cooperatives are public institutions and receive government support, the Financial Supervisory Service in Korea requires their sound management, which is mainly monitored through CAEL, an adaptation of CAMEL. However, the NFCF's finance and retail businesses are evaluated together, and the CAEL model's classification is too coarse to evaluate the retail industry. First, CAEL has weak discrimination power: retail-sector unions can receive high CAEL ratings and nevertheless default, as has often been reported. A default prediction model is therefore needed to supplement CAEL; a model built on subdivided indexes and statistical methods can serve a preventive function by estimating the retail sector's default probability. Second, the finance and retail business sectors must be treated separately, since their businesses have different characteristics. Based on the management indexes systematically maintained by the NFCF, our model predicts retail default and outperforms the CAEL model because it uses a variety of discriminative financial ratios that reflect conditions in the retail industry. Research design, data, and methodology - The retail default prediction model was built using logistic analysis. To develop it, we used the retail financial statements of the NFCF, covering 93 unions per year from 2006 to 2012, to select reliable management indexes. We applied t-tests, logit analysis, AR (accuracy ratio), and AUROC (area under the receiver operating characteristic curve) analysis, then showed through a multivariate logistic model that the model has excellent discrimination power and a high hit ratio for default prediction, and evaluated its usefulness. Results - The AR (AUROC) analysis of the short-term model shows that the logistic model has excellent discrimination power, at 84.6%, and a high hit ratio for failure prediction in the total model, at 94%, indicating that it is temporally stable and useful for evaluating the management status of retail institutions. Conclusions - This model is useful for evaluating the management status of retail union institutions. First, the CAEL evaluation needs to be subdivided; the existing evaluation is underdeveloped and its discrimination power is weak. Second, continuous effort is required to develop varied and rational management indexes that reflect the characteristics of the retail industry. Extensions of this study would require, first, a complementary default model reflecting differences in size and, second, non-financial information for small and medium retailers, that is, a hybrid default model combining financial and non-financial information.
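
A compact sketch of the validation steps named in this entry: fit a logistic default model, then report AUROC and the accuracy ratio via the identity AR = 2·AUC − 1. The synthetic data stand in for the NFCF financial-ratio indexes.

```python
# Hedged sketch: logistic default model evaluated with AUROC and accuracy ratio.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)   # ~10% default events (assumed)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pd_hat = model.predict_proba(X_te)[:, 1]     # estimated default probabilities

auc = roc_auc_score(y_te, pd_hat)
print(f"AUROC = {auc:.3f}, AR = {2 * auc - 1:.3f}")   # AR = 2*AUC - 1
```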

TeGCN: Transformer-embedded Graph Neural Network for Thin-filer default prediction (TeGCN: 씬파일러 신용평가를 위한 트랜스포머 임베딩 기반 그래프 신경망 구조 개발)

  • Seongsu Kim;Junho Bae;Juhyeon Lee;Heejoo Jung;Hee-Woong Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.419-437
    • /
    • 2023
  • As the number of thin filers in Korea surpasses 12 million, there is growing interest in assessing their credit default risk more accurately to generate additional revenue. Researchers are actively developing default prediction models based on machine learning and deep learning algorithms, in contrast to traditional statistical methods, which struggle to capture nonlinearity. Among these efforts, the Graph Neural Network (GNN) architecture is noteworthy for predicting default when data on thin filers are limited, because it can incorporate network information between borrowers alongside conventional credit-related data. However, prior research employing graph neural networks has had difficulty handling the diverse categorical variables present in credit information. In this study, we introduce the Transformer-embedded Graph Convolutional Network (TeGCN), which addresses these limitations and enables effective default prediction for thin filers. TeGCN combines the TabTransformer, which extracts contextual information from categorical variables, with a Graph Convolutional Network, which captures network information between borrowers. TeGCN surpasses the baseline model's performance on both the general borrower dataset and the thin-filer dataset; in particular, it delivers outstanding results in thin-filer default prediction. The model achieves high default prediction accuracy through a structure tailored to credit information containing numerous categorical variables, especially in the context of thin filers with limited data. Our study can contribute to resolving the financial exclusion faced by thin filers and to generating additional revenue within the financial industry.
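
A toy sketch of the TeGCN structure as the abstract outlines it: categorical credit features are embedded and contextualized by a transformer encoder (TabTransformer-style), then propagated over a borrower graph by one graph-convolution step. All dimensions, the identity-matrix placeholder graph, and layer sizes are assumptions, not the authors' architecture.

```python
# Hedged sketch of a transformer-embedded GCN for default prediction.
import torch
import torch.nn as nn

n_borrowers, n_cat, card, d = 8, 5, 10, 16   # nodes, categorical cols, cardinality, embed dim

class TeGCNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_cat * card, d)           # one table, offset per column
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.gcn_weight = nn.Linear(n_cat * d, 32)           # GCN layer weight
        self.out = nn.Linear(32, 1)

    def forward(self, x_cat, adj):
        # x_cat: (N, n_cat) integer codes; adj: (N, N) normalized adjacency
        offsets = torch.arange(n_cat) * card
        h = self.embed(x_cat + offsets)                      # (N, n_cat, d)
        h = self.transformer(h).flatten(1)                   # contextual column embeddings
        h = torch.relu(adj @ self.gcn_weight(h))             # propagate over borrower graph
        return torch.sigmoid(self.out(h)).squeeze(-1)        # default probability per node

x = torch.randint(0, card, (n_borrowers, n_cat))
a = torch.eye(n_borrowers)                                   # placeholder self-loop graph
print(TeGCNSketch()(x, a))
```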

Optimum Reserves in Vietnam Based on the Approach of Cost-Benefit for Holding Reserves and Sovereign Risk

  • TRAN, Thinh Vuong;LE, Thao Phan Thi Dieu
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.7 no.3
    • /
    • pp.157-165
    • /
    • 2020
  • This paper estimates the optimum level of reserves in Vietnam based on the cost-benefit approach to holding reserves and on sovereign risk, a characteristic of developing countries. The cost of reserves is the opportunity cost of holding them; the benefit is the avoided loss from the country's default when no reserves are available to finance external debt payments. The optimum level of reserves is found by minimizing the sum of the opportunity cost and the default loss weighted by the probability of default. Using the HP filter method to calculate the loss due to default, ARDL regression for the risk premium model, and the VND lending rate as a proxy for the opportunity cost, together with Vietnamese economic data for 2005-2017, the empirical results show that optimum reserves in Vietnam exceeded actual reserves for most of the research period, except at Q3/2008 and at its final point, Q4/2017. Vietnam should therefore continue to increase reserves for safety, though it need not do so rapidly. In addition, monitoring optimum reserves is necessary to keep actual reserves at a reasonable level.
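
The cost-benefit trade-off the abstract describes can be sketched as minimizing total cost C(R) = r·R + p(R)·L, where r·R is the opportunity cost of holding reserves R and p(R)·L is the expected default loss. The functional form of p(R) and all numbers below are illustrative assumptions, not the paper's estimates.

```python
# Hedged sketch of the optimum-reserves trade-off: opportunity cost rises
# with reserves while expected default loss falls.
import numpy as np
from scipy.optimize import minimize_scalar

r = 0.06          # opportunity cost rate (e.g., domestic lending rate proxy)
loss = 50.0       # loss given sovereign default, in USD billions (assumed)

def default_prob(reserves):
    # Assumed decreasing default probability in reserves (logistic form)
    return 1.0 / (1.0 + np.exp(0.3 * (reserves - 20.0)))

def total_cost(reserves):
    return r * reserves + default_prob(reserves) * loss

opt = minimize_scalar(total_cost, bounds=(0.0, 100.0), method="bounded")
print(f"optimum reserves ~ {opt.x:.1f} bn USD, total cost {opt.fun:.2f}")
```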

Financial Distress Prediction Using Adaboost and Bagging in Pakistan Stock Exchange

  • TUNIO, Fayaz Hussain;DING, Yi;AGHA, Amad Nabi;AGHA, Kinza;PANHWAR, Hafeez Ur Rehman Zubair
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.8 no.1
    • /
    • pp.665-673
    • /
    • 2021
  • Default has become a serious concern worldwide since the financial crisis. Previous work on predicting company bankruptcy provides evidence of decision support for financial and regulatory bodies; notwithstanding numerous advanced approaches, this area of study is not outmoded and requires additional research. The purpose of this research is to find the best classifier for detecting a company's default risk and bankruptcy. The study uses secondary time-series data from the Pakistan Stock Exchange (PSX), which has grown consistently in recent years and provided benefits to its stockholders, and examines several classifiers on their ability to correctly categorize default and non-default Pakistani companies listed on the PSX. Applying machine learning techniques to predict financial distress among these companies, our results indicate that multi-stage mixtures of classifiers provide noteworthy improvements over the individual classifiers; firms must therefore manage financial variables such as liquidity and profitability to avoid falling into liquidation. Moreover, Adaptive Boosting (AdaBoost) provides a significant boost to the performance of each classifier.
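
A short sketch comparing AdaBoost and Bagging over a shared decision-tree base learner with scikit-learn, echoing the classifier comparison described above; the synthetic data stand in for the PSX financial variables.

```python
# Hedged sketch: AdaBoost vs. Bagging for financial-distress classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, n_features=15, weights=[0.85, 0.15],
                           random_state=0)   # imbalanced distress events (assumed)

base = DecisionTreeClassifier(max_depth=3, random_state=0)
for name, clf in [
    ("AdaBoost", AdaBoostClassifier(estimator=base, n_estimators=200, random_state=0)),
    ("Bagging", BaggingClassifier(estimator=base, n_estimators=200, random_state=0)),
]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUROC = {auc:.3f}")
```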

Application of Statistical Models for Default Probability of Loans in Mortgage Companies

  • Jung, Jin-Whan
    • Communications for Statistical Applications and Methods
    • /
    • v.7 no.2
    • /
    • pp.605-616
    • /
    • 2000
  • Three primary concerns frequently raised by mortgage companies are introduced, and the corresponding statistical approaches to estimating default probability are examined. The statistical models considered in this paper are time series, logistic regression, decision tree, neural network, and discrete-time models. Their usage is illustrated on an artificially modified data set, and the models are evaluated in appropriate ways.
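
Of the approaches listed, the discrete-time model lends itself to a compact sketch: each loan is expanded into loan-month records and a logistic regression estimates the per-period default hazard. The panel below is invented for illustration.

```python
# Hedged sketch: discrete-time hazard model fit as logistic regression
# on loan-month records. All data are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
rows = []
for loan in range(300):
    ltv = rng.uniform(0.5, 1.0)
    for month in range(1, 37):                       # follow each loan up to 36 months
        p = 1 / (1 + np.exp(-(-6 + 4 * ltv + 0.02 * month)))
        default = rng.uniform() < p
        rows.append((loan, month, ltv, int(default)))
        if default:
            break                                    # loan exits the risk set
panel = pd.DataFrame(rows, columns=["loan", "month", "ltv", "default"])

X = sm.add_constant(panel[["ltv", "month"]])
fit = sm.Logit(panel["default"], X).fit(disp=0)
print(fit.params)   # per-period default hazard as a function of LTV and loan age
```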

Improving the Performance of Statistical Context-Sensitive Spelling Error Correction Techniques Using Default Operation Algorithm (Default 연산 알고리즘을 적용한 통계적 문맥의존 철자오류 교정 기법의 성능 향상)

  • Lee, Jung-Hun;Kim, Minho;Kwon, Hyuk-Chul
    • Korean Language Information Society: Conference Proceedings (한국어정보학회:학술대회논문집)
    • /
    • 2016.10a
    • /
    • pp.165-170
    • /
    • 2016
  • The context-sensitive spelling error correction proposed in this paper is a statistical method based on the noisy channel model introduced by Shannon, the model most widely used in statistical language processing. To improve on the weaknesses of prior work, we propose a model that applies a Default operation, improving the generation of error candidates for correction-target words and the way the statistical data are stored. The prior model ran its correction experiments with the edit-distance constraint on error generation set to 1; under the same conditions, the proposed model achieved higher detection and correction accuracy, and it remained reliable in detection and correction even when the edit-distance constraint on error words was relaxed.
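
The noisy channel model the abstract builds on chooses the correction c maximizing P(c)·P(w|c). A minimal sketch, assuming a tiny unigram corpus and a uniform error model over edit-distance-1 candidates (the paper's Default operation and wider edit-distance handling are not reproduced):

```python
# Hedged sketch of noisy-channel spelling correction.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
counts = Counter(corpus)
total = sum(counts.values())

def edits1(word):
    # All strings within edit distance 1 (deletions, replacements, insertions)
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word):
    # P(c): unigram language model; P(w|c): uniform over edit-distance-1 errors
    candidates = [c for c in edits1(word) | {word} if c in counts] or [word]
    return max(candidates, key=lambda c: counts[c] / total)

print(correct("cet"))   # -> "cat"
```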

Bayesian Inference for Predicting the Default Rate Using the Power Prior

  • Kim, Seong-W.;Son, Young-Sook;Choi, Sang-A
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.3
    • /
    • pp.685-699
    • /
    • 2006
  • Commercial banks and related institutions have developed internal models to better quantify their financial risks. Since an appropriate credit risk model plays a very important role in risk management at financial institutions, a more accurate model for forecasting credit losses is needed, together with statistical inference on that model. In this paper, we propose a new method for estimating a default rate: a Bayesian approach using the power prior, which allows historical data to be incorporated into the estimate. Inference on current data can be more reliable when similar data from previous studies exist; Ibrahim and Chen (2000) utilize such data to characterize the power prior, which brings historical data into the estimation of the model parameters. We demonstrate our methodology on a real SOHO (small office/home office) loan data set and also perform a simulation study.
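
In the conjugate beta-binomial case, the power prior has a closed form: the historical likelihood enters raised to a weight a0, so the prior becomes Beta(α0 + a0·y0, β0 + a0·(n0 − y0)) before the usual update with current data. A sketch with invented counts:

```python
# Hedged sketch of a power-prior update for a default rate (beta-binomial case).
# Counts and the weight a0 are illustrative assumptions.
from scipy.stats import beta

alpha0, beta0 = 1.0, 1.0          # vague initial Beta prior
y_hist, n_hist = 12, 400          # historical defaults / loans
y_cur, n_cur = 8, 300             # current defaults / loans
a0 = 0.5                          # discounting weight on the historical data

# Power prior: Beta(alpha0 + a0*y_hist, beta0 + a0*(n_hist - y_hist)),
# then the usual conjugate update with the current data.
post_a = alpha0 + a0 * y_hist + y_cur
post_b = beta0 + a0 * (n_hist - y_hist) + (n_cur - y_cur)

post = beta(post_a, post_b)
print(f"posterior mean default rate: {post.mean():.4f}")
print("95% credible interval:", post.interval(0.95))
```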

A Multiple Test of a Poisson Mean Parameter Using Default Bayes Factors (디폴트 베이즈인자를 이용한 포아송 평균모수에 대한 다중검정)

  • 김경숙;손영숙
    • Journal of Korean Society for Quality Management
    • /
    • v.30 no.2
    • /
    • pp.118-129
    • /
    • 2002
  • A multiple test of the mean parameter λ in the Poisson model is considered using Bayes factors. Under noninformative improper priors, the intrinsic Bayes factor (IBF) of Berger and Pericchi (1996) and the fractional Bayes factor (FBF) of O'Hagan (1995), known as default or automatic Bayes factors, are used to select among three models: M₁: λ < λ₀, M₂: λ = λ₀, and M₃: λ > λ₀. The posterior probability of each competing model is computed using the default Bayes factors. Finally, the theoretical results are applied to simulated and real data.
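
A simplified sketch of the three-way comparison, using proper truncated Gamma priors and direct numerical marginal likelihoods rather than the intrinsic or fractional Bayes factor machinery of the paper; λ₀, the prior, and the data are illustrative assumptions.

```python
# Hedged sketch: posterior model probabilities for M1: lam < lam0,
# M2: lam = lam0, M3: lam > lam0 under a Poisson likelihood.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma, poisson

rng = np.random.default_rng(0)
lam0 = 2.0
x = rng.poisson(2.6, size=30)          # simulated counts

def lik(lam):
    return np.prod(poisson.pmf(x, lam))

prior = gamma(a=2.0, scale=1.0)        # assumed base prior on lambda

# Marginal likelihoods: point mass at lam0 for M2, truncated priors for M1, M3
m2 = lik(lam0)
z_lo = prior.cdf(lam0)                 # normalizers of the truncated priors
m1 = quad(lambda l: lik(l) * prior.pdf(l) / z_lo, 0, lam0)[0]
m3 = quad(lambda l: lik(l) * prior.pdf(l) / (1 - z_lo), lam0, 50)[0]

marg = np.array([m1, m2, m3])
print("posterior model probabilities:", marg / marg.sum())   # equal prior odds
```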