• Title/Summary/Keyword: Credit Splitting


Exclusion of Housewives in the National Pension Plan in South Korea and Suggestions for Improvement (주부의 연금수급권의 문제점과 개선방안 : 우리나라 국민연금제도에 기초하여)

  • 문숙재;윤소영;최자경
    • Journal of Families and Better Life / v.21 no.1 / pp.61-72 / 2003
  • This study examines the problematic fact that most housewives are excluded from receiving benefits under the National Pension Plan in South Korea. The Plan assigns no value to housework or household production, which in turn discourages full-time housewives from participating voluntarily. In this article, I propose utilizing Credit Splitting and Pension Sharing to take full-time housewives' economic contribution into account in the National Pension Plan, and I discuss the scope and application methods of both. For this study, I analyzed data on 11,967 unemployed married women living with spouses, published in "Research Data on Everyday Life Time Usage" by the Korea National Statistical Office in 1999. The value of full-time housewives' labor varies depending on the method of estimation; however, all estimated values exceed the average value assigned to housewives by the National Pension Corporation. It is clear that full-time housewives' unpaid labor contributes a great deal to the formation of household property and wealth, which is a valid reason for Pension Sharing and Credit Splitting. This article also provides the logical factors to consider in the process of Pension Sharing and Credit Splitting, which can be used to develop computerized software for determining a full-time housewife's labor value at the time of divorce or for any other purpose.
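The core mechanism the abstract proposes — credit splitting, i.e. pooling pension credits accrued during the marriage and dividing them between spouses — can be sketched in a few lines. The 50/50 split ratio, the credit amounts, and the function name here are illustrative assumptions, not figures or rules from the National Pension Plan.

```python
# Hypothetical sketch of credit splitting: pension credits accrued during
# the marriage are pooled and divided between the spouses at divorce.
# The equal split ratio and the example amounts are assumptions for
# illustration only.

def split_credits(earner_credits, spouse_credits, split_ratio=0.5):
    """Pool the credits accrued during the marriage and split them."""
    pooled = earner_credits + spouse_credits
    share = pooled * split_ratio
    return share, pooled - share

# Example: one spouse accrued 240 monthly credits; the other, a full-time
# housewife with no recognized contributions, accrued 0.
a, b = split_credits(240, 0)
```

Under an equal split, each spouse ends up with half of the pooled 240 credits, which is the mechanism by which the housewife's unpaid contribution becomes a pension entitlement of her own.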

Tree-structured Clustering for Mixed Data (혼합형 데이터에 대한 나무형 군집화)

  • Yang Kyung-Sook;Huh Myung-Hoe
    • The Korean Journal of Applied Statistics / v.19 no.2 / pp.271-282 / 2006
  • The aim of this study is to propose a tree-structured clustering method for mixed data. We suggest a scaling method to reduce the variable-selection bias among categorical variables. In numerical examples, such as the credit data and the German credit data, we note several differences between tree-structured clustering and K-means clustering.
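The abstract does not spell out the authors' scaling method, but the problem it addresses — putting numeric and categorical variables on a comparable footing before clustering so that many-level factors do not dominate — can be sketched with a common convention: z-score the numeric columns and one-hot encode the categorical ones, down-weighting each dummy by the square root of the number of levels. This is an assumption for illustration, not the paper's exact procedure.

```python
# Minimal sketch of scaling mixed-type data before clustering.
# Numeric columns are z-scored; categorical columns are one-hot encoded
# with a 1/sqrt(#levels) weight so a factor with many levels does not
# dominate the distance. This is a common convention, not necessarily
# the scaling proposed in the paper.
import math

def scale_mixed(rows, numeric_idx, categorical_idx):
    n = len(rows)
    out = [[] for _ in rows]
    for j in numeric_idx:
        col = [r[j] for r in rows]
        mu = sum(col) / n
        sd = math.sqrt(sum((x - mu) ** 2 for x in col) / n) or 1.0
        for i, x in enumerate(col):
            out[i].append((x - mu) / sd)
    for j in categorical_idx:
        levels = sorted({r[j] for r in rows})
        w = 1.0 / math.sqrt(len(levels))  # down-weight many-level factors
        for i, r in enumerate(rows):
            out[i].extend(w * (1.0 if r[j] == lev else 0.0) for lev in levels)
    return out

rows = [[1.0, "a"], [2.0, "b"], [3.0, "a"]]
scaled = scale_mixed(rows, numeric_idx=[0], categorical_idx=[1])
```

The scaled rows can then be fed to any distance-based clustering routine, tree-structured or K-means alike.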

Importance sampling with splitting for portfolio credit risk

  • Kim, Jinyoung;Kim, Sunggon
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.327-347 / 2020
  • We consider a credit portfolio with highly skewed exposures, in which a small number of obligors have very high exposures compared to the others. For the Bernoulli mixture model with highly skewed exposures, we propose a new importance sampling scheme to estimate the tail loss probability over a threshold and the corresponding expected shortfall. We stratify the sample space of default events into two subsets, one of which consists of the events in which the obligors with heavy exposures default simultaneously. We expect that typical tail loss events belong to this set. In our proposed scheme, the tail loss probability and the expected shortfall corresponding to this type of event are estimated by conditional Monte Carlo, which results in variance reduction. We analyze the properties of the proposed scheme mathematically. In a numerical study, the performance of the proposed scheme is compared with that of an existing importance sampling method.
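The stratification idea in the abstract can be sketched as follows: split the default-event space by whether the few heavy-exposure obligors all default, estimate the tail loss probability conditionally within the rare stratum (where heavy defaults are forced rather than waited for), and combine the two parts. For simplicity this sketch uses independent Bernoulli defaults; the paper works with a Bernoulli mixture model and a full importance-sampling change of measure, which this sketch does not implement.

```python
# Stratified estimate of P(loss > threshold):
#   stratum A   = all heavy-exposure obligors default (conditional MC),
#   stratum A^c = everything else (plain MC with an indicator).
# Independent Bernoulli defaults are an illustrative simplification of
# the paper's Bernoulli mixture model.
import random

def tail_prob_stratified(exposures, probs, heavy, threshold, n_sim, seed=0):
    rng = random.Random(seed)
    p_a = 1.0
    for i in heavy:
        p_a *= probs[i]                    # P(A): all heavy obligors default
    light = [i for i in range(len(exposures)) if i not in heavy]
    heavy_loss = sum(exposures[i] for i in heavy)

    # Stratum A: condition on all heavy obligors defaulting.
    hits_a = 0
    for _ in range(n_sim):
        loss = heavy_loss + sum(exposures[i] for i in light
                                if rng.random() < probs[i])
        hits_a += loss > threshold
    part_a = p_a * hits_a / n_sim

    # Stratum A^c: plain Monte Carlo, discarding draws that fall in A.
    hits_c = 0
    for _ in range(n_sim):
        d = [rng.random() < probs[i] for i in range(len(exposures))]
        if all(d[i] for i in heavy):
            continue                       # this draw belongs to stratum A
        loss = sum(e for e, di in zip(exposures, d) if di)
        hits_c += loss > threshold
    return part_a + hits_c / n_sim

# Toy skewed portfolio: two heavy obligors, four small ones.
exposures = [100.0, 80.0, 1.0, 1.0, 1.0, 1.0]
probs = [0.01, 0.01, 0.1, 0.1, 0.1, 0.1]
est = tail_prob_stratified(exposures, probs, heavy=[0, 1],
                           threshold=150.0, n_sim=20000)
```

In this toy portfolio a loss above 150 is only possible when both heavy obligors default, so the conditional part of the estimator does all the work; a plain Monte Carlo estimator would need on the order of 10^4 samples just to see one such event.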

Improved Decision Tree Algorithms by Considering Variables Interaction (교호효과를 고려한 향상된 의사결정나무 알고리듬에 관한 연구)

  • Kwon, Keunseob;Choi, Gyunghyun
    • Journal of Korean Institute of Industrial Engineers / v.30 no.4 / pp.267-276 / 2004
  • Much of the previous research on decision trees focuses on splitting criteria and the optimization of tree size. Nowadays, the quantity of data is increasing and the relationships among variables are becoming very complex, so trees come to have a large number of unnecessary nodes and leaves; consequently, the reliability of the decision tree's explanations and forecasts falls off. In this paper, we propose decision tree algorithms that consider the interaction of predictor variables. A generic algorithm, the k-1 Algorithm, which deals with interactions through combinations of all predictor variables, is presented, followed by an extended version, the k-k Algorithm, which considers interactions at every k-th depth through combinations of some predictor variables. We also present an improved algorithm obtained by introducing a control parameter into these algorithms. The algorithms are tested on real-world credit card data, census data, bank data, etc.
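The abstract describes letting splits act on combinations of predictor variables rather than one variable at a time. A minimal sketch of that idea is to augment the data with pairwise "interaction" columns formed by joining the levels of two predictors, so that a single split can separate joint levels that no single-variable split can. The function and column names are illustrative; this is the spirit of the approach, not the paper's k-1 or k-k algorithms.

```python
# Append a joined "interaction" column for every pair of original
# columns, so a tree split on the new column acts on joint levels.
from itertools import combinations

def add_interactions(rows, col_names):
    """Return rows and names augmented with pairwise interaction columns."""
    new_names = list(col_names)
    out = [list(r) for r in rows]
    for a, b in combinations(range(len(col_names)), 2):
        new_names.append(f"{col_names[a]}*{col_names[b]}")
        for r in out:
            r.append(f"{r[a]}|{r[b]}")
    return out, new_names

rows = [["young", "low"], ["old", "high"]]
aug, names = add_interactions(rows, ["age", "income"])
```

Any standard tree learner can then be run on the augmented columns; the variable-selection bias the paper's control parameter addresses arises because interaction columns have many more levels than the originals.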

An Experimental Study on the Pullout Failure Behavior of Post-installed Concrete Set Anchor (후설치 콘크리트 세트앵커의 인발파괴거동에 관한 실험적 연구)

  • Suth, Ratha;Yoo, Seung-Woon
    • Journal of the Korea institute for structural maintenance and inspection / v.18 no.1 / pp.40-47 / 2014
  • Recently, the use of post-installed concrete set anchors has been increasing, because this construction method makes it flexible and easy to attach or fix structural members when repairing, reinforcing, or remodeling structures. Accordingly, Korean designers and builders depend on foreign design codes, since there are no exact domestic anchor design codes that they can trust. An anchor in plain concrete loaded in tension exhibits various failure modes, such as concrete breakout, splitting, steel failure, pull-out, and side-face blowout, depending on the tensile strength of the steel, the strength of the concrete, the embedment depth, the spacing, the edge distance, and the presence of adjacent anchors. The objective of this study is to investigate the effects of variations in anchor embedment depth, spacing, and edge distance on the pull-out fracture behavior of post-installed concrete set anchors embedded in plain concrete.

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models, comparing performance among predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and a DNN (Deep Neural Network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and widely used in both research and practice to this day; the author suggests a score model that utilizes five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduces a logit model to complement some limitations of the previous models. Furthermore, Elmer and Borowski (1988) develop and examine a rule-based, automated system that conducts financial analysis of savings and loans.
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzes financial ratios and develops a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) construct prediction models using various techniques, including artificial neural networks. Yang (1996) introduces multiple discriminant analysis and a logit model. Kim and Kim (2001) utilize artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy, 71.1%, and the Logit model the lowest, 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. When we examine the classification accuracy for each interval, the Logit model has the highest accuracy, 100%, for predicted default probabilities of 0-10%, but a relatively low accuracy of 61.5% for 90-100%.
On the other hand, Random Forest, XGBoost, LightGBM, and the DNN show more desirable results: they achieve higher accuracy for both the 0-10% and 90-100% intervals of the predicted probability of default, but lower accuracy around the 50% interval. In terms of the distribution of samples across the predicted probability of default, both the LightGBM and XGBoost models place a relatively large number of samples in the 0-10% and 90-100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models, since they classify a large number of cases into the two extreme intervals of the predicted probability of default, even allowing for their relatively low classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. However, each predictive model has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, more comprehensive ensemble models can be constructed that contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
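The evaluation the abstract describes — classification accuracy computed separately within ten equal intervals of the predicted default probability — can be sketched as follows. The probabilities and labels here are toy values, not KSURE data, and the 0.5 classification cutoff is an assumption.

```python
# Per-interval classification accuracy: bucket each case by its predicted
# default probability (ten equal intervals) and compute accuracy per bucket.
# Toy inputs; the 0.5 cutoff is an illustrative assumption.

def accuracy_by_decile(probs, labels, cutoff=0.5):
    bins = [[0, 0] for _ in range(10)]   # [correct, total] per interval
    for p, y in zip(probs, labels):
        k = min(int(p * 10), 9)          # 0.0-0.1 -> bin 0, ..., p=1.0 -> bin 9
        pred = p >= cutoff
        bins[k][0] += pred == y
        bins[k][1] += 1
    return [c / t if t else None for c, t in bins]

probs = [0.05, 0.08, 0.55, 0.95, 0.97]
labels = [False, True, True, True, True]
acc = accuracy_by_decile(probs, labels)
```

Intervals with `None` received no cases; comparing these per-interval accuracies across models (rather than a single overall accuracy) is what distinguishes, say, a model that is confident and right at the extremes from one that is merely right on average.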