The Advancement of Underwriting Skill by Selective Risk Acceptance (보험Risk 세분화를 통한 언더라이팅 기법 선진화 방안)
The Journal of the Korean Life Insurance Medical Association, v.24, pp.49-78, 2005
Ⅰ. Research Background and Purpose
o With a household participation rate of 86%, the Korean insurance market has entered its maturity phase, and distribution is shifting from the traditional captive channel to multiple channels such as bancassurance, online-only insurers, and fast-growing telemarketing (TM) sales.
o With the successive launch of advanced health products such as LTC (long-term care), CI (critical illness), and indemnity medical insurance, underwriting urgently needs to prepare from the standpoint of insurance risk management.
o As products, marketing, and other areas closely tied to underwriting continue to change, the modernization of underwriting acceptance techniques is urgently required, and building advanced underwriting techniques that properly classify and evaluate risk is essential.
o Ultimately, this study seeks measures that satisfy customers' diverse coverage needs and contribute to maximizing an insurer's overall profit by strengthening the competitiveness of products, marketing, and underwriting.

Ⅱ. Risk Segmentation Cases in Advanced Insurance Markets
1. Premium Differentiation by Environmental Risk
(1) Premium loading for hazardous occupations
o In most advanced markets, including the US and Europe, premiums are differentiated according to the insured's occupational risk at the time of application.
o The occupation classification and loading method vary with the benefit purchased; separate methods are used for ordinary death, accidental death, waiver of premium, and DI.
o Loadings are applied either as a multiple of the standard risk rate or as a flat extra premium per unit of sum insured (a numeric sketch of both methods follows this section).
- For miners, accidental death cover is charged at 300% of the standard rate, while ordinary death cover carries a flat extra of $2.95 per $1,000 of sum insured.
(2) Premium loading for hazardous avocations
o Because accidents related to hobbies occur persistently, avocations are also recognized as risk factors and premiums are differentiated accordingly.
o The extra premium is charged at a fixed rate per unit of sum insured (regardless of the amount purchased); for some hazardous avocations, such as new extreme sports, statistics are insufficient and the underwriter sets the loading rate.
- Paragliding: per year
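As a rough numeric illustration of the two loading methods above (a multiple of the standard rate versus a flat extra per $1,000 of sum insured), the following minimal Python sketch applies the miner example from the text; the standard premium and face amount are made-up placeholders, and the function names are hypothetical.

```python
# Illustrative sketch of the two occupational-loading methods described above.
# The standard premium and sum insured are made-up placeholders; the 300%
# multiple and the $2.95-per-$1,000 flat extra follow the miner example.

def rated_premium_multiple(standard_premium: float, multiple: float) -> float:
    """Loading as a multiple of the standard risk rate (e.g. 300% -> 3.0)."""
    return standard_premium * multiple

def rated_premium_flat_extra(standard_premium: float,
                             sum_insured: float,
                             extra_per_1000: float) -> float:
    """Loading as a flat extra premium per $1,000 of sum insured."""
    return standard_premium + (sum_insured / 1000.0) * extra_per_1000

if __name__ == "__main__":
    standard = 120.0         # hypothetical standard annual premium
    sum_insured = 100_000.0  # hypothetical face amount

    # Miner, accidental death cover: 300% of the standard rate.
    print(rated_premium_multiple(standard, 3.0))                  # 360.0

    # Miner, ordinary death cover: $2.95 flat extra per $1,000.
    print(rated_premium_flat_extra(standard, sum_insured, 2.95))  # 415.0
```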
Bond rating is regarded as an important event for measuring the financial risk of companies and for determining investors' returns. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. Their major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world.

Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising method for classification and regression. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM is a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields.

However, three major drawbacks can degrade SVM's performance. First, SVM was originally proposed for binary classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often cause a default classifier to be built around a skewed boundary, reducing classification accuracy.

SVM ensemble learning is one machine learning approach to these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights of misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than those predicted correctly.
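The reweighting loop described above can be sketched in a few lines. The following is a minimal binary AdaBoost sketch, not the paper's implementation: labels in {-1, +1} and scikit-learn decision stumps as weak learners are assumptions made for illustration.

```python
# Minimal sketch of the AdaBoost reweighting loop described above (binary
# case, labels in {-1, +1}), with decision stumps as weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)            # uniform initial observation weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y])     # weighted training error
        if err == 0 or err >= 0.5:     # stop when the weak learner is useless
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # Increase weight on misclassified observations, decrease on correct
        # ones, so later rounds focus on the hard examples.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    # Composite classifier: weighted vote of the sequentially trained stumps.
    agg = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return np.sign(agg)
```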
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, thereby reinforcing the training of misclassified observations from the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multi-class prediction problem. Because MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process accounts for the geometric mean-based accuracy and errors across classes.

This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not arise by chance. In each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. These steps yield results for the classifiers on each of the 30 experiments.

In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). In terms of geometric mean-based prediction accuracy, MGM-Boost (28.12%) likewise outperforms AdaBoost (24.65%) and SVM (15.42%). A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs from that of the AdaBoost and SVM classifiers at the 1% significance level. These results show that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
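The abstract does not give the exact formula behind MGM-Boost, so the sketch below assumes a common definition of geometric mean-based accuracy, the geometric mean of per-class recalls, which collapses to zero whenever any class is entirely missed and therefore penalizes the skewed boundaries discussed earlier. The evaluation loop mirrors the stated protocol of 10-fold cross validation repeated three times with different random seeds; SVC is a stand-in for any of the three classifiers, and X, y are assumed to be NumPy arrays.

```python
# Sketch of geometric mean-based accuracy (geometric mean of per-class
# recalls) and the 3 x 10-fold cross-validation protocol described above.
import numpy as np
from sklearn.metrics import recall_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    # Per-class recall; the geometric mean is zero if any class is ignored,
    # so imbalanced "default" classifiers score poorly on this measure.
    per_class = recall_score(y_true, y_pred, average=None)
    return float(np.prod(per_class) ** (1.0 / len(per_class)))

def repeated_cv_scores(X, y, seeds=(0, 1, 2)):
    scores = []
    for seed in seeds:  # three repetitions with different random seeds
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
        for train_idx, test_idx in cv.split(X, y):
            clf = SVC()  # placeholder; swap in any classifier under study
            clf.fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            scores.append(geometric_mean_accuracy(y[test_idx], pred))
    return scores  # 30 fold-level scores, the inputs to the paired t-test
```

The 30 fold-level scores returned per classifier are the quantities compared with a t-test, as in the study's comparison of MGM-Boost, AdaBoost, and SVM.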