• Title/Summary/Keyword: 금융정보시스템 (Financial Information Systems)


The Effects of CEO's Narcissism on Diversification Strategy and Performance in an Economic Downturn: The Moderating Role of Corporate Governance System (경기침체기의 다각화전략과 성과에 대한 최고경영자 나르시시즘의 영향과 기업지배구조의 조절효과에 대한 연구)

  • Yoo, Jae-Wook
    • Management & Information Systems Review / v.35 no.4 / pp.1-19 / 2016
  • Researchers in strategic management have focused on identifying the effects of CEOs' demographic characteristics and experience on firms' strategic choices and performance. They have, however, failed to identify the effects of CEOs' psychological characteristics because of the difficulties of data collection and variable measurement. To overcome this limitation of prior research, this study is designed to achieve two objectives. The first is to examine the effect of CEO narcissism on the diversification strategy and performance of corporations listed on the Korean securities market in an economic downturn. The second is to examine the moderating effects of various corporate governance systems related to board and/or ownership structures on those relationships. The empirical setting was a multi-year (2011-2014) sample of large corporations listed on the Korean securities market. The hypotheses were tested through multiple regression analyses in two separate models. The results show that CEO narcissism is positively related to the diversification of large listed corporations. Regarding the moderating effects, the stake of institutional investors weakens the positive relationship between CEO narcissism and diversification. These findings imply that CEO narcissism can intensify the tendency of Korean corporations to adopt high-risk, high-return strategies in an economic downturn. Firms might therefore use CEO narcissism to drastically restructure their business portfolios in a downturn. However, Korean corporations should be cautious in seeking to maximize the positive effect of CEO narcissism: they might use institutional investors as a corporate governance mechanism to monitor and control CEO opportunism in diversification decisions during an economic downturn.
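The moderated regression design described above (a main effect plus an interaction with the moderator) can be sketched as follows. The variable names, coefficients, and synthetic data are illustrative assumptions, not the authors' dataset or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical variables: CEO narcissism score, institutional ownership stake.
narcissism = rng.normal(size=n)
inst_own = rng.normal(size=n)

# Synthetic diversification outcome with a positive narcissism effect (0.8)
# weakened by institutional ownership (interaction -0.3), mirroring the
# direction of the reported findings.
diversification = (0.5 + 0.8 * narcissism - 0.3 * narcissism * inst_own
                   + rng.normal(scale=0.1, size=n))

# Moderated regression: y = b0 + b1*X + b2*M + b3*(X*M)
X = np.column_stack([np.ones(n), narcissism, inst_own, narcissism * inst_own])
coef, *_ = np.linalg.lstsq(X, diversification, rcond=None)
b0, b1, b2, b3 = coef  # b3 < 0 indicates the weakening (moderating) effect
```

A significant negative b3 is what "weakens the positive relationship" means operationally: the slope of diversification on narcissism is b1 + b3·inst_own, which shrinks as institutional ownership grows.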


Empirical Analysis on Bitcoin Price Change by Consumer, Industry and Macro-Economy Variables (비트코인 가격 변화에 관한 실증분석: 소비자, 산업, 그리고 거시변수를 중심으로)

  • Lee, Junsik;Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.195-220 / 2018
  • In this study, we conducted an empirical analysis of the factors that affect changes in the Bitcoin closing price. Previous studies have focused on the security of the blockchain system, the economic ripple effects of cryptocurrency, legal implications, and consumer acceptance of cryptocurrency. Cryptocurrency has been studied in various areas, and many researchers and institutions, including governments regardless of country, have tried to utilize cryptocurrencies and apply their underlying technology. Despite the rapid and dramatic changes in cryptocurrency prices and the growth of their effects, however, empirical studies of the factors affecting cryptocurrency price changes have been lacking; there were only a few limited studies, business reports, and short working papers. It is therefore necessary to determine which factors affect changes in the Bitcoin closing price. For the analysis, hypotheses were constructed along three dimensions (consumer, industry, and macro-economy), and time-series data were collected for the variables of each dimension. Consumer variables consist of search traffic for Bitcoin, for a Bitcoin ban, for ransomware, and for war. Industry variables comprise GPU vendors' and memory vendors' stock prices. Macro-economy variables include U.S. dollar index futures, the FOMC policy interest rate, and the WTI crude oil price. Using these variables, we ran a time-series regression to find the relationship between them and changes in the Bitcoin closing price. Before the regression, we performed unit-root tests to verify the stationarity of the time-series data and avoid spurious regression; we then ran the regression on the stationary data.
As a result, we found that changes in the Bitcoin closing price are negatively affected by search traffic for "Bitcoin ban" and by U.S. dollar index futures, while changes in GPU vendors' stock prices and in the WTI crude oil price showed positive effects. A "Bitcoin ban" directly determines whether Bitcoin trading continues or is abolished, which is why consumers reacted sensitively and it affected the closing price. GPUs are a raw material of Bitcoin mining, and since rising stock prices generally reflect growing sales of a company's products and services, increased GPU demand is indirectly reflected in GPU vendors' stock prices; consequently, GPU vendors' stock prices affect changes in the Bitcoin closing price. We also confirmed that U.S. dollar index futures moved in the opposite direction to the Bitcoin closing price, as gold does; since gold is considered a safe asset, this suggests consumers regard Bitcoin as a safe asset. On the other hand, the WTI oil price moved in the same direction as the Bitcoin closing price, implying that Bitcoin is also regarded as an investment asset, like products in the raw-materials market. The variables that were not significant were search traffic for Bitcoin, for ransomware, and for war, memory vendors' stock prices, and the FOMC policy interest rate. For Bitcoin search traffic, we judged that interest in Bitcoin did not lead to purchases; search traffic did not reflect all of Bitcoin's demand, implying that some factors regulate and mediate Bitcoin purchases. For ransomware search traffic, it is hard to say that concern about ransomware determined overall Bitcoin demand, because only a few people were damaged by ransomware and the percentage of hackers demanding Bitcoin was low.
Moreover, such information-security problems are discrete events, not continuous issues. Search traffic for war was not significant: as in the stock market, war generally has a negative relation, but in exceptional cases such as the Gulf War it can move with stakeholders' profits and environment, and we consider this a similar case. Memory vendors' stock prices were insignificant because their flagship products are not the VRAM essential for Bitcoin mining. As for the FOMC policy interest rate, when rates are low, surplus capital is invested in securities such as stocks; however, Bitcoin's price fluctuations were so large that it was not recognized as an attractive commodity by consumers, and unlike the stock market, Bitcoin has no safety mechanisms such as circuit breakers or sidecars. Through this study, we verified which factors affect changes in the Bitcoin closing price and interpreted why such changes happened. In addition, by establishing the characteristics of Bitcoin as both a safe asset and an investment asset, we provide a guide for how consumers, financial institutions, and government organizations can approach cryptocurrency. Moreover, by corroborating the factors affecting changes in the Bitcoin closing price, researchers gain clues about which factors should be considered in future cryptocurrency studies.
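The two-step procedure above (check for unit roots, then regress stationary series) is commonly implemented with statsmodels' adfuller followed by OLS. The numpy-only sketch below illustrates the same concern by regressing first-differenced series; every variable is a synthetic stand-in, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Random-walk increments for synthetic stand-ins of the paper's variables.
d_dollar = rng.normal(size=n)     # U.S. dollar index futures changes
d_gpu = rng.normal(size=n)        # GPU vendors' stock price changes

# Bitcoin price changes load negatively on the dollar index and positively
# on the GPU stock, mirroring the signs reported in the abstract.
d_btc = -0.6 * d_dollar + 0.4 * d_gpu + rng.normal(scale=0.1, size=n)

# The level series are unit-root (random-walk) processes; regressing levels
# on levels risks spurious regression, so we regress the differenced series.
btc = np.cumsum(d_btc)
dollar_index = np.cumsum(d_dollar)
gpu_stock = np.cumsum(d_gpu)

X = np.column_stack([np.ones(n - 1), np.diff(dollar_index), np.diff(gpu_stock)])
coef, *_ = np.linalg.lstsq(X, np.diff(btc), rcond=None)
_, beta_dollar, beta_gpu = coef   # expect beta_dollar < 0, beta_gpu > 0
```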

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of differences in capital structure and debt-to-equity ratio, they are harder to forecast. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of projects. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but those models target companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically have high liquidity risks. The construction industry is capital-intensive, requires significant investment in long-term, large-scale projects, and has comparatively longer payback periods than other industries. Given this unique capital structure, the criteria used to judge the financial risk of companies in general are difficult to apply to the construction industry.
The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of bankruptcy with a simple formula, classifying the result into three categories and evaluating corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk difficult to forecast. With the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology. Pattern recognition, a representative application area of machine learning, is applied by analyzing patterns in a company's financial information and judging whether the pattern belongs to the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining them.
Existing studies using the traditional Z-score technique or machine-learning-based bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups (large, medium, and small) by capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
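As a rough illustration of the adaptive reweighting that gives AdaBoost its name, the sketch below implements AdaBoost from scratch with decision stumps on synthetic "financial ratio" data. It is not the authors' implementation; in practice a library such as scikit-learn's AdaBoostClassifier would typically be used.

```python
import numpy as np

def fit_stump(X, y, w):
    """Find the weighted-error-minimizing decision stump (feature, threshold, polarity)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1 / n)                     # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        j, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # stump weight in the final vote
        pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)         # up-weight misclassified firms
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, p in ensemble)
    return np.where(score > 0, 1, -1)

# Synthetic "financial ratios" with bankrupt (-1) vs. healthy (+1) labels.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
model = adaboost(X, y)
acc = (predict(model, X) == y).mean()
```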

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. Many efforts have been made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open-source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open-source software developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. This study therefore aimed to identify the practical trend of AI technology development by analyzing open-source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on Github from 2000 to July 2018 and examined the development trends of major technologies in detail by applying text-mining techniques to topic information, which indicates the characteristics of the collected projects and their technical fields. The results show that the number of software development projects per year was below 100 until 2013, but it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of AI-related open-source projects rose rapidly in 2016 (2,559 OSS projects).
The number of projects initiated in 2017 was 14,213, almost four times the total number generated from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras, which showed high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were also frequently appearing topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging appeared at the top of the list, although they had not been at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality. The ranks of convolutional neural networks and reinforcement learning changed slightly between the two measures.
Examining the trend of technology development through appearance frequency and degree centrality, machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although deep learning showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015; in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
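Degree centrality on a topic co-occurrence network, as used in the topic network analysis above, can be sketched without any graph library. The per-project topic lists below are invented examples, not the Github data; the normalization (degree divided by n-1) matches the convention used by networkx.

```python
from itertools import combinations

# Hypothetical per-project topic tags (stand-ins for Github project topics).
projects = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "computer-vision"],
    ["deep-learning", "tensorflow", "keras"],
    ["machine-learning", "reinforcement-learning"],
]

# Build an undirected co-occurrence graph: topics sharing a project are linked.
edges = set()
for topics in projects:
    for a, b in combinations(sorted(set(topics)), 2):
        edges.add((a, b))

nodes = {t for e in edges for t in e}
degree = {t: 0 for t in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Normalized degree centrality: number of neighbors divided by (n - 1).
centrality = {t: d / (len(nodes) - 1) for t, d in degree.items()}
top = max(centrality, key=centrality.get)
```

On this toy graph, "machine-learning" co-occurs with four of the five other topics, so it has the highest degree centrality, which is exactly the kind of ranking the study computes per phase.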

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial-intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, a total of 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. Forecasting KOSPI 200 Index return volatility, the MSE metric shows better results for asymmetric GARCH models such as E-GARCH or GJR-GARCH, consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; only the polynomial kernel shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility rises, buy volatility today; if it falls, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility itself cannot be traded; our simulation results are nevertheless meaningful because the Korea Exchange introduced tradable volatility futures contracts in November 2014. In the testing period, trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH: the profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel produces higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also trades more frequently. This study has some limitations. Our models are based solely on SVR, and other artificial-intelligence models should be explored for better performance. We do not consider trading costs such as brokerage commissions and slippage.
The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
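The GARCH(1,1) conditional-variance recursion at the core of both estimation approaches, together with the IVTS entry rule quoted above, can be sketched as follows. The parameter values and the return series are illustrative assumptions, not the paper's estimates or KOSPI 200 data.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion: s2[t] = omega + alpha*r[t-1]^2 + beta*s2[t-1]."""
    s2 = np.empty(len(returns) + 1)
    s2[0] = returns.var()              # a common initialization choice
    for t in range(len(returns)):
        s2[t + 1] = omega + alpha * returns[t] ** 2 + beta * s2[t]
    return s2                          # s2[-1] is tomorrow's variance forecast

def ivts_position(s2):
    """IVTS entry rule: buy volatility (+1) if tomorrow's forecast rises,
    sell (-1) if it falls, otherwise hold the existing position."""
    pos, last = [], 0
    for today, tomorrow in zip(s2[:-1], s2[1:]):
        if tomorrow > today:
            last = 1
        elif tomorrow < today:
            last = -1
        pos.append(last)
    return pos

rng = np.random.default_rng(3)
r = rng.normal(scale=0.01, size=250)   # stand-in for daily index returns
s2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.9)
positions = ivts_position(s2)
```

In the MLE approach omega, alpha, and beta maximize the likelihood of the return series; in the SVR-based approach a support vector regression is fitted to the same recursion's inputs and outputs, which is the substitution the paper investigates.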

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. We approach the model building from two perspectives. The first is the analysis period: we divide it into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two, or three years later. In total, six prediction models are developed and analyzed. We employ the decision tree technique, the most widely used prediction method, which builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Among the well-known decision tree induction algorithms (CHAID, CART, QUEST, C5.0, etc.), we use C5.0, the most recently developed, which yields better performance than the others. We obtained rights-issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables: 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices.
For model building and testing, we used 10,925 financial analysis records of 658 listed firms. PASW Modeler 13 was used to build the C5.0 decision trees for the six prediction models. In total, 84 variables from the financial analysis data were selected as input variables for each model, and the rights-issue status (issued or not issued) was defined as the output variable. To develop the prediction models with the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for testing. The experimental results show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). This indicates that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The results also show that stability-related indices have a major impact on conducting rights issues in short-term prediction, whereas long-term prediction is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different industries show different patterns of rights issues. We conclude that stakeholders should consider stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression, and SVM.
Second, new prediction models should be developed and evaluated that include variables which capital-structure theory has identified as relevant to rights issues.
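C5.0 selects splits by information gain ratio. The sketch below computes that criterion for one candidate split on invented data, illustrating the mechanism rather than reproducing the C5.0 node in PASW Modeler (which adds rule sets, boosting options, and pruning); the "stability index" and labels are hypothetical.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels, threshold):
    """Information gain ratio of the binary split feature <= threshold,
    the criterion C5.0 uses to choose among candidate splits."""
    left = labels[feature <= threshold]
    right = labels[feature > threshold]
    n = len(labels)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    gain = entropy(labels) - cond
    # Split information penalizes splits into many small branches.
    split_info = entropy(np.array([0] * len(left) + [1] * len(right)))
    return gain / split_info if split_info > 0 else 0.0

# Toy example: a stability index that perfectly separates rights-issuers (1)
# from non-issuers (0); a perfect balanced split yields a gain ratio of 1.0.
stability = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
issued = np.array([1, 1, 1, 0, 0, 0])
gr = gain_ratio(stability, issued, threshold=0.5)
```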

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model for ex-ante prediction of the likelihood of credit-guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models, comparing the performance of Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better models to predict bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The development of financial distress and bankruptcy prediction originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and widely used in both research and practice to this day; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans.
Since the 1980s, researchers in Korea have also examined the prediction of financial distress and bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only the classification accuracy of each model over the entire sample. Most predictive models in this paper reach a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy at 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing Type II error, which causes more harmful operating losses for the guarantee company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. Examining accuracy by interval, the logit model has the highest accuracy (100%) for predicted default probabilities of 0-10%, but a relatively low accuracy of 61.5% for probabilities of 90-100%.
On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: higher accuracy for both the 0-10% and 90-100% intervals, but lower accuracy around the 50% interval. Regarding the distribution of samples across predicted probabilities, both LightGBM and XGBoost place relatively large numbers of samples in the 0-10% and 90-100% intervals. Although Random Forest has an accuracy advantage on a small number of cases, LightGBM or XGBoost could be more desirable models because they classify large numbers of cases into the two extreme intervals, even allowing for their relatively lower accuracy there. Considering the importance of Type II error and total prediction accuracy, XGBoost and DNN show superior performance; Random Forest and LightGBM follow with good results, while logistic regression performs worst. Each predictive model nonetheless has a comparative advantage under particular evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, more comprehensive ensemble models containing multiple classification machine learning models with majority voting could maximize overall performance.
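The interval-wise comparison described above can be sketched by binning predicted default probabilities into ten equal intervals and computing classification accuracy in each. The probabilities and labels below are synthetic, not KSURE data, and the 0.5 cut-off is the usual convention rather than a figure from the paper.

```python
import numpy as np

def accuracy_by_decile(prob, y):
    """Classification accuracy within each 10%-wide band of predicted default probability."""
    pred = (prob >= 0.5).astype(int)       # default (1) vs. non-default (0)
    out = {}
    for k in range(10):
        lo, hi = k / 10, (k + 1) / 10
        mask = (prob >= lo) & (prob < hi) if k < 9 else (prob >= lo) & (prob <= hi)
        if mask.any():
            out[f"{lo:.0%}-{hi:.0%}"] = (pred[mask] == y[mask]).mean()
    return out

# Synthetic predictions: labels mostly agree with probabilities near 0 or 1,
# so accuracy is high in the extreme bands and poor near 50%.
rng = np.random.default_rng(4)
prob = rng.uniform(size=2000)
y = (rng.uniform(size=2000) < prob).astype(int)
table = accuracy_by_decile(prob, y)
```

This reproduces the qualitative pattern the paper reports: near-certain cases (0-10%, 90-100%) are classified far more reliably than cases whose predicted probability sits near the decision boundary.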

The Advancement of Underwriting Skill by Selective Risk Acceptance (보험Risk 세분화를 통한 언더라이팅 기법 선진화 방안)

  • Lee, Chan-Hee
    • The Journal of the Korean life insurance medical association
    • /
    • v.24
    • /
    • pp.49-78
    • /
    • 2005
  • Ⅰ. Research Background and Objectives
    o With a household penetration rate of 86%, the Korean insurance market has entered maturity, and distribution is shifting from the traditional captive channel to multi-channels such as bancassurance, online-only insurers, and growing telemarketing (TM) sales.
    o With the successive launch of advanced health products such as LTC (long-term care), CI (critical illness), and indemnity medical insurance, underwriting urgently needs to prepare from the insurance-risk-management perspective.
    o In step with changes in products, marketing, and other areas closely tied to underwriting, advanced underwriting techniques that properly classify and evaluate risk are essential.
    o Ultimately, this study seeks measures that satisfy customers' diverse coverage needs and contribute to maximizing insurers' overall profit by strengthening the competitiveness of products, marketing, and underwriting.
    Ⅱ. Risk Segmentation Cases in Advanced Insurance Markets
    1. Premium Differentiation by Environmental Risk
    (1) Extra premiums for hazardous occupations
    o In most advanced markets, such as the US and Europe, premiums are differentiated at issue according to the insured's occupational risk.
    o Occupation classification and loading methods vary by benefit, with separate methods for ordinary death, accidental death, waiver of premium, and DI.
    o Loadings are applied either as a multiple of the standard risk rate or as a flat extra premium per unit of sum insured.
    - For miners, accidental death cover is rated at 300% of the standard rate, while ordinary death cover carries a flat extra of $2.95 per $1,000 of sum insured.
    (2) Extra premiums for hazardous avocations
    o Because hobby-related accidents recur, avocations are also recognized as a risk factor and premiums are differentiated.
    o The extra premium is a flat rate per unit of sum insured (independent of the amount); for some hazardous avocations, such as new extreme sports, the underwriter sets the loading because statistics are insufficient.
    - For paragliding 26-50 times a year, a flat extra of $2 per $1,000 of sum insured applies to accidental death and $8 to DI cover.
    o Separately from premium loadings, exclusions are applied to hazardous avocations: claims arising from the hazardous avocation are excluded from all benefits, including death.
    (3) Extra premiums for residence in or travel to hazardous regions
    o Temporary or permanent residence in a particular country is evaluated considering climate risk, local sanitation and medical standards, travel risk, and war and riot risk.
    o Extra premiums are charged, or cover declined, by benefit (ordinary death, accidental death, etc.).
    o The extra premium applies uniformly over the whole policy term.
    - For Russia, a flat extra of $2 per $1,000 of sum insured applies to ordinary death, while accidental death is declined.
    (4) Premium differentiation for other risks
    o Aviation risk is classified into three categories (commercial transport, private flying, military flying), and extra premiums are charged based on the application, supplementary questionnaire, medical report, and flight history.
    - For crop-dusting pilots, a flat extra of $6 per $1,000 of sum insured applies to ordinary death, while accidental death is declined.
    o In the US and Japan, traffic-accident and traffic-violation records are used to grant premium discounts to accident-free drivers (as a preferred-risk factor).
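The two loading mechanisms described in Ⅱ.1.(1) — a multiple of the standard risk rate versus a flat extra per $1,000 of sum insured — can be sketched as follows. Function names are hypothetical; the example figures reuse the miner case from the abstract.

```python
def flat_extra_premium(sum_insured, extra_per_thousand):
    """Flat extra: a fixed amount per $1,000 of sum insured,
    independent of the standard risk rate (e.g. $2.95 per $1,000
    for a miner's ordinary death cover)."""
    return sum_insured / 1000 * extra_per_thousand

def rated_risk_premium(standard_risk_premium, multiple_pct):
    """Substandard loading expressed as a multiple of the standard
    risk rate (e.g. 300% for a miner's accidental death cover)."""
    return standard_risk_premium * multiple_pct / 100
```

For $100,000 of ordinary death cover, the $2.95 flat extra yields $295 of additional premium; the standard risk premium fed into the 300% multiple would come from the insurer's rate table.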
    2. Premium Differentiation by Physical Risk
    (1) Extra premiums for substandard lives
    1) Acceptance up to a total risk index of 500 (excess risk index 400)
    o Loadings are applied in 13 bands: 25-point steps up to 300 and 50-point steps above 300.
    2) Simultaneous use of liens (benefit reductions) and premium loadings
    o Because the extra premium falls by the amount of benefit reduced, applicants can be offered a choice, which is useful for high-risk insureds.
    3) Temporary loadings for histories of specific cancers
    o Depending on the disease profile, an extra premium is charged for 1-5 years after issue; once the loading period ends, the standard premium applies.
    4) Return-of-the-extra-premium option
    o If the policy remains in force and the insured survives a specified period, the extra premium is refunded.
    (2) Enhanced annuities for substandard lives
    o In the UK, enhanced annuities paying increased benefits to substandard lives have been developed and are sold.
    o Benefits are increased relative to standard lives according to various physical and environmental risk factors such as smoking, occupation, and medical history.
    (3) Price segmentation for preferred lives
    o In the US market, standard lives are classified into up to 8 classes based on evaluation criteria covering 8-14 medical and non-medical risk factors, with differentiated premium discounts.
    - Medical history, blood pressure, family history, smoking, BMI, cholesterol, driving record, hazardous avocations, residence, flight history, alcohol/drug use, etc.
    o Discount rates vary by company, class, and underwriting criteria (up to 75%); issue ages run from a minimum of 16-20 to a maximum of 65-75, and the minimum face amount is $100,000 (the lowest amount requiring an HIV test).
    o In Japan, preferred discounts are applied by sorting lives into 3-4 classes based on 3-4 risk factors.
    o In Europe, non-smoker or preferred discounts are applied only in some markets, such as the UK.
    Ⅲ. Current Status and Problems of the Korean Insurance Market
    1. Issue Limits by Environmental Risk
    (1) Issue limits for hazardous occupations
    o Based on the industry-wide standard occupational risk classes, each insurer sets its own issue limits per class. This entails inequity against non-hazardous occupations, limited cover for high-risk occupations, and an unstable profit structure.
    - Miners fall under risk class 1, capping death cover at KRW 100 million and hospitalization benefits at KRW 20,000 per day.
    o In July 2002 the Financial Supervisory Service approved risk indices per class as reference rates, but they were set at about 70% for non-hazardous and 200% for hazardous occupations, making practical application difficult.
    (2) Issue limits for hazardous avocations
    o Issue limits are set by applying the risk class of the corresponding occupation; supplementary questionnaires are not used to collect details such as licenses held or club membership.
    - Paragliding falls under risk class 2, capping death cover at KRW 200 million.
    (3) Issue limits by residence region/overseas travel
    o Each insurer restricts issuance in regions with frequent accidents, given their local characteristics.
    - Accident insurance unavailable in parts of Gangwon and Chungcheong
    - Hospitalization benefit capped at KRW 20,000 per day in parts of Jeonbuk and Taebaek
    o Overseas stays, including travel, are subject to set acceptance requirements; issue limits are imposed, or accident-heavy products are declined.
    - For Russia, short stays are rated risk class 1 with accident insurance unavailable, and long stays are declined.
    2. Underwriting Differentiation by Physical Risk
    (1) Substandard acceptance methods
    o Excess risk indices for increasing and constant extra risks are converted into benefit liens on death cover (up to 5 years), leaving serious risk exposure after year 5.
    o Premium loadings are used by only some companies, mainly on base policies, with acceptance up to a total risk index of 300 (8 bands).
    - When the base policy is rated, riders cannot be attached, and applicants with a cancer history are mostly declined.
    o Exclusions are applied for 39 body parts and 5 diseases (on living benefits such as hospitalization and surgery).
    (2) Non-smoker/preferred premium discounts
    o Since the first introduction in 1999, a single class based on 3-4 risk factors has been used; life insurer S operates two classes (non-smoking preferred and non-smoking standard).
    o Discount rates vary by company and product, up to 22% of the gross premium; smoking status is verified with a urine-stick cotinine test.
    o Preferred business accounts for 2-15% of new contracts (varying with company policy).
    Ⅳ. Measures to Advance Underwriting Techniques
    1. Premium differentiation by occupational risk
    o In conjunction with unifying life and non-life occupational risk classes, reorganize the risk indices into 3 classes and differentiate premium rates against the non-hazardous baseline.
    2. Exclusions for hazardous avocations
    o Introduce exclusions for insured events (including death) caused by the avocation.
    3. Substantially wider acceptance through advanced substandard techniques
    o Hedge risk by wider use of premium loadings, expanding the total risk index from 300 to 500 (minimizing declines).
    4. Premium loadings combined with benefit liens
    o Develop loading methods that incorporate lien periods, giving customers a choice.
    5. Temporary premium loadings
    o For specific cancers such as stomach and thyroid cancer, accept risks by applying level extra premiums during the high-risk early policy years.
    6. Wider premium loadings on riders, combined with exclusions
    o Extend loadings to death-related riders such as term riders; apply exclusions to living-benefit riders.
    7. Finer segmentation of standard lives
    o Refine classes by adding risk-evaluation factors such as cholesterol and HDL.
    Ⅴ. Expected Effects
    1. Open access to insurance for high-risk occupations, hazardous avocations, and substandard lives
    2. Greater equity among policyholders and responsiveness to diverse coverage needs
    3. Higher premium income and improved mortality margins through wider sales and risk hedging
    4. Stronger insurer fundamentals ahead of full-scale price competition
    5. Better corporate image, less resistance to medical exams, and prevention of portfolio deterioration
    Ⅵ. Conclusion
    o Moving beyond passive, one-size-fits-all acceptance, insurers should adopt risk-evaluation tools that assess insureds from multiple angles and offer appropriate premiums and reasonable terms of acceptance;
    o advancing underwriting techniques should go hand in hand with specializing underwriting staff and building out information-gathering and system infrastructure;
    o this will contribute greatly not only to managing insurers' mortality margins but also to strengthening the competitiveness of Korean life underwriting and globalizing its underwriters in preparation for market opening and a rapidly changing insurance environment.
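The substandard banding quoted in the abstract (25-point steps up to a total risk index of 300, 50-point steps up to 500, with the proposal to expand acceptance from 300 to 500) can be sketched as below. The standard baseline of 100 and the round-up-to-band convention are assumptions inferred from the quoted figures, not stated in the source.

```python
def loading_band(total_risk_index):
    """Round a total risk index up to the upper bound of its loading
    band: 25-point bands up to 300, 50-point bands from 300 to 500.
    An index of 100 or less is treated as standard (no loading);
    above 500 the case is declined.  Baseline of 100 is an assumption."""
    if total_risk_index <= 100:
        return 100
    if total_risk_index <= 300:
        # ceiling division via negation keeps everything in integers
        return 100 + -(-(total_risk_index - 100) // 25) * 25
    if total_risk_index <= 500:
        return 300 + -(-(total_risk_index - 300) // 50) * 50
    raise ValueError("declined: total risk index above 500")
```

Under this sketch, an applicant with a total risk index of 137 falls into the band capped at 150, while 301 jumps to the coarser 350 band above the 300 breakpoint.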
