• Title/Summary/Keyword: AI.DATA


Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.71-90
    • /
    • 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, a growing number of foreign insurers have succeeded with AI-based InsurTech and platform businesses, and Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms, the result of constant innovation under its corporate keywords of 'finance and technology' and 'finance and ecosystem.' Accordingly, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M model is a frame that interprets the CEO's vision and leadership, the firm's historical environment, its utilization of various resources, and its unique mechanism relationships in an integrated manner, in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An has achieved cost reduction and improved customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as face, voice, and facial-expression recognition. In addition, the company combined online data in China and the vast offline data and insights it had accumulated with new technologies such as AI and big data analysis to build a digital platform that integrates financial services and digital service businesses. 
Ping An pursued constant innovation, and as of 2019 its sales reached $155 billion, ranking seventh among all companies in the Forbes Global 2000. Analyzed from the ser-M perspective, the background of this success is that founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and demographic change in the era of the fourth industrial revolution, established a new vision, and exercised agile, digital-technology-focused leadership. Based on this strong founder-led leadership in response to environmental change, the company successfully led InsurTech and platform businesses through innovation of internal resources, such as investment in AI technology, recruitment of outstanding professionals, and strengthening of big data capabilities, combined with external absorptive capacity and strategic alliances across various industries. This success case offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies need to recognize the industry paradigm shift driven by digital technology and quickly arm themselves with digital-technology-oriented leadership to spearhead their firms' digital transformation. Second, the Korean government should urgently overhaul related laws and systems to promote data use across industries and provide drastic support, such as deregulation, tax benefits, and platform provision, to help the domestic insurance industry secure global competitiveness. 
Third, Korean companies also need to invest more boldly in AI technology development, so that the systematic acquisition of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms that integrate diverse customer experiences through trained AI. Finally, since a single case of an overseas insurer has limits to generalization, future research should extend to multiple industries and companies, or to empirical studies, covering various management strategies related to AI technology.

The Differences of Anthropometric and Polysomnographic Characteristics Between the Positional and Non-positional Obstructive Sleep Apnea Syndrome (체위 의존성 및 체위 비의존성 폐쇄성 수면 무호흡증후군의 신체계측인자 및 수면구조의 차이)

  • Park, Hye-Jung;Shin, Kyeong-Cheol;Lee, Choong-Kee;Chung, Jin-Hong;Lee, Kwan-Ho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.48 no.6
    • /
    • pp.956-963
    • /
    • 2000
  • Background: Obstructive sleep apnea syndrome (OSA) can be divided into two groups, positional (PP) and non-positional (NPP), according to body position during sleep. In this study, we evaluated the differences in anthropometric data and polysomnographic recordings between the two types. Materials and Methods: Fifty patients with OSA were divided into two groups by Cartwright's criteria: in the PP group the supine respiratory disturbance index (RDI) was at least twice the lateral RDI, and in the NPP group the supine RDI was less than twice the lateral RDI. These patients underwent standardized polysomnographic recordings, and the anthropometric and polysomnographic data were analyzed statistically. Results: Of all 50 patients, 30% were found to have positional OSA. BMI was significantly higher in the PP group (p<0.05). Total sleep time was significantly longer in the PP group (350.6±28.2 min vs 333.3±46.0 min, p<0.05), and sleep efficiency was higher (89.6±6.4% vs 85.6±9.9%, p<0.05). Deep sleep was significantly greater and light sleep lower in the PP group than in the NPP group, but no difference was observed in REM sleep between the two groups. The apnea index (AI) and RDI were significantly lower in the PP group (17.0±10.6 and 28.5±13.3, p<0.05), and mean arterial oxygen saturation was higher (92.7±1.8%, p<0.05) than in the NPP group. Conclusion: Body position during sleep has a profound effect on the frequency and severity of breathing abnormalities in OSA patients, so polysomnographic evaluation for suspected OSA must include monitoring of body position. Breathing function in OSA patients can be improved by controlling obesity and through postural therapy.
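Cartwright's positional criterion described above reduces to a simple ratio test. Below is a minimal sketch; the function name is illustrative, and the handling of a zero lateral RDI is our assumption, not part of the study.

```python
# Hypothetical sketch of Cartwright's criterion: a patient is "positional"
# (PP) when the supine RDI is at least twice the lateral RDI, else "NPP".
# Names and the zero-lateral-RDI convention are illustrative assumptions.

def classify_osa(supine_rdi: float, lateral_rdi: float) -> str:
    """Return 'PP' (positional) or 'NPP' (non-positional) OSA."""
    if lateral_rdi == 0:
        # With no lateral events, any supine events count as positional.
        return "PP" if supine_rdi > 0 else "NPP"
    return "PP" if supine_rdi >= 2 * lateral_rdi else "NPP"

print(classify_osa(40.0, 15.0))  # supine more than twice lateral -> PP
print(classify_osa(30.0, 20.0))  # less than twice lateral -> NPP
```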


A Study on Trust Transfer in Traditional Fintech of Smart Banking (핀테크 서비스에서 오프라인에서 온라인으로의 신뢰전이에 관한 연구 - 스마트뱅킹을 중심으로 -)

  • Ai, Di;Kwon, Sun-Dong;Lee, Su-Chul;Ko, Mi-Hyun;Lee, Bo-Hyung
    • Management & Information Systems Review
    • /
    • v.36 no.3
    • /
    • pp.167-184
    • /
    • 2017
  • In this study, we investigated the effect of offline banking trust on smart banking trust. As influencing factors of smart banking trust, this study compared offline banking trust, smart banking's system quality, and information quality. For the empirical study, 186 questionnaires were collected from smart banking users and analyzed using Smart-PLS 2.0. The results verified that trust transfer occurs in FinTech services: offline banking trust has a significant effect on smart banking trust. They also showed that the effect of offline banking trust on smart banking trust is smaller than that of smart banking's own quality. The contribution of this study has both academic and industrial aspects. Academically, previous studies on banking focused on either offline banking or smart banking; this study, focusing on the relationship between the two, proved that offline banking trust affects smart banking trust. Industrially, this study showed that the offline banking characteristics of traditional commercial banks affect trust in emerging smart banking services. This means that emerging FinTech companies are at a disadvantage to traditional commercial banks in the competition to build trust. Unlike traditional commercial banks, emerging FinTech firms innovate customer convenience by arming themselves with new technologies such as the mobile Internet, social networks, cloud technology, and big data. However, these strengths alone cannot guarantee the trust needed for financial transactions, because banking customers do not easily abandon the habits and inertia formed while using traditional banks. 
Therefore, emerging FinTech companies should strive to create disruptive value that reflects connections with various Internet services and the strength of online interaction, such as social services, where they have an advantage in customer contact. They should also strive to build service trust by focusing on young people, who have low resistance to new services.


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and the field has advanced more than ever thanks to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical AI such as machine learning. The modern purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data; such knowledge bases support intelligent processing in various AI applications, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires much expert effort. In recent years, much research and technology in knowledge-based AI has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects. 
This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. By generating knowledge from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences appropriate for triple extraction, and selecting values and transforming them into RDF triple structures. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the document's class, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from the Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction step. 
Through this process, structured knowledge can be extracted from text documents according to the ontology schema. In addition, this methodology can significantly reduce the expert effort needed to construct instances that conform to the ontology schema.
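The last step of the pipeline, turning a BIO-tagged sentence into an RDF-style triple for a given subject and ontology property, can be sketched as follows. The tag names (B-VAL/I-VAL), function names, and example URIs are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (names are assumptions) of converting BIO tags on a
# sentence into a (subject, predicate, object) triple.

def bio_to_value(tokens, tags):
    """Join the tokens tagged B-VAL/I-VAL into a single attribute value."""
    span = [tok for tok, tag in zip(tokens, tags) if tag in ("B-VAL", "I-VAL")]
    return " ".join(span) if span else None

def to_triple(subject, predicate, tokens, tags):
    """Build an RDF-style triple, or None if no value tokens were tagged."""
    value = bio_to_value(tokens, tags)
    return (subject, predicate, value) if value else None

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
tags   = ["O", "O", "O", "O", "O", "B-VAL", "I-VAL", "O"]
print(to_triple("dbr:Seoul", "dbo:country", tokens, tags))
# -> ('dbr:Seoul', 'dbo:country', 'South Korea')
```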

Development of the Regulatory Impact Analysis Framework for the Convergence Industry: Case Study on Regulatory Issues by Emerging Industry (융합산업 규제영향분석 프레임워크 개발: 신산업 분야별 규제이슈 사례 연구)

  • Song, Hye-Lim;Seo, Bong-Goon;Cho, Sung-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.199-230
    • /
    • 2021
  • Innovative new products and services are being launched through convergence between heterogeneous industries, and social interest and investment in convergence industries such as AI- and big-data-based future cars and robots are continuously increasing. However, in the process of commercializing convergent new products and services, many do not conform to the existing regulatory and legal system, which causes great difficulty for companies launching them into the market. In response to these industrial changes, the current government is promoting improvement of the existing regulatory mechanisms applied to the relevant industries, along with expanded investment in new industries. Against these convergence industry trends, this study aimed to analyze the existing regulatory systems that obstruct market entry of innovative new products and services, in order to preemptively predict the regulatory issues that will arise in emerging industries. It also sought to establish a regulatory impact analysis system to evaluate the adequacy of regulations and prepare improvement measures. The flow of this study is divided into three parts. In the first part, previous studies on regulatory impact analysis and evaluation systems are investigated; these serve as basic data for the development direction of the framework and its indicators and items. In the second part, which develops the regulatory impact analysis framework, indicators and items are built from the previously investigated data and applied to each stage of the framework. In the last part, a case study is presented that applies the developed framework to resolve regulatory issues faced by actual companies. 
The case study covered the autonomous/electric vehicle industry and the Internet of Things (IoT) industry, because these are among the emerging industries in which the Korean government has recently been most interested and are judged most relevant to realizing an intelligent information society. Specifically, the regulatory impact analysis framework proposed in this study consists of five steps. The first step identifies the industrial size of the target products and services, related policies, and regulatory issues. In the second stage, regulatory issues are discovered by reviewing regulatory improvement items at each stage of commercialization (planning, production, commercialization). In the next step, factors related to regulatory compliance costs are derived and the costs incurred in complying with existing regulations are calculated. In the fourth stage, alternatives are prepared by gathering opinions from the relevant industry and field experts, and the necessity, validity, and adequacy of each alternative are reviewed. Finally, the adopted alternatives are formulated for application to legislation and reviewed by legal experts. The implications of this study are summarized as follows. From a theoretical point of view, it clearly presents a series of procedures for regulatory impact analysis as a framework. Whereas previous studies mainly discussed the importance and necessity of regulatory impact analysis, this study presents a systematic framework that incorporates the various factors those studies suggested. From a practical point of view, the study is significant in that the proposed framework was applied to actual regulatory issues. 
The results of this study show that proposals related to regulatory issues were submitted to government departments and finally the current law was revised, suggesting that the framework proposed in this study can be an effective way to resolve regulatory issues. It is expected that the regulatory impact analysis framework proposed in this study will be a meaningful guideline for technology policy researchers and policy makers in the future.

Investigation of Study Items for the Patterns of Care Study in the Radiotherapy of Laryngeal Cancer: Preliminary Results (후두암의 방사선치료 Patterns of Care Study를 위한 프로그램 항목 개발: 예비 결과)

  • Chung Woong-Ki;Kim I1-Han;Ahn Sung-Ja;Nam Taek-Keun;Oh Yoon-Kyeong;Song Ju-Young;Nah Byung-Sik;Chung Gyung-Ai;Kwon Hyoung-Cheol;Kim Jung-Soo;Kim Soo-Kon;Kang Jeong-Ku
    • Radiation Oncology Journal
    • /
    • v.21 no.4
    • /
    • pp.299-305
    • /
    • 2003
  • Purpose: In order to develop national guidelines for the standardization of radiotherapy, we are planning to establish a web-based, online database system for laryngeal cancer. As a first step, this study was performed to accumulate basic clinical information on laryngeal cancer and to determine the items needed for the database system. Materials and Methods: We analyzed clinical data on patients treated for laryngeal cancer from January 1998 through December 1999 in the southwest area of Korea. Eligibility criteria were as follows: 18 years or older, currently diagnosed with primary epithelial carcinoma of the larynx, and no history of previous treatment for other cancers or other laryngeal diseases. The items were developed and filled out by radiation oncologists who are members of the Korean Southwest Radiation Oncology Group. SPSS v10.0 software was used for statistical analysis. Results: Data on forty-five patients were collected. Patient age ranged from 28 to 88 years (median, 61). Laryngeal cancer occurred predominantly in males (10:1 sex ratio). Twenty-eight patients (62%) had primary cancers in the glottis and 17 (38%) in the supraglottis. Most were diagnosed pathologically as squamous cell carcinoma (44/45, 98%). Twenty-four of 28 glottic cancer patients (86%) had AJCC (American Joint Committee on Cancer) stage I/II disease, compared with 50% (8/16) of supraglottic cancer patients (p=0.02). Most patients (89%) had the symptom of hoarseness. Indirect laryngoscopy was done in all patients and direct laryngoscopy in 43 (98%). Twenty-one of 28 (75%) glottic cancer cases and 6 of 17 (35%) supraglottic cancer cases were treated with radiation alone. 
Combined surgery and radiation was used in 5 (18%) glottic and 8 (47%) supraglottic patients, and chemotherapy plus radiation in 2 (7%) glottic and 3 (18%) supraglottic patients. There was no statistically significant difference in the use of combined-modality treatment between glottic and supraglottic cancers (p=0.20). In all patients, 6 MV X-rays were used with conventional fractionation. The fraction size was 2 Gy in 80% of glottic cancer patients, compared with 1.8 Gy in 59% of supraglottic patients. The mean total dose delivered to primary lesions was 65.98 Gy in glottic and 70.15 Gy in supraglottic patients treated with radiation alone. Based on the collected data, 12 modules with 90 items were developed for the study of patterns of care in laryngeal cancer. Conclusion: The study items for laryngeal cancer were developed. In the near future, a web system will be established based on the items investigated, and a nation-wide analysis of laryngeal cancer will then be conducted for the standardization and optimization of radiotherapy.
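The stage comparison above (24/28 glottic vs. 8/16 supraglottic patients with stage I/II disease) can be checked with a hand-rolled Pearson chi-square statistic for a 2x2 table. This is a sketch, not the authors' SPSS analysis; it omits the continuity correction, so the implied p-value (about 0.01) is a little smaller than the reported p=0.02.

```python
# Pearson chi-square (no continuity correction) for a 2x2 table,
# chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).

def chi2_2x2(a, b, c, d):
    """Return the Pearson chi-square statistic for table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Rows: glottic, supraglottic; columns: stage I/II, stage III/IV.
chi2 = chi2_2x2(24, 4, 8, 8)
print(round(chi2, 2))   # -> 6.55
print(chi2 > 3.84)      # exceeds the 5% critical value (df=1) -> True
```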

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of such models has improved, their long-term forecasting power remains limited, so they are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be vulnerable in practice, because whether the discovered patterns are suitable for trading is a separate matter. Those studies find points that match a meaningful pattern and then measure performance after n days, assuming a purchase at that point in time; since this approach calculates virtual revenues, it can diverge considerably from reality. Rather than searching for patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, there have been no reports of their performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy. 
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Our measurement is close to a real situation because it assumes that both the buy and the sell were executed. We tested three ways of calculating the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high line is taken as a peak, and a low price that meets the n-day low line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was far too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using Walk-forward Analysis (WFA), which tests the optimization section and the application section separately, so we could respond appropriately to market changes. In this study we optimize at the level of the stock portfolio, because optimizing variables for each individual stock risks over-optimization. 
Therefore, we set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This suggests that patterns need some price volatility in order to form, but that the highest volatility is not the best.
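The swing wave rule described above, a peak when a bar's high exceeds the n highs on each side and a valley when its low is below the n lows on each side, can be sketched as follows. The function name and the sample series are illustrative, not the study's data.

```python
# Minimal sketch of the "swing wave" turning-point rule; names are illustrative.

def swing_points(highs, lows, n=2):
    """Return (peak indices, valley indices) using the swing wave rule."""
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        window = list(range(i - n, i)) + list(range(i + 1, i + n + 1))
        if all(highs[i] > highs[j] for j in window):   # strictly above n neighbors each side
            peaks.append(i)
        if all(lows[i] < lows[j] for j in window):     # strictly below n neighbors each side
            valleys.append(i)
    return peaks, valleys

highs = [10, 11, 14, 12, 11, 12, 15, 13, 12]
lows  = [ 9, 10, 13, 11,  9, 11, 14, 12, 11]
print(swing_points(highs, lows, n=2))  # -> ([2, 6], [4])
```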

A Study on the Applicability of Social Security Platform to Smart City (사회보장플랫폼과 스마트시티에의 적용가능성에 관한 연구)

  • Jang, Bong-Seok
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.11
    • /
    • pp.321-335
    • /
    • 2020
  • Given that, with the Fourth Industrial Revolution, interest in and desire for smart cities are gradually increasing, and related technologies are being developed as a way to strengthen urban competitiveness using big data, information and communication technology, the IoT, M2M, and AI, the purpose of this study is to find out how to achieve this goal on the premise of the idea of a smart welfare city. In other words, the purpose is to devise a smart welfare city in the care area, covering health care, medical care, and welfare, and to see whether it is feasible. With this recognition, the paper reviews the concept and scope of the smart city, the discussions made so far, and the issues or limitations in connecting it to social security and social welfare, and based on this, develops the concept of a welfare city. As a way of realizing the smart welfare city, the paper reviews the characteristics and features of a social security platform as well as its applicability to smart cities, especially to care services. Furthermore, the paper discusses standardization of the city in terms of policy and institutional improvements, the utilization of personal information and public data, and ways of institutional improvement centering on the social security information system. This paper highlights the importance of implementing the digitally based community care and smart welfare city that our society is seeking to achieve. With regard to the social security platform based on behavioral design and the seven principles (6W1H method), the present paper has the limitation of dealing only with smart cities in the fields of healthcare, medicine, and welfare. Therefore, further studies are needed to investigate the effects of smart cities in other fields and to consider the application and utilization of the technologies in various aspects and their impact on our society. 
It is expected that this paper will suggest the future course and vision not only for smart cities but also for the social security and welfare system, and thereby contribute to improving the quality of people's lives through the requisite adjustments in each relevant field.

The Effects of Sleep Apnea and Variables on Cognitive Function and the Mediating Effect of Depression (수면무호흡증과 수면변수가 인지기능에 미치는 영향과 우울증의 매개효과)

  • Park, Kyung Won;Kim, Hyeong Wook;Choi, Mal Rye;Kim, Byung Jo;Kim, Tae Hyung;Song, Ok Sun;Eun, Hun Jeong
    • Sleep Medicine and Psychophysiology
    • /
    • v.24 no.2
    • /
    • pp.86-96
    • /
    • 2017
  • Objectives: This study aimed to analyze causality among sleep apnea, depression, and cognitive function in patients with obstructive sleep apnea. Methods: We reviewed the medical records of 105 patients with sleep apnea and snoring who underwent overnight polysomnography (PSG). We analyzed various biological data, sleep variables (sleep duration and percentage), and respiratory variables [arousal index (AI), periodic leg movement index (PLM index), snoring index (SI), mean SpO2, minimum SpO2, apnea-hypopnea index (AHI), and respiratory disturbance index (RDI)]. We also analyzed data from sleep-, cognition-, and mood-related scales: the Pittsburgh Sleep Quality Index (PSQI), Epworth Sleepiness Scale (ESS), snoring index by scale (SIS), Montreal Cognitive Assessment-Korean (MoCA-K), Mini-Mental State Examination-Korean (MMSE-K), Clinical Dementia Rating (CDR), and Beck Depression Inventory (BDI). We analyzed causation among these sleep, respiratory, mood, and cognition-related measures in obstructive sleep apnea patients, and the mediating effect of depression on their cognition. Results: As duration of stage N1 increased and total sleep time (TST) decreased, MoCA-K showed negative causality (p < 0.01). As BDI and supine RDI increased, causality was negatively related to MoCA-K (p < 0.01). As PSQI (p < 0.001) and SIS (p < 0.01) increased, and as MMSE-K (p < 0.01) decreased, causality was positively related to BDI. BDI was found to mediate the effect of age on MoCA-K in patients with obstructive sleep apnea. Conclusion: Stage N1 duration, total sleep time, BDI, and supine RDI were associated with cognitive function in obstructive sleep apnea patients. Depression measured by BDI partially mediated cognitive decline in these patients.
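The mediation logic above (BDI mediating the effect of age on MoCA-K) can be illustrated with a toy indirect-effect calculation: the product of path a (age to BDI) and path b (BDI to cognition). The numbers are synthetic, and path b uses a simple-regression approximation rather than the age-adjusted multiple regression a full analysis would use.

```python
# Toy mediation sketch with synthetic data; not the study's numbers.

def slope(x, y):
    """Ordinary least-squares slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Synthetic data: depression rises with age, cognition falls with depression.
age  = [40, 50, 60, 70, 80]
bdi  = [ 8, 10, 12, 14, 16]   # path a: BDI = 0.2 * age
moca = [28, 26, 24, 22, 20]   # path b: MoCA falls 1 point per BDI point

a = slope(age, bdi)           # path a: 0.2
b = slope(bdi, moca)          # path b: -1.0
print(a * b)                  # indirect effect: about -0.2 MoCA points per year
```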

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially, financial companies) to develop a proper model of credit rating. From a technical perspective, the credit rating constitutes a typical, multiclass, classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. 
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. However, since SVMs were originally devised for binary classification, they are not intrinsically geared for multiclass classifications such as credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature. However, only a few types of MSVM have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of modified versions of the conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of them to a real-world case of credit rating in Korea. Corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. 
The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies in the Korean manufacturing industry. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. In addition, we found that the modified ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
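The One-Against-One decomposition examined above can be sketched in a few lines: one binary classifier per pair of classes, with prediction by majority vote. Here a trivial one-dimensional midpoint rule stands in for a real SVM, and the feature and rating labels are illustrative assumptions, not the study's data.

```python
# Pure-Python sketch of One-Against-One multiclass voting; the pairwise
# "classifier" is a midpoint threshold standing in for a binary SVM.

from itertools import combinations
from collections import Counter

def train_pair(xs, ys, c1, c2):
    """Midpoint-threshold rule separating classes c1 and c2 on one feature."""
    m1 = sum(x for x, y in zip(xs, ys) if y == c1) / ys.count(c1)
    m2 = sum(x for x, y in zip(xs, ys) if y == c2) / ys.count(c2)
    thr = (m1 + m2) / 2
    return lambda x: c1 if (x < thr) == (m1 < m2) else c2

def train_ovo(xs, ys):
    """Train one pairwise model per class pair: K classes -> K(K-1)/2 models."""
    classes = sorted(set(ys))
    return [train_pair(xs, ys, c1, c2) for c1, c2 in combinations(classes, 2)]

def predict_ovo(models, x):
    """Predict by majority vote over all pairwise models."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

# Toy "leverage ratio" feature mapped to three rating buckets.
xs = [0.1, 0.2, 0.4, 0.5, 0.8, 0.9]
ys = ["A", "A", "B", "B", "C", "C"]
models = train_ovo(xs, ys)        # 3 classes -> 3 pairwise models
print(predict_ovo(models, 0.15))  # -> A
print(predict_ovo(models, 0.85))  # -> C
```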