• Title/Summary/Keyword: school Information Management System

Search Result 2,584

The Research on Actual Condition of Crime of Arson Which Occurs in Korea and Its Countermeasures (방화범죄의 실태와 그 대책 - 관심도와 동기의 다양화에 대한 대응 -)

  • Choi, Jong-Tae
    • Korean Security Journal
    • /
    • no.1
    • /
    • pp.371-408
    • /
    • 1997
  • This article reports research on the actual conditions of arson crime in Korea and its countermeasures. The problems presented are that (1) public concern about arson remains very low despite the rapid increase in arson offenses, and (2) criminal motives have diversified toward economic and other criminal purposes, unlike in the past when arson was mainly attributed to character and mental deficiency; to counter these problems effectively, the article presents the necessity of systematic research. Based on an analysis of the reality of arson, the rate of increase of arson in Korea is higher than that of violent crime or of fires in general, and far greater than the rates in the U.S.A. and Japan. Arson is a crime that uses fire as its method, and when a residence is the object, it is a public-danger offense that can entail loss of human life, a fact well known to all of us. Further, to respond to the crime of arson, the strictness of the criminal law (Criminal Act Articles 164 and 169, and Fire Protection Act Articles 110 and 111) and the classification of arson as a felony have been institutionally reinforced so that punishment can be imposed with certainty. Since the tendency toward arson has increased compared with other nations, it is necessary to supplement strategic policy to raise overall awareness of the seriousness of the risk and damage of arson, which has so far been lacking due to insufficient understanding. The analysis of the characteristics of arson crime shows the following. (1) Whereas in the past such crimes occurred far more within town or city areas, the increase of arson in rural areas now exceeds that in towns or small cities, showing that arson has become a nationwide phenomenon. (2) Arson occurs more at night than in the daytime, revealing the secretive character of the behavior. (3) Arsonists are usually arrested at the scene or through the victim or a third person's report (82.9%); investigative activities or self-surrender account for only 11.2%. Arrests are normally made on the day of the arson, but at times it takes more than one year, which shows the need to prepare for long arrest periods. (4) Arsonists are mostly in their thirties, younger on average than offenders in homicide, robbery, and adultery, and a considerable number are over fifty, indicating that the age of offenders is rising. (5) Over half of arsonists have no more than a junior high school education. (6) By criminal record, first offenders are the largest group, followed by those with four or more prior convictions. This clearly shows the need for an effective correctional education policy for social reintegration, together with a re-examination of character education in the primary and secondary school systems. The motives of arsonists are diverse: personal animosity, rage, monetary fraud, lustful purposes, destruction of evidence, social resistance, and violence including threats, in addition to individual defects; arson combined with suicide, and specifically suicidal arson, is also keenly manifested.
Viewed through criminological theory, these facts show that arson is increasing and that its casualties are serious; arson as a means of suicide can be explained by Durkheim's anomie theory and is consistent with Merton's theory. In arson at industrial complexes specifically, offenders with revolutionary or rebellious motives commit the act. For arson prevention policy, the United States conducts professional research and organizational cooperation for preventive activities at the municipal and city level under the name of Arson Task Forces, and a variety of federal research institutes operate effectively as countermeasures in many fields. France and Sweden, besides the U.S., have set up comprehensive fire prevention research functions and obtained very successful results, and Japan likewise conducts research for countermeasures. As preventive fire policy, this research proposes the following. First, on the judicial side, legal preventive activities for fire prevention should be accommodated, and on the administrative side, (1) precise statistical management of arson crime, (2) establishment of professional research functions or institutes, and (3) improvement of the system toward cooperative fire investigation teams staffed with professional personnel are needed. Second, on the social and individual side, recognition of arson fires, youth education to that effect, and the development and practical promotion of educational programs are needed. Third, on the environmental side, (1) programs should be implemented that establish cooperative advancement among local social elements such as administrative offices, residents, school facilities, and newspapers, and (2) special fire prevention measures should be established to protect the vulnerable. These measures are presented for the prevention of arson crime. The control and prevention of crime should be prepared as a means of self-defense on the principle of self-responsibility. Arsonists usually target places where fire prevention control is comparatively weak, so it is necessary for individual facilities to manage fire prevention spontaneously rather than relying on the public functions of local government. As Clifford L. Karchmer asserted, rather than being concerned about who will commit arson, it is more effective to ask which areas will be the targets of arson and to administer fire prevention measures in one's own facility with the characteristics of arson in mind. In addition, it is necessary for the personnel and groups concerned in local government to distribute new fire prevention information to the local community in a timely manner, thereby contributing to effective fire prevention. In consideration of these factors, it is essential to prevent arson from spreading in similar or imitative forms, since it can prevail like an epidemic through strong imitative behavior. In formulating policy to meet these problems, priority should be placed on enhancing overall awareness of the definitive essence of arson crime.


Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability, and many studies have been conducted to improve its performance in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find studies that interpret images from the perspective of domain experts rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person who encounters it. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize an image by focusing on the specific elements needed to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate captions specialized for each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, a simple application of transfer learning with expertise data may invoke another type of problem: simultaneous learning with captions of various characteristics may cause a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning with a vast amount of data, most of this interference is self-purified and has little impact on the learning results. On the contrary, in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
In order to confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of 'image / expertise caption' were created and used for the expertise transplantation experiments. The experimental results confirmed that the captions generated according to the proposed methodology are produced from the perspective of the implanted expertise, whereas the captions generated through learning on general data contain a large amount of content irrelevant to the expert interpretation. In this paper, we propose a novel approach to specialized image interpretation and present a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that many studies will be actively conducted to address the lack of expertise data and to improve the performance of image captioning.
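The "pre-train then fine-tune" pattern described above can be illustrated with a short sketch. The snippet below is only a toy stand-in, assuming PyTorch and a made-up TinyCaptioner module: it freezes a pre-trained encoder and fine-tunes the remaining layers on a small set of expert captions, which is the general transfer-learning idea the paper builds on; it does not reproduce the paper's actual architecture or its Character-Independent Transfer-learning procedure.

```python
# Minimal sketch of transfer learning for domain-specific captioning.
# All model and data names are hypothetical placeholders; the paper's
# actual architecture and procedure are not reproduced here.
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    """Toy stand-in for a pre-trained image-captioning model."""
    def __init__(self, vocab_size=1000, feat_dim=256):
        super().__init__()
        self.encoder = nn.Linear(2048, feat_dim)      # image-feature encoder
        self.decoder = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, vocab_size)   # token classifier

    def forward(self, img_feats, seq_len=10):
        h = torch.tanh(self.encoder(img_feats)).unsqueeze(1)
        h = h.repeat(1, seq_len, 1)                   # feed features at every step
        out, _ = self.decoder(h)
        return self.head(out)                         # (batch, seq_len, vocab)

model = TinyCaptioner()
# model.load_state_dict(torch.load("pretrained_mscoco.pt"))  # hypothetical checkpoint

# Expertise transplant: freeze the general-purpose encoder, fine-tune the rest
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# ~300 expert "image / expertise caption" pairs, here simulated with random data
expert_feats = torch.randn(300, 2048)
expert_tokens = torch.randint(0, 1000, (300, 10))

for epoch in range(5):                                # small-data fine-tuning
    logits = model(expert_feats)
    loss = loss_fn(logits.reshape(-1, 1000), expert_tokens.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```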

The effect of Big-data investment on the Market value of Firm (기업의 빅데이터 투자가 기업가치에 미치는 영향 연구)

  • Kwon, Young jin;Jung, Woo-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.99-122
    • /
    • 2019
  • According to a recent IDC (International Data Corporation) report, by 2025 the total volume of data is estimated to reach 163 zettabytes, about ten times that of 2016, and the main body generating information is shifting from consumers toward corporations. The so-called "wave of Big Data" is arriving, and its aftermath affects entire industries and individual firms alike. Effective management of vast amounts of data is therefore more important than ever for firms. However, there have been no previous studies measuring the effects of big data investment, even though there are a number of previous studies that quantitatively measure the effects of IT investment. We therefore quantitatively analyze the effects of big data investment to assist firms' investment decision making. This study applies the event study methodology, which takes the efficient market hypothesis as its theoretical basis, to measure the effect of firms' big data investments on the response of market investors. In addition, five sub-variables were set to analyze this effect in more depth: firm size, industry classification (finance and ICT), investment completion status, and vendor involvement. To measure the impact of big data investment announcements, data from 91 announcements between 2010 and 2017 were used, and the effect of the investment was observed empirically through changes in corporate value immediately after disclosure. The announcements were collected from the 'News' category of Naver, the largest portal site in Korea, and the target companies were restricted to firms listed on the KOSPI and KOSDAQ markets. During collection, the search keywords 'Big data construction', 'Big data introduction', 'Big data investment', 'Big data order', and 'Big data development' were used. The results of the empirical analysis are as follows. First, we found that the market value of the 91 publicly listed firms that announced big data investments increased by 0.92%. In particular, the market values of finance firms, non-ICT firms, and small-cap firms increased significantly. This result can be interpreted as market investors perceiving a firm's big data investment positively. Second, it is statistically demonstrated that the market value of financial firms and non-ICT firms increases after a big data investment announcement. Third, this study measured the effect of big data investment by firm size, classifying firms into the top 30% and bottom 30% by market capitalization and excluding the middle group in order to maximize the difference. The analysis showed that the investment effect was greater for the smaller firms, and the difference between the two groups was also clear. Fourth, one of the most significant features of this study is that the big data investment announcements are classified according to vendor involvement. We show that the investment effect for the group with vendor involvement is very large, indicating that market investors view the participation of specialized big data vendors very positively.
Last but not least, it is also interesting that market investors evaluate an investment more positively when, at the time of the announcement, the big data system is scheduled to be built rather than already completed. Applied to industry practice, it would be effective for a company to make a disclosure when it decides to invest in big data, in terms of increasing its market value. Our study has an academic implication, as prior research on the impact of big data investment has been nonexistent. It also has a practical implication in that it can serve as a reference for business decision makers considering big data investment.
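The event study methodology referred to above rests on comparing realized returns with the returns a market model would predict around the announcement date. The sketch below, using simulated data and illustrative window choices rather than the paper's exact specification, shows how a cumulative abnormal return (CAR) for a single announcement could be computed.

```python
# Minimal event-study sketch: market-model abnormal returns around an
# announcement date. Window lengths and data are illustrative assumptions,
# not the paper's exact specification.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 250
dates = pd.bdate_range("2017-01-02", periods=n)
market = pd.Series(rng.normal(0.0003, 0.01, n), index=dates)   # market return
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.012, n)        # firm return

event_date = dates[200]                       # hypothetical announcement day
est_window = stock.index < dates[180]         # estimation window
evt_window = dates[198:203]                   # e.g. [-2, +2] trading days

# OLS market model fitted on the estimation window: R_i = alpha + beta * R_m
beta, alpha = np.polyfit(market[est_window], stock[est_window], 1)

expected = alpha + beta * market[evt_window]
abnormal = stock[evt_window] - expected       # abnormal returns (AR)
car = abnormal.sum()                          # cumulative abnormal return (CAR)
print(f"CAR over event window: {car:.4%}")
```

In the paper's setting, such CARs would be computed for each of the 91 announcements and then averaged and tested across the sub-groups (firm size, industry, completion status, vendor involvement).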

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, yielding 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting, although the polynomial kernel function shows exceptionally low forecasting accuracy. We also suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; and if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now trade. The trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH in the testing period. The percentages of profitable trades of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% and SVR-based asymmetric E-GARCH shows +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based asymmetric GJR-GARCH shows +126.3%.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, while that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is also unrealistic in that we use historical volatility values as the trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can give better information to stock market investors.
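To make the MLE-versus-SVR comparison concrete, the sketch below contrasts a standard MLE-fitted GARCH(1,1) forecast with a simple SVR regression on lagged squared returns. It assumes the third-party arch package and scikit-learn, uses simulated returns rather than KOSPI 200 data, and is only a rough stand-in for the paper's SVR-based GARCH estimation.

```python
# Rough sketch contrasting MLE GARCH(1,1) with an SVR-based volatility proxy.
# Simulated returns stand in for KOSPI 200 data; the paper's exact SVR-GARCH
# formulation is not reproduced.
import numpy as np
from arch import arch_model              # assumes the 'arch' package is installed
from sklearn.svm import SVR

rng = np.random.default_rng(1)
returns = rng.normal(0, 1.0, 1487)       # toy daily return series (in %)

train, test = returns[:1187], returns[1187:]

# 1) MLE-estimated symmetric GARCH(1,1)
mle_fit = arch_model(train, vol="GARCH", p=1, q=1).fit(disp="off")
mle_forecast = mle_fit.forecast(horizon=1).variance.values[-1, 0]

# 2) SVR stand-in: predict next-day squared return from lagged squared returns
lag = 5
X = np.column_stack([train[i:len(train) - lag + i] ** 2 for i in range(lag)])
y = train[lag:] ** 2
svr = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X, y)
svr_forecast = svr.predict(train[-lag:].reshape(1, -1) ** 2)[0]

print(f"MLE GARCH(1,1) next-day variance: {mle_forecast:.4f}")
print(f"SVR-based next-day variance:      {svr_forecast:.4f}")
```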

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for them. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can handle billions of examples in limited-memory environments, learns very fast compared with traditional boosting methods, and is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates investment proportions based on historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model, thereby narrowing the gap between theory and practice and proposing a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staple sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed improved portfolio performance through reduced estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market.
This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. There are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization; here we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
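A highly simplified version of the proposed pipeline, predicting each sector's next-period volatility with XGBoost, plugging the predictions into the covariance matrix, and then solving for equal-risk-contribution weights, might look like the sketch below. It assumes the xgboost and scipy packages, uses simulated returns, and omits the paper's moving-window back-testing.

```python
# Simplified sketch: XGBoost-predicted volatilities fed into a risk parity
# (equal risk contribution) weight optimization. Sector return data are
# simulated; this is not the paper's exact methodology.
import numpy as np
from scipy.optimize import minimize
from xgboost import XGBRegressor          # assumes the xgboost package

rng = np.random.default_rng(2)
n_assets, n_days = 4, 1020
returns = rng.normal(0.0003, 0.01, (n_days, n_assets))

# 1) Predict next-window volatility of each asset from lagged realized vols
window = 20
realized_vol = np.array([returns[i:i + window].std(axis=0)
                         for i in range(0, n_days - window, window)])
X, y = realized_vol[:-1], realized_vol[1:]             # lagged vol -> next vol
pred_vol = np.empty(n_assets)
for k in range(n_assets):
    model = XGBRegressor(n_estimators=50, max_depth=2, verbosity=0)
    model.fit(X, y[:, k])
    pred_vol[k] = model.predict(X[-1:].reshape(1, -1))[0]

# 2) Combine predicted vols with the historical correlation matrix
corr = np.corrcoef(returns.T)
cov = np.outer(pred_vol, pred_vol) * corr

# 3) Solve for equal-risk-contribution (risk parity) weights
def rp_objective(w):
    port_var = w @ cov @ w
    contrib = w * (cov @ w)                # each asset's risk contribution
    return np.sum((contrib - port_var / len(w)) ** 2)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
bounds = [(0.0, 1.0)] * n_assets
w0 = np.full(n_assets, 1.0 / n_assets)
weights = minimize(rp_objective, w0, bounds=bounds, constraints=cons).x
print("Risk parity weights:", np.round(weights, 3))
```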

Genetic Counseling in Korean Health Care System (한국 의료제도와 유전상담 서비스의 구축)

  • Kim, Hyon-J.
    • Journal of Genetic Medicine
    • /
    • v.8 no.2
    • /
    • pp.89-99
    • /
    • 2011
  • Over the years the Korean health care system has improved the delivery of quality care to the general population in many areas of health, and it is now recognized worldwide as one of the most cost-effective systems. It is covered by a uniform national health insurance policy of which most people in Korea are mandatory policy holders. Genetic counseling service, however, which is well recognized as an integral part of clinical genetics service dealing with the diagnosis and management of genetic conditions as well as the presentation of genetic information and family support, is yet to be delivered in a comprehensive way to the patients and families in need. Two major obstacles to providing genetic counseling service in the Korean health care system are identified. One is the lack of recognition by the national health insurance that genetic counseling is a necessary service: genetic counseling consumes a significant amount of time in delivery, and the current very low fee schedule for physician services makes it very difficult to provide a meaningful service. The other is the critical shortage of qualified professionals in the fields of medical genetics and genetic counseling who can provide genetic counseling in a clinical setting. However, recognition and understanding of the fact that the scope and role of genetic counseling is expanding in the post-genomic era of personalized medicine will lead to efforts to overcome these obstacles. Concerted efforts from government health care policy makers, to establish adequate reimbursement coverage for clinical genetics services and genetic counseling, and from professional communities, to develop educational programs and a certification process for professional genetic counselors, are necessary for the delivery of the much-needed clinical genetic counseling service in Korea.

The Actual Conditions and Improvement of the Eco-Forests Master Plan, South Korea (우리나라 생태숲조성 기본계획 실태 및 개선방향)

  • Heo, Jae-Yong;Kim, Do-Gyun;Jeong, Jeong-Chae;Lee, Jeong
    • Korean Journal of Environment and Ecology
    • /
    • v.24 no.3
    • /
    • pp.235-248
    • /
    • 2010
  • This study examined the actual conditions of eco-forest master plans in South Korea and suggested their problems and directions for improvement. A survey and analysis of the limiting factors and constraints in eco-forest construction plans revealed frequent problems involving site feasibility, topographic aspect, and existing vegetation. The survey of land use status indicated that the average proportion of private land was 29.7%, so a large investment in purchasing eco-forest sites would be required. Regarding major introduced facilities, infrastructure, buildings, recreational facilities, convenience facilities, and information facilities were introduced with high frequency, whereas plant culture systems, ecological facilities, structural symbols and sculptures, and the like were introduced with low frequency. There was only one eco-forest where more than 500 species of plants grew, and the plant species diversity of 11 eco-forests was found to be lower than the standards for eco-forest construction. The analysis of project costs revealed that the investment in facilities was higher than the planting cost and that a large share of the investment was made in the initial stage of the project; no budget was planned for cultivating and maintaining the plants and vegetation after construction. The basic concepts for eco-forest construction were established according to the guidelines presented by the Korea Forest Service; however, the detailed work of each project was planned with a user-oriented approach, so construction was being planned in a direction that would lead to a plant garden similar to an arboretum or botanical garden. It is therefore required that the architects who design eco-forests, as well as the public officers concerned, firmly establish the concept of the eco-forest; that candidate sites fitting the purpose of eco-forest construction be selected through close analysis of development conditions; and that a substantive management plan be established upon completion of construction.

A Study on the Present Condition of Four-Year University Curriculum for Introducing NCS Landscape Architecture (NCS 조경 분야 적용을 위한 4년제 대학 교육과정 현황분석)

  • Lee, Chang-Hun;Kim, Kyou-Sub;Lee, Won-Ho
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.37 no.3
    • /
    • pp.134-147
    • /
    • 2019
  • The purpose of this study was to analyze the competency-unit system of the NCS landscape architecture field and the contents of four-year university landscape architecture curricula, for the correction and supplementation of the NCS landscape field. First, 24 four-year universities were selected, and an FGI was conducted and verified for 816 courses at those universities. The NCS framework examined consists of 3 sections, 9 divisions, and 65 sub-categories. The results of the study are summarized as follows. First, landscape design subjects accounted for 40.0% of the subjects organized by the four-year universities, followed by ecological landscape (12.9%), landscape construction (11.3%), others (10.2%), landscape information (10.0%), landscape culture (6.6%), and landscape management (3.7%). Balanced and efficient modification and reinforcement of the NCS is required in the future. Second, 10 subjects (18.9%) had educational objectives matching the NCS performance criteria, 15 subjects (28.3%) corresponded to different competency units, and 37 subjects (56.9%) were inconsistent with the NCS competency units. Third, looking at the reference criteria presented for each competency unit of the NCS, a separate competency unit should be organized to improve practical ability, since the current units include the contents of basic knowledge learning. Fourth, the objectives pursued in the NCS competency units and the four-year university curricula were developed with a focus on competency units in the fields of landscape construction and landscape management rather than landscape design, and a balance is needed for future development. This study is intended to inform further research that re-examines specific curriculum assessment criteria that could not be classified in the classification process based on the curriculum handbooks, which excludes interference from individual schools.

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the building of the predictive models from two different perspectives. The first is the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: in order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years, and three years later. In total, six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method, building trees that label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capability. Among the well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the others. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, including 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build the C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that the stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction, whereas the long-term prediction of rights issues is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a broader set of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained with other data mining techniques such as neural networks, logistic regression, and SVM. Second, new prediction models should be developed and evaluated that include variables which research on capital structure theory has identified as relevant to rights issues.
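Since C5.0 is a commercial algorithm (the paper runs it in PASW Modeler 13), the sketch below uses scikit-learn's CART-style DecisionTreeClassifier as a rough stand-in, with hypothetical index columns and a synthetic label, just to illustrate the 60/40 build-test split and rule extraction; it is not the paper's model or data.

```python
# Rough stand-in for the paper's C5.0 model (not available in scikit-learn):
# a CART decision tree on hypothetical financial analysis indices.
# Column names and data are illustrative, not the TS2000 variables.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.normal(10, 5, n),     # e.g. sales growth rate (%)
    rng.normal(5, 3, n),      # e.g. return on assets (%)
    rng.normal(150, 60, n),   # e.g. debt ratio (%)
])
# Toy label: firms with high debt and low profitability issue new stock more often
y = ((X[:, 2] > 180) & (X[:, 1] < 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0)             # 60% build / 40% test

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print(f"Test accuracy: {tree.score(X_test, y_test):.2%}")
print(export_text(tree, feature_names=["growth", "roa", "debt_ratio"]))
```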

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to one class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records form the majority class and 'churn' records form the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such, and specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity is decreased. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity; we named it the 'hybrid SVM model'. The construction and prediction process of the hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. The SVM_I and ANN_I models are constructed using the imbalanced data set, and the SVM_B model is constructed using the balanced data set. SVM_I is superior in sensitivity and SVM_B is superior in specificity. For a record on which SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree: for records on which SVM_I and SVM_B disagree, a decision tree model is constructed using the ANN_I output value as input and the actual retention or churn as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value >= 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research; the contribution of this research is the structure or framework of the hybrid SVM model rather than a specific threshold value, so the threshold in the above rules can be changed to any value depending on the data. To evaluate the performance of the hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of SVM_I or SVM_B. The points worth noticing are its sensitivity, 95.02%, and specificity, 69.24%: the sensitivity of SVM_I is 94.65% and the specificity of SVM_B is 67.00%, so the hybrid SVM model developed in this research improves the specificity of SVM_B while maintaining the sensitivity of SVM_I.
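The construction and prediction steps described above can be summarized in a short sketch. The version below uses scikit-learn with synthetic data: SVM_I and ANN_I are trained on the imbalanced set, SVM_B on an oversampled balanced set, and disagreements between the two SVMs are resolved by a threshold rule on the ANN_I output (the 0.285 value is only illustrative, since the paper notes it is data-dependent). The decision-tree step that derives the rule is replaced here by the fixed threshold.

```python
# Compact sketch of the hybrid SVM scheme: SVM_I (imbalanced data),
# SVM_B (oversampled balanced data), with disagreements resolved by a
# threshold rule on an ANN's output. Synthetic data; the 0.285 threshold
# is data-dependent, as the paper notes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.utils import resample

X, y = make_classification(n_samples=2000, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Oversample the minority class to build the balanced training set
minority = X_tr[y_tr == 1]
extra = resample(minority, n_samples=(y_tr == 0).sum() - len(minority),
                 random_state=0)
X_bal = np.vstack([X_tr, extra])
y_bal = np.concatenate([y_tr, np.ones(len(extra), dtype=int)])

svm_i = SVC().fit(X_tr, y_tr)                  # trained on imbalanced data
svm_b = SVC().fit(X_bal, y_bal)                # trained on balanced data
ann_i = MLPClassifier(max_iter=1000, random_state=0).fit(X_tr, y_tr)

pred_i, pred_b = svm_i.predict(X_te), svm_b.predict(X_te)
ann_score = ann_i.predict_proba(X_te)[:, 1]    # ANN_I output value

threshold = 0.285                              # illustrative, data-dependent
final = np.where(pred_i == pred_b, pred_i,
                 (ann_score >= threshold).astype(int))

accuracy = (final == y_te).mean()
sensitivity = (final[y_te == 0] == 0).mean()   # majority (retention) recall
specificity = (final[y_te == 1] == 1).mean()   # minority (churn) recall
print(f"accuracy={accuracy:.2%} sensitivity={sensitivity:.2%} "
      f"specificity={specificity:.2%}")
```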