• Title/Summary/Keyword: Performance Information Use

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolved the data-imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment services to unlisted companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although machine learning-based prediction of corporate default risk has been studied actively in recent years, most studies make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology is required for calculating default risk, given that a company's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation method.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and changes in future market conditions. This study reduced the bias of individual models by using a stacking ensemble technique that synthesizes various machine learning models. This makes it possible to capture the complex nonlinear relationships between default risk and various items of corporate information while retaining the advantage of machine learning-based default risk prediction models, which take little time to compute. To produce the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on these divided sets to produce their forecasts. To compare predictive power with the stacking ensemble model, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts making up each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based default risk prediction, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will serve as a resource for increasing practical use by overcoming the limitations of existing machine learning-based models.
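The out-of-fold scheme described above — splitting the training data into seven pieces and using each sub-model's held-out forecasts as meta-features for a final combiner — can be sketched as follows. The simple OLS and k-NN sub-models here are illustrative stand-ins for the paper's Random Forest, MLP, and CNN, and the synthetic data is not the paper's corporate dataset.

```python
import numpy as np

def out_of_fold_predictions(X, y, fit_predict, n_folds=7):
    """Out-of-fold forecasts from one sub-model: the training data is split
    into n_folds pieces, and each piece is predicted by a model trained on
    the remaining folds, so the meta-model never sees leaked targets."""
    n = len(y)
    oof = np.zeros(n)
    folds = np.array_split(np.arange(n), n_folds)
    for va_idx in folds:
        tr_idx = np.setdiff1d(np.arange(n), va_idx)
        oof[va_idx] = fit_predict(X[tr_idx], y[tr_idx], X[va_idx])
    return oof

# Two illustrative sub-models (stand-ins for Random Forest / MLP / CNN).
def ols(X_tr, y_tr, X_va):
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_va @ w

def knn_mean(X_tr, y_tr, X_va, k=5):
    out = np.empty(len(X_va))
    for i, x in enumerate(X_va):
        d = np.linalg.norm(X_tr - x, axis=1)
        out[i] = y_tr[np.argsort(d)[:k]].mean()
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 5))
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + 0.1 * rng.normal(size=700)

# Sub-model forecasts become the meta-features of the stacking ensemble;
# a least-squares meta-model combines them into the final forecast.
meta_X = np.column_stack([out_of_fold_predictions(X, y, ols),
                          out_of_fold_predictions(X, y, knn_mean)])
meta_w, *_ = np.linalg.lstsq(meta_X, y, rcond=None)
stacked = meta_X @ meta_w
```

Because the meta-model optimizes over all linear combinations of the sub-model forecasts, the stacked forecast can do no worse in-sample than any single sub-model column it is built from.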

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since the 1987 Black Monday crash, stock market prices have become very complex and noisy, and recent studies have begun applying artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial and radial. We analyzed the suggested models on the KOSPI 200 Index, which is constituted by 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution with fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel, however, shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based ones in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH returns +150.2% while SVR-based symmetric S-GARCH returns +526.4%; MLE-based asymmetric E-GARCH returns -72% while SVR-based asymmetric E-GARCH returns +245.6%; MLE-based asymmetric GJR-GARCH returns -98.7% while SVR-based asymmetric GJR-GARCH returns +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4% and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage.
IVTS trading performance is also unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
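The two mechanical pieces above — the GARCH(1,1) variance recursion whose parameters the paper estimates (by MLE or SVR), and the IVTS entry rule — can be sketched as follows. The GARCH(1,1) specification and the variance initialization are standard conventions assumed here, not details taken from the paper.

```python
import numpy as np

def garch11_volatility(returns, omega, alpha, beta):
    """Conditional volatility from the GARCH(1,1) recursion
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1];
    omega, alpha and beta are the parameters to be estimated
    (by MLE, or by support vector regression as in the paper)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()  # common initialization choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

def ivts_signal(vol_forecast):
    """IVTS entry rule: buy volatility (+1) when tomorrow's forecast rises,
    sell (-1) when it falls, hold the existing position when unchanged."""
    signal = np.zeros(len(vol_forecast), dtype=int)
    for t in range(1, len(vol_forecast)):
        if vol_forecast[t] > vol_forecast[t - 1]:
            signal[t] = 1
        elif vol_forecast[t] < vol_forecast[t - 1]:
            signal[t] = -1
        else:
            signal[t] = signal[t - 1]
    return signal
```

For a forecast path like 1.0 → 2.0 → 2.0 → 1.5, the rule produces buy, hold (keep the buy), then sell, matching the entry rules stated in the abstract.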

Study on the screening method for determination of heavy metals in cellular phone for the restrictions on the use of certain hazardous substances (RoHS) (유해물질 규제법(RoHS)에 따른 휴대폰 내의 중금속 함유량 측정을 위한 스크리닝법 연구)

  • Kim, Y.H.;Lee, J.S.;Lim, H.B.
    • Analytical Science and Technology / v.23 no.1 / pp.1-14 / 2010
  • Many countries worldwide, including the EU and China, have adopted the Restrictions on the use of certain Hazardous Substances (RoHS) for all electronics. The IEC 62321 document, published by the International Electrotechnical Commission (IEC), can conflict with existing standards in the market; the Publicly Accessible Specification (PAS) for sampling published by IEC TC111, on the other hand, can be adopted as a complement. In this work, we sought a route to disassemble and disjoint a cellular phone sample based on the PAS and compared the screening methods available in the market. A cellular phone produced in 2001, before the regulation took effect, was chosen for better detection. Although X-ray fluorescence (XRF) showed excellent screening performance with fast and easy handling, it gives information on the surface rather than the bulk and has limitations due to significant matrix interference and the lack of a variety of standards for quantification, so screening with XRF sometimes requires a supplementary tool. Several techniques are available on the analytical instrument market: laser ablation (LA) ICP-MS, energy dispersive (ED) XRF and scanning electron microscopy with energy dispersive X-ray analysis (SEM-EDX) were demonstrated for screening a cellular phone. For quantitative determination, graphite furnace atomic absorption spectrometry (GF-AAS) was employed. Results for Pb in a battery showed a large difference between XRF and GF-AAS, i.e., 0.92% and 5.67%, respectively. In addition, the standard deviation of XRF was extremely large, in the range of 23-168%, compared with 1.9-92.3% for LA-ICP-MS. In conclusion, GF-AAS was required for quantitative analysis even though EDX was used for screening. This work showed that LA-ICP-MS can be used as a fast screening method to determine hazardous elements in electrical products.

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters / v.12 no.3 / pp.192-207 / 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems, but it has several problems stemming from its operational rigidity. First, the SSL protocol places a considerable burden on CPU utilization, lowering the performance of security services in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, the SSL protocol can be vulnerable to cryptanalysis because it uses a fixed key algorithm. Third, it is difficult to add and use new cryptographic algorithms. Finally, it is difficult for developers to learn the cryptography APIs (Application Program Interfaces) needed for the SSL protocol. Hence, we need to address these problems while providing a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component designed and implemented using the CBD (Component Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Further, the SSL component can improve productivity and reduce development cost because it can be reused, and when new algorithms are added or existing ones changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service in the application layer. We first elicit the requirements and then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented with EJB, which allows efficient data handling by encrypting/decrypting only the selected data, and improves usability by letting the user choose the data and mechanism as intended.
In conclusion, testing and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because the increase rate of processing time for the SSL component is lower than the SSL protocol's.
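Two of the component's key ideas — encrypting only the selected fields rather than the whole payload, and keeping confidentiality and integrity as separate pluggable components — can be illustrated with a minimal sketch. All class and function names here are hypothetical, and the XOR keystream is a deliberately insecure toy stand-in for a real cipher; this is not the paper's EJB implementation.

```python
import hashlib, hmac, itertools

class ConfidentialityComponent:
    """Pluggable confidentiality component (toy XOR keystream, NOT secure)."""
    def __init__(self, key: bytes):
        self.key = key

    def _keystream(self, n: int) -> bytes:
        # Derive a keystream by hashing key + counter (illustrative only).
        out = b""
        for counter in itertools.count():
            out += hashlib.sha256(self.key + counter.to_bytes(8, "big")).digest()
            if len(out) >= n:
                return out[:n]

    def encrypt(self, data: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, self._keystream(len(data))))

    decrypt = encrypt  # XOR with the same keystream is its own inverse

class IntegrityComponent:
    """Integrity component based on HMAC-SHA256."""
    def __init__(self, key: bytes):
        self.key = key
    def tag(self, data: bytes) -> bytes:
        return hmac.new(self.key, data, hashlib.sha256).digest()
    def verify(self, data: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.tag(data), tag)

def protect_selected(message: dict, fields, conf, integ):
    """Encrypt only the chosen fields (the selective-encryption idea that
    avoids encrypting all transferred data) and tag the whole message."""
    out = {k: conf.encrypt(v) if k in fields else v for k, v in message.items()}
    out["_tag"] = integ.tag(b"".join(out[k] for k in sorted(message)))
    return out
```

Encrypting only the sensitive fields is what reduces the CPU burden the abstract criticizes in plain SSL, while the separate integrity component still covers the full message.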

Research about feature selection that use heuristic function (휴리스틱 함수를 이용한 feature selection에 관한 연구)

  • Hong, Seok-Mi;Jung, Kyung-Sook;Chung, Tae-Choong
    • The KIPS Transactions:PartB / v.10B no.3 / pp.281-286 / 2003
  • A large number of features are collected for problem solving in real life, but utilizing all of the collected features is difficult, and it is not easy to collect correct data for every feature. If all of the collected data are used for learning, a complicated learning model is created and good performance cannot be obtained. Interrelationships and hierarchical relations also exist among the features. We can reduce the number of features by analyzing the relations among them using heuristic knowledge or statistical methods. A heuristic technique refers to learning through repeated trial and error and experience; experts can approach the relevant problem domain through an opinion-collection process based on experience. These properties can be exploited to reduce the number of features used in learning: experts generate new, highly abstract features from the raw data. This paper describes a machine learning model that reduces the number of features used in learning by means of a heuristic function and uses the abstracted features as the neural network's input values. We applied this model to win/lose prediction in professional baseball games. The results show that the model mixing the two techniques not only reduces the complexity of the neural network model but also significantly improves classification accuracy compared with using the neural network or the heuristic model separately.
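The pipeline described above — a hand-written heuristic function collapses many raw statistics into a few abstract features, which then feed a neural network — can be sketched as follows. The specific heuristic (win-rate and run-differential gaps), the tiny network, and the synthetic data are all illustrative assumptions; the paper's actual features and architecture are not given in this abstract.

```python
import numpy as np

def heuristic_features(raw):
    """Collapse raw per-matchup stats into two abstract features
    (hypothetical heuristic: win-rate gap and run-differential gap)."""
    wins_a, games_a, runs_a, wins_b, games_b, runs_b = raw.T
    win_rate_gap = wins_a / games_a - wins_b / games_b
    run_diff_gap = (runs_a - runs_b) / games_a
    return np.column_stack([win_rate_gap, run_diff_gap])

def train_nn(X, y, hidden=4, lr=1.0, epochs=3000, seed=0):
    """One-hidden-layer network trained on the abstracted features."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid win probability
        g = (p - y) / len(y)                   # cross-entropy output gradient
        W2 -= lr * h.T @ g; b2 -= lr * g.sum()
        gh = np.outer(g, W2) * (1 - h ** 2)    # back-propagate through tanh
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)
    return lambda Xn: 1 / (1 + np.exp(-(np.tanh(Xn @ W1 + b1) @ W2 + b2)))

# Synthetic raw stats: [wins_a, games_a, runs_a, wins_b, games_b, runs_b]
rng = np.random.default_rng(1)
raw = np.column_stack([
    rng.integers(20, 60, 400), np.full(400, 80), rng.integers(250, 450, 400),
    rng.integers(20, 60, 400), np.full(400, 80), rng.integers(250, 450, 400),
]).astype(float)
X = heuristic_features(raw)                   # 6 raw columns -> 2 abstract
y = (X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=400) > 0).astype(float)
predict = train_nn(X, y)
accuracy = ((predict(X) > 0.5) == y).mean()
```

The point of the sketch is the shape of the pipeline: the network sees 2 abstracted inputs instead of 6 raw columns, which is the complexity reduction the abstract claims.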

Adaptive Row Major Order: a Performance Optimization Method of the Transform-space View Join (적응형 행 기준 순서: 변환공간 뷰 조인의 성능 최적화 방법)

  • Lee Min-Jae;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases / v.32 no.4 / pp.345-361 / 2005
  • A transform-space index indexes objects represented as points in the transform space. An advantage of a transform-space index is that optimizing join algorithms that use such indexes is relatively simple; the disadvantage is that these algorithms cannot be applied to original-space indexes such as the R-tree. To overcome this disadvantage, the authors earlier proposed the transform-space view join algorithm, which joins two original-space indexes in the transform space through the notion of the transform-space view: a virtual transform-space index that allows us to perform the join in the transform space using original-space indexes. In a transform-space view join, the order of accessing disk pages, for which various space filling curves could be used, has a significant impact on join performance. In this paper, we propose a new space filling curve called the adaptive row major order (ARM order). The ARM order adaptively controls the order of accessing pages and significantly reduces the one-pass buffer size (the minimum buffer size required to guarantee one disk access per page) and the number of disk accesses for a given buffer size. Through analysis and experiments, we verify the excellence of the ARM order when used with the transform-space view join: it always outperforms existing orders in both measures used, the one-pass buffer size and the number of disk accesses for a given buffer size. Compared with other conventional space filling curves used with the transform-space view join, it reduces the one-pass buffer size by up to 21.3 times and the number of disk accesses by up to 74.6%. Compared with existing spatial join algorithms that use R-trees in the original space, it reduces the one-pass buffer size by up to 15.7 times and the number of disk accesses by up to 65.3%.
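The two performance measures above can be made concrete with a small sketch: given a page-access order produced by some space filling curve, count disk accesses under an LRU buffer of a given size. Plain row major order stands in for the curve here; the ARM order itself adapts the traversal and is not reproduced.

```python
from collections import OrderedDict

def row_major_order(nx, ny):
    """Plain row major order over an nx-by-ny grid of transform-space pages
    (a simple space filling curve; the ARM order adapts this traversal)."""
    return [(x, y) for y in range(ny) for x in range(nx)]

def disk_accesses(order, buffer_size):
    """Count disk accesses (page faults) for a page-access order under an
    LRU buffer holding `buffer_size` pages."""
    buf = OrderedDict()
    faults = 0
    for page in order:
        if page in buf:
            buf.move_to_end(page)        # hit: refresh LRU position
        else:
            faults += 1                  # miss: one disk access
            if len(buf) >= buffer_size:
                buf.popitem(last=False)  # evict least recently used page
        buf[page] = True
    return faults

# A join that revisits every page once more: with a buffer at least the
# "one-pass" size, each page costs exactly one disk access.
order = row_major_order(4, 3) * 2
```

Sweeping `buffer_size` downward until repeat visits start faulting again is exactly how the one-pass buffer size of a given access order can be found empirically.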

Beak Trimming Methods - Review -

  • Glatz, P.C.
    • Asian-Australasian Journal of Animal Sciences / v.13 no.11 / pp.1619-1637 / 2000
  • A review was undertaken to obtain information on the range of beak-trimming methods available or under development. Beak-trimming of commercial layer replacement pullets is a common yet critical management tool that can affect performance for the life of the flock. The most obvious advantage of beak-trimming is a reduction in cannibalism, although the extent of the reduction depends on the strain, season, type of housing, flock health and other factors. Beak-trimming also improves feed conversion by reducing food wastage, and further reduces the chronic stress associated with dominance interactions in the flock. Beak-trimming of birds at 7-10 days is favoured by industry, but research over the last 10 years has shown that beak-trimming at day-old causes the least stress on birds, and efforts are needed to encourage industry to adopt the practice of trimming at day-old. Proper beak-trimming can result in greatly improved layer performance, but improper beak-trimming can ruin an otherwise good flock of hens. Re-trimming is practiced in most flocks, although some flocks need only one trimming. Given the continuing welfare scrutiny of using a hot blade to cut the beak, attempts have been made to develop more welfare-friendly methods of beak-trimming. Despite developments in the design of hot-blade beak-trimmers, the process has remained largely unchanged: a red-hot blade cuts and cauterises the beak. The variables in the process are blade temperature, cauterisation time, operator ability, severity of trimming, age at trimming, strain of bird and beak length. This method of beak-trimming is still overwhelmingly favoured in industry, and no alternative procedures appear to be more effective. Sharp secateurs have been used to trim the upper beak of both layers and turkeys.
Bleeding from the upper mandible ceases shortly after the operation, and despite regrowth of the beak a reduction in cannibalism has been reported. Very few differences in behaviour and production have been noted between hot-blade and cold-blade cut chickens. This method has not been used on a large scale in industry, and there are anecdotal reports of cannibalism outbreaks in birds with regrown beaks. A robotic beak-trimming machine developed in France permitted simultaneous, automated beak-trimming and vaccination of day-old chicks at up to 4,500 chickens per hour. Use of the machine was not successful because incorrectly loaded chicks could drop off the line or receive excessive or very light trimming, and robotic beak-trimming was not effective when chick weight or size varied. Capsaicin can cause degeneration of sensory nerves in mammals and decreases the rate of beak regrowth through its action on the sensory nerves. Capsaicin is a cheap, non-toxic substance that can readily be applied at the time of less severe beak-trimming, but it has the disadvantage of causing an extreme burning sensation in operators who come into contact with it during application; methods of applying the substance that minimise this risk to operators need to be explored. A method of cutting the beaks of day-old chickens with a laser beam has also been reported. No details were provided on the type of laser used or the severity of trimming, but by 16 weeks the beaks of laser-trimmed birds resembled untrimmed beaks, only without the bill tip. Feather pecking and cannibalism during the laying period were highest among the laser-trimmed hens. Transportable laser machines are now available, and research into the effectiveness of beak-trimming using the ablative and coagulative lasers used in human medicine should be explored.
Liquid nitrogen was used to declaw emu toes but was not effective: the claws regrew, and the time and cost involved in the procedure limit the potential of using this process to beak-trim birds.

Performance Evaluation of Advance Warning System for Transporting Hazardous Materials (위험물 운송을 위한 조기경보시스템 성능평가)

  • Oh Sei-Chang;Cho Yong-Sung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.4 no.1 s.6 / pp.15-29 / 2005
  • Truck Shipment Safety Information, part of the development of NERIS, is divided into an Optimal Route Guidance System and an Emergency Response System. This research establishes an advance warning system that aims to prevent damage (fire, explosion, gas escape, etc.) and detect incidents that can occur while transporting hazardous materials, by monitoring the position of the moving vehicles and the state of the hazardous materials in real time. The research was performed to confirm the practical applicability of an advance warning system that monitors whether hazardous materials transport vehicles stay on their allowed routes, promptly finds the time and location of vehicle incidents, and responds to incidents through an emergency system, using GPS, CDMA and GIS technologies, with performance testing. In the tests, communication accuracies were 99% on freeways, 96% on arterials, 97% in hilly sections, 99% in normal sections, 96% in local sections, 99% in urban sections and 98% in tunnels. These results show a communication success rate high enough for application to real sites. However, a weak point that appeared during testing is that communication is limited in rural areas and other areas with fewer antennas enabling communication between the on-board unit and the management server. Consequently, for practical use of this system, it is essential to develop a dedicated on-board unit for the vehicles and to find a method that compensates for the limited reception of GPS coordinates inside tunnels. Additionally, this system can be used to regulate illegal acts automatically, such as illegal abandonment of hazardous materials.
The system can also inform studies of application schemes and guidelines for transporting hazardous materials, since Korea has no established management system or act for toxic substances.
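The route-adherence check described above — deciding from a reported GPS position whether a vehicle has left its allowed route — can be sketched as a point-to-polyline distance test. The threshold value and the use of already-projected planar coordinates are illustrative assumptions, not the system's actual implementation.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (planar coordinates assumed,
    e.g. positions already projected from GPS latitude/longitude)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter so we measure to the segment, not the line.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def off_route(position, route, threshold):
    """True when the vehicle is farther than `threshold` from every segment
    of the allowed route polyline — the condition for an advance warning."""
    return all(point_segment_distance(position, a, b) > threshold
               for a, b in zip(route, route[1:]))

route = [(0, 0), (10, 0), (10, 5)]  # allowed route polyline (illustrative)
```

In a monitoring loop, each CDMA-reported position would be run through `off_route` and a deviation would raise the warning to the management server.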

Development and Performance Test of Preamplifier and Amplifier for Gamma Probe (감마프로브용 전단증폭기와 주증폭기의 개발과 성능 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Lee, Jong-Doo;Kwon, Soo-Il
    • The Korean Journal of Nuclear Medicine / v.33 no.1 / pp.100-109 / 1999
  • Purpose: A preamplifier and an amplifier are very important parts in developing a portable counting or imaging gamma probe: they are used to analyze pulses containing energy and position information for the emitted radiation. Commercial Nuclear Instrumentation Modules (NIMs) can be used to process these pulses, but their size and high price make them improper for a portable gamma probe. The purpose of this study was to develop both a preamplifier and an amplifier and to measure their performance characteristics. Materials and Methods: The preamplifier and amplifier were designed as a charge-sensitive device and a capacitor-resistor-resistor-capacitor (CR-RC) shaping circuit, respectively, and were mounted on a printed circuit board (PCB). We acquired and analyzed energy spectra for Tc-99m and Cs-137 using both the PCB and NIMs. A multichannel analyzer (Accuspec/A, Canberra Industries Inc., Meriden, Connecticut, U.S.A.) and scintillation detectors (EP-047 (Bicron Saint-Gobain/Norton Industrial Ceramics Co., Ohio, U.S.A.) with a 2"×2" NaI(Tl) crystal, and R1535 (Hamamatsu Photonics K.K., Electron Tube Center, Shizuoka-ken, Japan) with a 1"×1" NaI(Tl) crystal) were used to acquire the energy spectra. Results: Using the PCB, the energy resolutions of the EP-047 detector for Tc-99m and Cs-137 were 12.92% and 5.01%, respectively, while the R1535 showed 13.75% and 5.19%. Using the NIM devices, the energy resolutions of the EP-047 detector for Tc-99m and Cs-137 were measured as 14.6% and 7.58%, respectively; a reliable energy spectrum could not be acquired with the R1535 detector because its photomultiplier tube (PMT) requires a specific type of preamplifier. Conclusion: We developed a preamplifier and amplifier suitable for a small gamma probe that showed good energy resolution independent of PMT type. The results indicate that the PCB can be used to develop both counting and imaging gamma probes.
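A CR-RC shaper with equal time constants turns the charge-sensitive preamplifier's step output into the standard unipolar pulse v(t) = (t/τ)·e^(−t/τ), which peaks at t = τ with amplitude 1/e. A small sketch of that textbook response (the 1 µs time constant is an illustrative value, not one reported by the paper):

```python
import numpy as np

def cr_rc_response(t, tau):
    """Unit-step response of a CR-RC shaping circuit with equal time
    constants: v(t) = (t/tau) * exp(-t/tau), peaking at t = tau."""
    return (t / tau) * np.exp(-t / tau)

tau = 1.0e-6                          # 1 microsecond shaping time (illustrative)
t = np.linspace(0.0, 10 * tau, 10001)
v = cr_rc_response(t, tau)
peak_time = t[np.argmax(v)]           # the pulse peaks at t = tau
```

The peaking time sets the trade-off the designers face: a longer τ improves energy resolution but limits the count rate the probe can handle.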

A Survey on Satisfaction Measurement of Automatic Milking System in Domestic Dairy Farm (자동착유시스템 설치농가의 설치 후 만족도에 관한 실태조사)

  • Ki, Kwang-Seok;Kim, Jong-Hyeong;Jeong, Young-Hun;Kim, Yun-Ho;Park, Sung-Jai;Kim, Sang-Bum;Lee, Wang-Shik;Lee, Hyun-June;Cho, Won-Mo;Baek, Kwang-Soo;Kim, Hyeon-Shup;Kwon, Eung-Gi;Kim, Wan-Young;Jeo, Joon-Mo
    • Journal of Animal Environmental Science / v.17 no.1 / pp.39-48 / 2011
  • The present survey was conducted to provide basic information on automatic milking system (AMS) in relation to purchase motive, milk yield and quality, customer satisfaction, difficulties of operation and customer suggestions, etc. Purchase motives of AMS were insufficient labor (44%), planning of dairy experience farm (25%), better performance of high yield cows (19%) and others (6%), respectively. Average cow performance after using AMS was 30.9l/d for milk yield, 3.9% for milk fat, 9,100/ml for bacterial counts. Sixty-eight percentage of respondents were very positive in response to AMS use for their successors but 18% were negative. The AMS operators were owner (44%), successor (44%), wife (6%) and company worker (6%), respectively. The most difficulty (31%) in using AMS was operating the system and complicated program manual. The rate of response to system error and breakdown was 25%. The reasons for culling cow after using AMS were mastitis (28%), reproduction failure (19%), incorrect teat placement (12%), metabolic disease (7%) and others (14%), respectively. Fifty-six percentages of the respondents made AMS maintenance contract and 44% did not. Average annual cost of the maintenance contract was 6,580,000 won. Average score for AMS satisfaction measurement (1 to 5 range) was 3.2 with decrease of labor cost 3.7, company A/S 3.6, increase of milk yield 3.2 and decrease of somatic cell count 2.8, respectively. Suggestions for the higher efficiency in using AMS were selecting cows with correct udder shape and teat placement, proper environment, capital and land, and attitude for continuous observation. Systematic consulting was highly required for AMS companies followed by low cost for AMS setup and systematization of A/S.