
Radiosynthesis of [¹¹C]6-OH-BTA-1 in Different Media and Confirmation of Reaction By-products ([¹¹C]6-OH-BTA-1 조제 시 생성되는 부산물 규명과 반응용매에 따른 표지 효율 비교)

  • Lee, Hak-Jeong;Jeong, Jae-Min;Lee, Yun-Sang;Kim, Hyung-Woo;Lee, Eun-Kyoung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.241-246
    • /
    • 2007
  • Purpose: [¹¹C]6-OH-BTA-1 ([N-methyl-¹¹C]2-(4'-methylaminophenyl)-6-hydroxybenzothiazole, 1), a β-amyloid PET imaging agent for the diagnosis of Alzheimer's disease, can be labeled in high yield by a simple loop method. During the synthesis of [¹¹C]1, we found that by-products formed in various solvents, e.g., methyl ethyl ketone (MEK), cyclohexanone (CHO), diethyl ketone (DEK), and dimethylformamide (DMF). Materials and Methods: In an automated radiosynthesis module, 1 mg of 4-aminophenyl-6-hydroxybenzothiazole (4) in 100 μl of each solvent was reacted with [¹¹C]methyl triflate in an HPLC loop at room temperature (RT). The reaction mixture was separated by semi-preparative HPLC. Aliquots eluted at 14.4, 16.3, and 17.6 min were collected and analyzed by analytical HPLC and LC/MS. Results: The labeling efficiencies of [¹¹C]1 were 86.0±5.5%, 59.7±2.4%, 29.9±1.8%, and 7.6±0.5% in MEK, CHO, DEK, and DMF, respectively. The LC/MS spectra of the three products eluted at 14.4, 16.3, and 17.6 min showed m/z peaks at 257.3 (M+1), 257.3 (M+1), and 271.3 (M+1), respectively, identifying them as 1, 2-(4'-aminophenyl)-6-methoxybenzothiazole (2), and by-product (3). The ratios of labeling efficiencies for the three products ([¹¹C]1 : [¹¹C]2 : [¹¹C]3) were 86.0±5.5% : 5.0±3.4% : 1.5±1.3% in MEK, 59.7±2.4% : 4.7±3.2% : 1.3±0.5% in CHO, 29.9±1.8% : 2.0±0.7% : 0.3±0.1% in DEK, and 7.6±0.5% : 0.0% : 0.0% in DMF. Conclusion: The labeling efficiency of [¹¹C]1 was highest when MEK was used as the reaction solvent. The structures of 1 and 2 were confirmed by mass spectrometry; the structure of 3 was presumed.

Assessment of Cerebral Hemodynamic Changes in Pediatric Patients with Moyamoya Disease Using Probabilistic Maps on Analysis of Basal/Acetazolamide Stress Brain Perfusion SPECT (소아 모야모야병에서 뇌확률지도를 이용한 수술전후 혈역학적 변화 분석)

  • Lee, Ho-Young;Lee, Jae-Sung;Kim, Seung-Ki;Wang, Kyu-Chang;Cho, Byung-Kyu;Chung, June-Key;Lee, Myung-Chul;Lee, Dong-Soo
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.3
    • /
    • pp.192-200
    • /
    • 2008
  • To evaluate the hemodynamic changes and the predictive factors of clinical outcome in pediatric patients with moyamoya disease, we analyzed pre- and post-operative basal/acetazolamide stress brain perfusion SPECT with an automated volume-of-interest (VOI) method. Methods: Fifty-six pediatric patients with moyamoya disease (M:F = 33:24, age 6.7±3.2 years) were included; all underwent basal/acetazolamide stress brain perfusion SPECT within 6 months before and after revascularization surgery (encephalo-duro-arterio-synangiosis (EDAS) with frontal encephalo-galeo-synangiosis (EGS), followed by EDAS only on the contralateral hemisphere) and were followed up for more than 6 months after the post-operative SPECT. The mean follow-up period after post-operative SPECT was 33±21 months. Each patient's SPECT image was spatially normalized to a Korean template with SPM2. For regional count normalization, the count of the pons was used as the reference region. The basal/acetazolamide-stressed cerebral blood flow (CBF), the cerebral vascular reserve index (CVRI), and the extent of the area with significantly decreased basal/acetazolamide-stressed rCBF compared with age-matched normal controls were evaluated in the medial frontal, frontal, parietal, and occipital lobes bilaterally and in the whole brain of each patient's images. The post-operative clinical outcome was rated as good or poor by a pediatric neurosurgeon according to the presence of transient ischemic attacks and/or fixed neurological deficits. Results: In a paired t-test, basal/acetazolamide-stressed rCBF and the CVRI were significantly improved after revascularization (p<0.05). The significant difference in pre-operative basal/acetazolamide-stressed rCBF and CVRI between the hemispheres where EDAS with frontal EGS was performed and their contralateral counterparts where EDAS only was done disappeared after the operation (p<0.05). In an independent Student's t-test, the pre-operative basal rCBF in the medial frontal gyrus, the post-operative CVRI in the frontal and parietal lobes of the hemispheres with EDAS and frontal EGS, the post-operative CVRI, and the ΔCVRI showed significant differences between patients with a good and a poor clinical outcome (p<0.05). In a multivariate logistic regression analysis, the ΔCVRI and the post-operative CVRI of the medial frontal gyrus on the hemispheres where EDAS with frontal EGS was performed were the significant predictive factors for clinical outcome (p=0.002, p=0.015). Conclusion: With probabilistic maps, we could objectively evaluate the pre- and post-operative hemodynamic changes in pediatric patients with moyamoya disease. Specifically, the ΔCVRI and the post-operative CVRI of the medial frontal gyrus where EDAS with frontal EGS was done were significant predictive factors for clinical outcome.
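
The abstract does not spell out the CVRI formula; a common definition in basal/acetazolamide SPECT studies is the percentage change from basal to acetazolamide-stressed rCBF. A minimal sketch under that assumed definition, with hypothetical pons-normalized VOI values (the variable names and numbers are placeholders, not the paper's data):

```python
import numpy as np

def cvri(basal_rcbf: np.ndarray, stress_rcbf: np.ndarray) -> np.ndarray:
    """Cerebral vascular reserve index per VOI.

    Assumes the common definition: percentage change from basal to
    acetazolamide-stressed rCBF, with both inputs already count-normalized
    to the pons reference region (the paper's normalization choice).
    """
    return (stress_rcbf - basal_rcbf) / basal_rcbf * 100.0

# Hypothetical pons-normalized VOI values for one patient, pre- and post-op
pre_basal, pre_stress = np.array([0.82, 0.79]), np.array([0.85, 0.80])
post_basal, post_stress = np.array([0.88, 0.86]), np.array([0.98, 0.95])

pre_cvri = cvri(pre_basal, pre_stress)
post_cvri = cvri(post_basal, post_stress)
delta_cvri = post_cvri - pre_cvri   # the paper's ΔCVRI predictor, assuming post minus pre
```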

Influence of Age on The Adenosine Deaminase Activity in Patients with Exudative Pleural Effusion (연령의 증가가 삼출성 흉수 Adenosine Deaminase 활성도에 미치는 영향)

  • Yeon, Kyu-Min;Kim, Chong-Ju;Kim, Jeong-Soo;Kim, Chi-Hoon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.53 no.5
    • /
    • pp.530-541
    • /
    • 2002
  • Background: Pleural fluid adenosine deaminase (ADA) activity can be helpful in the differential diagnosis of an exudative pleural effusion because it is increased in tuberculous pleural effusion. ADA activity is determined mainly by lymphocyte function, and age-associated immune decline is characterized by a decrease in T-lymphocyte function. For that reason, the pleural fluid ADA level may be lower in older patients with exudative pleural effusion. This study focused on the influence of age on pleural fluid ADA activity in patients with exudative pleural effusion. Methods: A total of 81 patients with exudative pleural effusion were enrolled. In all patients, pleural fluid ADA activity was measured using an automated kinetic method. Results: The mean age of the patients was 52.7±21.2 years. Across all patients with exudative pleural effusion, pleural fluid ADA activity differed significantly between young patients (under 65 years of age) and old patients (p<0.05) and showed a negative correlation with age (r=-0.325, p<0.05). In the 60 patients with tuberculous pleural effusion, pleural fluid ADA activity differed significantly between young and old patients (103.5±36.9 IU/L vs. 72.2±31.6 IU/L, p<0.05) and showed a negative correlation with age (r=-0.384, p<0.05). In the 21 patients with non-tuberculous exudative pleural effusion, the pleural fluid ADA activity of young and old patients was similar (23.7±15.3 IU/L vs. 16.1±10.2 IU/L, p>0.05) and showed no correlation with age (r=-0.263, p>0.05). The diagnostic cutoff value of pleural fluid ADA activity for tuberculous pleural effusion was lower in the older patients (25.9 IU/L) than in the younger patients (49.1 IU/L) or in all patients (38.4 IU/L) with exudative pleural effusion. Conclusion: Tuberculous pleural effusion remains an important possibility in older patients with a compatible clinical picture, even when no marked increase in pleural fluid ADA activity is detected. For the diagnosis of tuberculous pleural effusion in old patients, the cutoff for pleural fluid ADA activity should be set lower.
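
The abstract does not state how the age-stratified cutoff values were derived; one standard approach is to maximize Youden's J on an ROC curve. A minimal sketch under that assumption, with hypothetical ADA values and labels:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(ada_values: np.ndarray, is_tb: np.ndarray) -> float:
    """Cutoff maximizing sensitivity + specificity - 1 (Youden's J).

    The paper does not state its cutoff-derivation method; Youden's J on
    an ROC curve is one standard choice, used here purely for illustration.
    """
    fpr, tpr, thresholds = roc_curve(is_tb, ada_values)
    return float(thresholds[np.argmax(tpr - fpr)])

# Hypothetical data: pleural fluid ADA (IU/L), TB/non-TB labels, patient age
ada = np.array([103.5, 72.2, 23.7, 16.1, 88.0, 30.5, 61.0, 12.4])
tb = np.array([1, 1, 0, 0, 1, 0, 1, 0])
age = np.array([40, 72, 35, 70, 55, 68, 45, 75])

old = age >= 65
print(youden_cutoff(ada[old], tb[old]))    # age-stratified cutoff, old patients
print(youden_cutoff(ada[~old], tb[~old]))  # cutoff for younger patients
```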

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; because it avoids investment risk structurally, it is stable for managing large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It scales to billions of examples in limited-memory environments, trains much faster than traditional boosting methods, and is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation step. Because an optimized asset allocation model estimates portfolio weights from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors degrade the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model; in doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. While various studies address parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new method that reduces them with machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
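
As a rough illustration of the pipeline described above (not the authors' code), the sketch below predicts each sector's next-period volatility with XGBoost, keeps the historical correlation structure while swapping in the predicted volatilities, and solves for risk-parity weights. The feature construction, model settings, and data are assumptions made for the sketch:

```python
import numpy as np
from scipy.optimize import minimize
from xgboost import XGBRegressor

def predicted_covariance(returns: np.ndarray, pred_vols: np.ndarray) -> np.ndarray:
    """Keep the historical correlation structure but replace the
    per-asset volatilities with the XGBoost forecasts."""
    corr = np.corrcoef(returns.T)
    return corr * np.outer(pred_vols, pred_vols)

def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
    """Long-only weights equalizing each asset's risk contribution."""
    n = cov.shape[0]

    def objective(w):
        port_var = w @ cov @ w
        rc = w * (cov @ w)                   # per-asset risk contributions
        return np.sum((rc - port_var / n) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

# Hypothetical usage: one XGBoost model per sector, trained on stand-in
# volatility features (the paper's exact feature set is not specified here).
rng = np.random.default_rng(0)
rets = rng.normal(0, 0.01, size=(1000, 10))   # 10 sectors, in-sample window
features = np.abs(rets[:-20])                 # stand-in predictor of risk
targets = np.abs(rets[20:])                   # stand-in volatility proxy
pred_vols = np.array([
    XGBRegressor(n_estimators=50).fit(features[:, [i]], targets[:, i])
                                 .predict(features[-1:, [i]])[0]
    for i in range(10)
])
weights = risk_parity_weights(predicted_covariance(rets, pred_vols))
```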

Records Management and Archives in Korea : Its Development and Prospects (한국 기록관리행정의 변천과 전망)

  • Nam, Hyo-Chai
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.1 no.1
    • /
    • pp.19-35
    • /
    • 2001
  • After almost a century of discontinuity in the archival tradition of the Chosun dynasty, Korea entered a new age of records and archival management by legislating and executing a basic law (the Records and Archives Management of Public Agencies Act of 1999). The Annals of the Chosun dynasty recorded the major historical facts of five hundred years of national affairs; they are a major accomplishment in human history and rare in the world. This was possible because the Annals were composed of collected, selected, and compiled records of primary sources written and compiled by generations of historians. As important public records need to be preserved in their original forms in modern archives, Korea had to develop a modern archival system to appraise and select important national records for archival preservation. However, the colonization of Korea deprived us of the opportunity to do so, and our fine archival tradition was not carried on. A centralized archival system began to develop after the establishment of GARS under the Ministry of Government Administration in 1969. GARS built a modern repository in Pusan in 1984, succeeding to the tradition of the History Archives of the Chosun dynasty. In 1998, GARS moved its headquarters to the Taejon Government Complex and acquired state-of-the-art audiovisual archives preservation facilities. From 1996, GARS introduced an automated archival management system to remedy the manual registration and management system, complementing preservation microfilming. Digitization of the holdings was the key project for providing digital images of archives to users; to do this, GARS purchased new computer/server systems and developed application software. In parallel, GARS drastically renovated its manpower composition toward a high level of professionalization by recruiting more archivists with history and library science backgrounds, along with conservators and computer system operators. The new archival law has been in effect since January 1, 2000, and made the following changes in the field of records and archival administration in Korea. First, the law regulates the records and archives of all public agencies, including the Legislature, the Judiciary, the Administration, the constitutional institutions, the Army, Navy, and Air Force, and the National Intelligence Service; a nationwide unified records and archives management system became available. Second, public archives and records centers are to be established according to the level of the agency: a central archives at the national level, special archives for the National Assembly and the Judiciary, local government archives for metropolitan cities and provinces, and records centers or special records centers for administrative agencies. A records manager is responsible for the records management of each administrative division. Third, records in public agencies are registered in a computer system as they are produced; they are therefore traceable and can be searched and retrieved easily through the internet or a computer network. Fourth, qualified records managers and archivists who are professionally trained in records management and archival science must be assigned to guarantee professional management of records and archives. Fifth, the illegal treatment of public records and archives constitutes a punishable crime.

In the future, public records and archival management will develop along with the Korean government's 'Electronic Government Project.' The following changes are in prospect. First, public agencies will digitize paper records, audiovisual records, and publications as well as electronic documents, thus promoting administrative efficiency and productivity. Second, the National Assembly has already established its Special Archives; the Judiciary and the National Intelligence Service will follow, and more archives will be established at the city and provincial levels. Third, the more our society develops into a knowledge-based information society, the more records management will become one of the important functions of the national government. As more universities, academic associations, and civil societies participate in promoting archival awareness and in establishing archival science, and as more people realize the importance of records and archives management, up to the level of a national public campaign, records and archival management in Korea will develop significantly beyond present practice.

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly; many of them discover relevant regulations they were unaware of only after they have pushed ahead with their work. In addition, statutory statements related to the Defense Acquisition Program can cause serious issues if even a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that corrects this issue in real time have been minimal. This paper therefore proposes an implementation plan for a 'Comparison System between the Statement of Military Reports and Related Laws' that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and the corresponding statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Several artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of 'Original Sentences' (taken from actual statutes) and 'Edited Sentences' (modified sentences derived from the originals). Among the many Defense Acquisition Program-related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The 'Original Sentences' consist of the 83 clauses in these statutes that working-level officials encounter most often in their work. For each original sentence, the 'Edited Sentences' comprise 30 to 50 similar sentences likely to appear, in modified form, in military reports. During their creation, the original sentences were modified using 12 predefined rules, and the edited sentences were produced in proportion to the number of such rules applied. After conducting 1:1 sentence-similarity performance evaluation experiments, each 'Edited Sentence' could be classified as legal or illegal with considerable accuracy. However, the 'Edited Sentence' dataset used to train the neural network models is characterized by those same 12 rules, so models trained only on the 'Original Sentence' and 'Edited Sentence' data cannot effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize genuinely new incoming sentences. Hence, the performance of the models was reassessed with an additional 120 newly written sentences that more closely resemble those in actual military reports while still being associated with the original sentences.

Even then, we were able to confirm that the models' performance surpassed a certain level despite their being trained merely on the 'Original Sentence' and 'Edited Sentence' data. If sufficient model learning is achieved by improving and expanding the training data with sentences that actually appear in reports, the models will be able to classify sentences from military reports as legal or illegal more reliably. Based on the experimental results, this study confirms the possibility and value of building a 'Real-Time Automated Comparison System between Military Documents and Related Laws'. The approach developed in this experiment can identify which specific clause, among the several that appear in the related statutes, is most similar to a sentence appearing in a Defense Acquisition Program-related military report, which helps determine whether the contents of the report sentence are at risk of illegality when compared with the law.
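
A minimal sketch of a Siamese Bi-LSTM similarity classifier of the kind described above; the vocabulary size, layer dimensions, and the absolute-difference similarity head are assumptions for illustration, not the authors' exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 128, 60  # assumed hyperparameters

def encoder() -> Model:
    """Shared sentence encoder: embedding followed by a Bi-LSTM."""
    inp = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(inp)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    return Model(inp, x)

shared = encoder()
sent_a = layers.Input(shape=(MAX_LEN,), dtype="int32")  # report sentence
sent_b = layers.Input(shape=(MAX_LEN,), dtype="int32")  # statutory sentence
vec_a, vec_b = shared(sent_a), shared(sent_b)

# Combine the two encodings and score pair similarity (legal vs. illegal edit).
diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([vec_a, vec_b])
score = layers.Dense(1, activation="sigmoid")(diff)

siamese = Model([sent_a, sent_b], score)
siamese.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy"])
# siamese.fit([tokens_a, tokens_b], labels, ...)  # e.g., the 3,442 sentence pairs
```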

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond the stakeholders of the bankrupt companies themselves, including managers, employees, creditors, and investors. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, where everything collapsed in a single moment. The key variables associated with corporate default vary over time: Deakin's (1972) study, revisiting the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice's (2001) re-examination of Zmijewski's (1984) and Ohlson's (1980) models likewise found that the importance of predictive variables shifts. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias with a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent through time, we first train a deep learning time series model on the data before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007-2008); the resulting model shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then re-trained on the combined training and validation data (2000-2008), applying the optimal parameters found in validation. Finally, the corporate default prediction models trained over these nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of a corporate default prediction model based on a deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies, and multiple discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups.

The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the difficulties of nonlinear variables, multicollinearity, and lack of data: the logit model handles nonlinearity, the Lasso regression model solves the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and has stronger predictive power. Through the Fourth Industrial Revolution, the current government and other governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
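
A minimal sketch of the time-series setup described above: an LSTM reads a few consecutive years of financial ratios per firm and outputs a default probability. The sequence length, feature count, and data below are placeholders, not the paper's actual specification:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

N_YEARS, N_RATIOS = 3, 20   # assumed: three consecutive annual statements

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_YEARS, N_RATIOS)),
    layers.LSTM(32),                        # summarizes the firm's trajectory
    layers.Dense(1, activation="sigmoid"),  # P(default in the next period)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Period-based split as in the paper: train on 2000-2006, tune on
# 2007-2008 (crisis years), test on 2009. Random data stands in here.
rng = np.random.default_rng(42)
X_train = rng.random((500, N_YEARS, N_RATIOS), dtype=np.float32)
y_train = rng.integers(0, 2, 500)
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)
```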

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing a great opportunity for the public through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies with various systems; nevertheless, there are still few realized business models based on big data analysis. Against this background, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of a credit guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of the predictive models, including logistic regression, Random Forest, XGBoost, LightGBM, and a DNN (deep neural network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Work on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and widely used in both research and practice to this day; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of the earlier models, and Elmer and Borowski (1988) developed and examined a rule-based automated system that conducts financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks; Yang (1996) applied multiple discriminant analysis and the logit model; and Kim and Kim (2001) used artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest and SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only the classification accuracy of each model on the entire sample. Most of the predictive models in this paper classify about 70% of the entire sample correctly; specifically, the LightGBM model shows the highest accuracy at 71.1% and the logit model the lowest at 69%. These results, however, are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.

Examining the classification accuracy for each interval, the logit model has the highest accuracy, 100%, for the 0~10% band of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% band. Random Forest, XGBoost, LightGBM, and the DNN show more desirable results: they are more accurate in both the 0~10% and 90~100% bands but less accurate, around 50%, in the middle of the predicted probability range. As for the distribution of samples across the bands, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% bands. Although the Random Forest model has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost may be the more desirable models because they classify a large number of cases into the two extreme bands of predicted default probability, even allowing for their somewhat lower classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, more comprehensive ensemble models could be constructed that contain multiple machine learning classifiers and use majority voting to maximize overall performance.
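
The interval-wise evaluation described above is straightforward to reproduce for any model that outputs default probabilities; the sketch below bins predictions into ten equal bands and reports per-band accuracy and sample counts (the 0.5 threshold and the names are assumptions for the sketch):

```python
import numpy as np
import pandas as pd

def accuracy_by_decile(y_true: np.ndarray, p_hat: np.ndarray,
                       threshold: float = 0.5) -> pd.DataFrame:
    """Classification accuracy and sample count for each 10%-wide band
    of predicted default probability, mirroring the evaluation above."""
    band = np.minimum((p_hat * 10).astype(int), 9)   # maps 1.0 into the 90~100% band
    pred = (p_hat >= threshold).astype(int)
    df = pd.DataFrame({"band": band, "correct": pred == y_true})
    out = df.groupby("band").agg(n=("correct", "size"),
                                 accuracy=("correct", "mean"))
    out.index = [f"{10*i}~{10*(i+1)}%" for i in out.index]
    return out

# Hypothetical usage with any fitted scikit-learn-style classifier:
# p_hat = model.predict_proba(X_test)[:, 1]
# print(accuracy_by_decile(y_test, p_hat))
```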