• Title/Summary/Keyword: Selection Combining


A prognosis discovering lethal-related genes in plants for target identification and inhibitor design (식물 치사관련 유전자를 이용하는 신규 제초제 작용점 탐색 및 조절물질 개발동향)

  • Hwang, I.T.;Lee, D.H.;Choi, J.S.;Kim, T.J.;Kim, B.T.;Park, Y.S.;Cho, K.Y.
    • The Korean Journal of Pesticide Science / v.5 no.3 / pp.1-11 / 2001
  • New technologies will have a large impact on the discovery of new herbicide sites of action. Genomics, combinatorial chemistry, and bioinformatics help take advantage of serendipity through the sequencing of huge numbers of genes or the synthesis of large numbers of chemical compounds. There are approximately $10^{30}$ to $10^{50}$ possible molecules in molecular space, of which only a fraction have been synthesized. Combining this potential with access to 50,000 plant genes in the future elevates the probability of discovering new herbicidal sites of action. If 0.1, 1.0, or 10% of the total genes in a typical plant are valid herbicide targets, a plant with 50,000 genes would provide about 50, 500, or 5,000 targets, respectively. However, only 11 herbicide targets have been identified and commercialized. The successful design of novel herbicides depends on careful consideration of a number of factors, including target enzyme selection and validation, inhibitor design, and metabolic fate. Biochemical information can be used to identify enzymes which produce lethal phenotypes. The identification of a lethal target site is an important step in this approach. An examination of the characteristics of known targets provides crucial insight into the definition of a lethal target. Recently, antisense RNA suppression of enzyme translation has been used to determine the genes required for toxicity and offers a strategy for identifying lethal target sites. After the identification of a lethal target, detailed knowledge such as the enzyme kinetics and the protein structure may be used to design potent inhibitors. Various types of inhibitors may be designed for a given enzyme. Strategies for the selection of new enzyme targets giving the desired physiological response upon partial inhibition include the identification of chemical leads, lethal mutants, and the use of antisense technology. 
Enzyme inhibitors having agrochemical utility can be categorized into six major groups: ground-state analogues, group-specific reagents, affinity labels, suicide substrates, reaction intermediate analogues, and extraneous site inhibitors. In this review, examples of each category, and their advantages and disadvantages, will be discussed. Target identification and construction of a potent inhibitor may not, in themselves, lead to an effective herbicide. The desired in vivo activity, uptake and translocation, and metabolism of the inhibitor should be studied in detail to assess the full potential of the target. Strategies for delivery of the compound to the target enzyme and avoidance of premature detoxification may include a proherbicidal approach, especially when inhibitors are highly charged or when selective detoxification or activation can be exploited. Utilization of differences in detoxification or activation between weeds and crops may enhance selectivity. Without a full appreciation of each of these facets of herbicide design, the chances for success with the target- or enzyme-driven approach are reduced.


Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of KNN base classifiers and the feature subsets selected for base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other to avoid overfitting. The prediction accuracy against this dataset was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were also investigated. 
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
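The random subspace KNN ensemble described above can be sketched in a few lines of Python. This is a minimal illustration on invented toy data, not the paper's implementation: each base classifier is a plain KNN restricted to a randomly drawn feature subset, and the members' predictions are aggregated by majority vote.

```python
import random
from collections import Counter

def knn_predict(train, labels, x, k, feats):
    """Classify x by majority vote among its k nearest neighbors,
    measuring distance only on the feature indices in `feats`."""
    dists = sorted(
        (sum((t[f] - x[f]) ** 2 for f in feats), y)
        for t, y in zip(train, labels)
    )
    return Counter(y for _, y in dists[:k]).most_common(1)[0][0]

def random_subspace_knn(train, labels, x, n_members=9, k=3, subspace=2, seed=0):
    """Aggregate KNN base classifiers, each trained on a randomly
    chosen feature subspace, by majority vote."""
    rng = random.Random(seed)
    n_feats = len(train[0])
    votes = [
        knn_predict(train, labels, x, k, rng.sample(range(n_feats), subspace))
        for _ in range(n_members)
    ]
    return Counter(votes).most_common(1)[0][0]

# Toy data: 4 "financial ratios"; class-1 points sit near (1,1,1,1).
train = [[0, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 0],
         [1, 1, 1, 1], [1, 1, 0, 1], [0, 1, 1, 1]]
labels = [0, 0, 0, 1, 1, 1]
print(random_subspace_knn(train, labels, [1, 1, 1, 1]))  # class 1
```

The paper's contribution is then to replace the uniform random draw of `feats` (and the fixed `k`) with values chosen per member by a genetic algorithm.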

A Regression-Model-based Method for Combining Interestingness Measures of Association Rule Mining (연관상품 추천을 위한 회귀분석모형 기반 연관 규칙 척도 결합기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.127-141 / 2017
  • Advances in Internet technologies and the proliferation of mobile devices have enabled consumers to access a wide range of goods and services, with the adverse effect that they have a hard time finding congenial items even when they devote much time to searching for them. Accordingly, businesses are using recommender systems to provide tools for consumers to find desired items more easily. Association Rule Mining (ARM) technology is advantageous to recommender systems in that ARM provides an intuitive form of rule with interestingness measures (support, confidence, and lift) describing the relationship between items. Given an item, its relevant items can be distinguished with the help of the measures that show the strength of relationship between items. Based on that strength, the most pertinent items can be chosen from among other items and exposed on a given item's web page. However, the diversity of the measures can make it unclear which items are more recommendable. Given two rules, for example, one rule's support and confidence may not be concurrently superior to the other rule's. Such discrepancy among the measures in distinguishing one rule's superiority from another's may cause difficulty in selecting proper items for recommendation. In addition, in an online environment where a web page or mobile screen can provide only a limited number of recommendations that attract consumer interest, the prudent selection of items to be included in the list of recommendations is very important. The exposure of items of little interest may lead consumers to ignore the recommendations, and such consumers will then possibly pay no attention to other forms of marketing activities. Therefore, the measures should be aligned with the probability of consumers' acceptance of recommendations. For this reason, this study proposes a model-based approach to combine those measures into one unified measure that can consistently determine the ranking of recommended items. 
A regression model was designed to describe how well the measures (independent variables: support, confidence, and lift) explain consumers' acceptance of recommendations (dependent variable: the hit rate of recommended items). The model is intuitive to understand and easy to use in that the equation consists of the commonly used measures for ARM and can be used to estimate hit rates. An experiment using transaction data from one of Korea's largest online shopping malls was conducted to show that the proposed model can improve the hit rates of recommendations. From the top of the list to the 13th place, recommended items in the higher rankings from the proposed model show higher hit rates than those from the competing model. The result shows that the proposed model's performance is superior to the competing model's in an online recommendation environment. On a web page, consumers are typically provided around ten recommendations, a range over which the proposed model outperforms. Moreover, a mobile device cannot expose many items simultaneously due to its limited screen size; the result therefore shows that the newly devised recommendation technique is also suitable for mobile recommender systems. While this study covers cross-selling in online shopping malls that handle merchandise, the proposed method can be expected to apply in various situations in which association rules are used. For example, this model can be applied to medical diagnostic systems that predict candidate diseases from a patient's symptoms. To increase the efficiency of the model, additional variables will need to be considered in future studies. For example, price is a good candidate for an explanatory variable because it has a major impact on consumer purchase decisions. If the prices of recommended items are much higher than those of the items in which a consumer is interested, the consumer may hesitate to accept the recommendations.
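The combining idea can be sketched as an ordinary least-squares regression of hit rate on the three interestingness measures. This is a hypothetical Python illustration, not the paper's fitted model: the training sample of (support, confidence, lift, hit rate) tuples is invented, and the resulting equation is used as the unified ranking score.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_ols(X, y):
    """beta = (X'X)^-1 X'y with an intercept column prepended."""
    Xa = [[1.0] + row for row in X]
    n = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(n)]
    return solve(XtX, Xty)

# Hypothetical past rules: (support, confidence, lift) and the observed
# hit rate of the recommendations each rule produced.
X = [[0.10, 0.50, 1.2], [0.05, 0.70, 2.0], [0.20, 0.40, 1.0],
     [0.08, 0.60, 1.5], [0.15, 0.55, 1.3], [0.03, 0.80, 2.5]]
y = [0.12, 0.18, 0.09, 0.15, 0.13, 0.21]
beta = fit_ols(X, y)

def unified_score(support, confidence, lift):
    """One number per rule: the estimated hit rate, used for ranking."""
    return beta[0] + beta[1] * support + beta[2] * confidence + beta[3] * lift
```

Two rules whose raw measures disagree (one higher in support, the other in confidence and lift) now get a single comparable score, resolving the ranking ambiguity the abstract describes.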

Establishing a Nomogram for Stage IA-IIB Cervical Cancer Patients after Complete Resection

  • Zhou, Hang;Li, Xiong;Zhang, Yuan;Jia, Yao;Hu, Ting;Yang, Ru;Huang, Ke-Cheng;Chen, Zhi-Lan;Wang, Shao-Shuai;Tang, Fang-Xu;Zhou, Jin;Chen, Yi-Le;Wu, Li;Han, Xiao-Bing;Lin, Zhong-Qiu;Lu, Xiao-Mei;Xing, Hui;Qu, Peng-Peng;Cai, Hong-Bing;Song, Xiao-Jie;Tian, Xiao-Yu;Zhang, Qing-Hua;Shen, Jian;Liu, Dan;Wang, Ze-Hua;Xu, Hong-Bing;Wang, Chang-Yu;Xi, Ling;Deng, Dong-Rui;Wang, Hui;Lv, Wei-Guo;Shen, Keng;Wang, Shi-Xuan;Xie, Xing;Cheng, Xiao-Dong;Ma, Ding;Li, Shuang
    • Asian Pacific Journal of Cancer Prevention / v.16 no.9 / pp.3773-3777 / 2015
  • Background: This study aimed to establish a nomogram by combining clinicopathologic factors with the overall survival of stage IA-IIB cervical cancer patients after complete resection with pelvic lymphadenectomy. Materials and Methods: The nomogram was based on a retrospective study of 1,563 stage IA-IIB cervical cancer patients who underwent complete resection and lymphadenectomy from 2002 to 2008. The nomogram was constructed based on multivariate analysis using Cox proportional hazards regression. The accuracy and discriminative ability of the nomogram were measured by the concordance index (C-index) and a calibration curve. Results: Multivariate analysis identified lymph node metastasis (LNM), lymph-vascular space invasion (LVSI), stromal invasion, parametrial invasion, tumor diameter, and histology as independent prognostic factors associated with cervical cancer survival. These factors were selected for construction of the nomogram. The C-index of the nomogram was 0.71 (95% CI, 0.65 to 0.77), and calibration of the nomogram showed good agreement between the 5-year predicted survival and the actual observations. Conclusions: We developed a nomogram predicting the 5-year overall survival of surgically treated stage IA-IIB cervical cancer patients. The comprehensive information provided by this nomogram could offer further insight into personalized therapy selection.
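As a rough sketch of how a nomogram turns Cox regression effects into points, the snippet below uses hypothetical coefficients (not the paper's fitted values) for the prognostic factors named in the abstract: each factor's effect is rescaled so that the strongest factor contributes 100 points, and a patient's total points would then be mapped to a predicted 5-year survival via the calibration curve.

```python
# Hypothetical Cox regression coefficients for the abstract's prognostic
# factors (illustrative values only, not from the paper).
coefs = {"LNM": 0.9, "LVSI": 0.5, "deep_stromal_invasion": 0.6,
         "parametrial_invasion": 0.7, "large_tumor": 0.4,
         "adeno_histology": 0.3}

max_effect = max(coefs.values())

def points(factor, present):
    """Nomogram points: each factor's effect scaled so the strongest
    factor contributes 100 points when present."""
    return 100.0 * coefs[factor] * (1 if present else 0) / max_effect

def total_points(profile):
    """Sum the points of all present factors for one patient."""
    return sum(points(f, v) for f, v in profile.items())

patient = {"LNM": 1, "LVSI": 1, "deep_stromal_invasion": 0,
           "parametrial_invasion": 0, "large_tumor": 1,
           "adeno_histology": 0}
print(total_points(patient))  # 100 + 55.6 + 44.4 ≈ 200 points
```

Continuous factors such as tumor diameter would contribute points proportional to their value rather than a 0/1 indicator; the scaling principle is the same.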

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention from the machine learning and artificial intelligence fields because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and support vector machines (SVM). Among those studies, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown performance gains as remarkable as those of DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the multiple classifiers of an ensemble are highly correlated, resulting in a multicollinearity problem that leads to performance degradation of the ensemble. They have also proposed differentiated learning strategies to cope with this performance degradation problem. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms but does not yield remarkable performance improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers. Therefore, an ensemble of unstable learning algorithms can guarantee some diversity among the classifiers. 
On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, and thus the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared the performance of bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT. Meanwhile, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factor (VIF) analysis empirically proves that the performance degradation of the ensemble is due to the multicollinearity problem, and it also proposes that optimization of the ensemble is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of classifiers in the coverage optimization process. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We used Microsoft Excel and the GA software package Evolver. 
Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that takes the correlations within the ensemble into account. The classifiers with a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and thereby CO-NN has shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
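The GA-based coverage optimization can be sketched as follows. This is a toy Python example, not CO-NN itself: each candidate sub-ensemble is a binary chromosome, and majority-vote validation accuracy serves as the fitness; the VIF constraint is omitted for brevity, and the base classifier predictions are invented to mimic a few highly correlated members.

```python
import random

rng = random.Random(42)

# Toy stand-in for an NN ensemble: each row holds one base classifier's
# predictions on a validation set. Classifiers 0-2 are nearly identical
# (highly correlated); 3-5 make different mistakes.
preds = [
    [1, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 0, 1, 0],   # clone of 0
    [1, 1, 0, 0, 1, 0, 1, 1],   # near-clone of 0
    [1, 0, 0, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 0, 0, 0, 1, 0],
]
truth = [1, 1, 0, 0, 1, 0, 1, 0]

def fitness(chrom):
    """Validation accuracy of the majority vote over selected classifiers."""
    members = [preds[i] for i, bit in enumerate(chrom) if bit]
    if not members:
        return 0.0
    hits = 0
    for j, t in enumerate(truth):
        vote = sum(m[j] for m in members) / len(members)
        hits += (1 if vote > 0.5 else 0) == t
    return hits / len(truth)

def ga(pop_size=20, gens=30, p_mut=0.1):
    """Binary-chromosome GA: elitism, truncation selection,
    one-point crossover, bit-flip mutation."""
    pop = [[rng.randint(0, 1) for _ in preds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                      # keep the two elites
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # parents from top half
            cut = rng.randrange(1, len(preds))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

In CO-NN the fitness would additionally reject chromosomes whose selected members exceed the VIF threshold, so that the surviving sub-ensemble is both accurate and decorrelated.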

Operative Treatment of Congenitally Corrected Transposition of the Great Arteries (CCTGA) (교정형 대혈관 전위증의 수술적 치료)

  • 이정렬;조광리;김용진;노준량;서결필
    • Journal of Chest Surgery / v.32 no.7 / pp.621-627 / 1999
  • Background: Sixty-five patients with congenitally corrected transposition of the great arteries (CCTGA) indicated for biventricular repair were operated on between 1984 and September 1998. A comparison between the results of the conventional (classic) connection (LV-PA) and the anatomic repair was done. Material and Method: A retrospective review was carried out based on the medical records of the patients. Operative procedures, complications, and long-term results according to the combined anomalies were analyzed. Result: Mean age was 5.5±4.8 years (range, 2 months to 18 years). Thirty-nine were male and 26 were female. Situs solitus {S,L,L} was found in 53 and situs inversus {I,D,D} in 12. There was no left ventricular outflow tract obstruction (LVOTO) in 13 (20%) cases. The LVOTO resulted from pulmonary stenosis (PS) in 26 (40%) patients and from pulmonary atresia (PA) in 26 (40%) patients. Twenty-five (38.5%) patients had preoperative tricuspid valve regurgitation (TR) greater than mild degree. Twenty-two patients had previously undergone 24 systemic-pulmonary shunts. In the 13 patients without LVOTO, 7 simple closures of VSD or ASD, 3 tricuspid valve replacements (TVR), and 3 anatomic corrections (3 double switch operations: 1 Senning+Rastelli, 1 Senning+REV-type, and 1 Senning+arterial switch operation) were performed. As to the 26 patients with CCTGA+VSD or ASD+LVOTO (PS), 24 classic repairs and 2 double switch operations (1 Senning+Rastelli, 1 Mustard+REV-type) were done. In the 26 cases with CCTGA+VSD+LVOTO (PA), 19 classic repairs (18 Rastelli, 1 REV-type) and 7 double switch operations (7 Senning+Rastelli) were done. The degree of tricuspid regurgitation increased during the follow-up period from 1.3±1.4 to 2.2±1.0 in the classic repair group (p<0.05), but not in the double switch group. Two patients had complete AV block preoperatively, and an additional 7 (10.8%) had newly developed complete AV block after the operation. 
Other complications were recurrent LVOTO (10), thromboembolism (4), persistent chest tube drainage over 2 weeks (4), chylothorax (3), bleeding (3), acute renal failure (2), and mediastinitis (2). Mean follow-up was 54±49 months (0-177 months). Thirteen patients died after the operation (operative mortality rate: 20.0% (13/65)), and there were 3 additional deaths during the follow-up period (overall mortality: 24.6% (16/65)). The operative mortality in patients who underwent anatomic repair was 33.3% (4/12). The actuarial survival rates at 1, 5, and 10 years were 75.0±5.6%, 75.0±5.6%, and 69.2±7.6%. Common causes of death were low cardiac output syndrome (8) and heart failure from TR (5). Conclusion: Although our study could not demonstrate the superiority of either the classic or the anatomic repair, we found that the anatomic repair has the merit of preventing the deterioration of tricuspid valve regurgitation. Meticulous selection of patients and longer follow-up are mandatory to establish the selective advantages of both strategies.


Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.221-241 / 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized in that the input image is divided into small sections to recognize partial features, which are then combined to recognize the whole. Deep learning technologies are expected to bring a lot of changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proved, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to diagnose the possibility of solving business problems using deep learning technologies, based on the case of online shopping companies, which have big data, relatively identifiable customer behavior, and high utilization value. Especially in online shopping companies, the competitive environment is rapidly changing and becoming more intense; analysis of customer behavior for maximizing profit is therefore becoming more and more important for them. In this study, we propose a 'CNN model of Heterogeneous Information Integration' using CNN as a way to improve the predictive power of customer behavior in online shopping enterprises. 
The proposed model learns from a convolutional neural network combined with a multi-layer perceptron structure by integrating structured and unstructured information. To optimize its performance, the model relies on three architectural components: 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design'; we evaluate the performance of each architecture and confirm the proposed model based on the results. In addition, the target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using the actual transaction, customer, and VOC data of a specific online shopping company in Korea. Data extraction criteria were defined for 47,947 customers who registered at least one VOC in January 2011 (1 month). The customer profiles of these customers, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month are used. The experiment of this study is divided into two stages. In the first stage, we evaluate the three architectures that affect the performance of the proposed model and select optimal parameters. In the second, we evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines both structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). Therefore, it is significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to solve business problems as well as image recognition and natural language processing problems. 
The experiments confirm that CNN is effective at understanding and interpreting the contextual meaning of text VOC data. It is also significant that this empirical research, based on the actual data of an e-commerce company, can extract very meaningful information for customer behavior prediction from VOC data written in text format directly by customers. Finally, through the various experiments, the proposed model provides useful information for future research related to parameter selection and performance.
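The text branch of such a model can be sketched as a 1D convolution over word embeddings followed by max-over-time pooling, with the pooled text features concatenated to the structured features before the final layers. This is a pure-Python toy sketch with random (untrained) embeddings and filters, not the authors' architecture; real systems would learn these weights.

```python
import random

rng = random.Random(0)

# Hypothetical setup: a VOC text as a sequence of 10 word-embedding
# vectors (4-d, random here), plus structured features such as
# purchase count and refund rate.
seq = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
structured = [3.0, 0.1]

def conv1d(seq, kernel):
    """Slide a (width x dim) filter over the word sequence (valid padding)."""
    w = len(kernel)
    return [sum(kernel[i][d] * seq[t + i][d]
                for i in range(w) for d in range(len(seq[0])))
            for t in range(len(seq) - w + 1)]

def relu(x):
    return max(0.0, x)

# Five filters of width 3, also random placeholders.
kernels = [[[rng.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
           for _ in range(5)]

# Convolution -> ReLU -> max-over-time pooling: one value per filter,
# then concatenate with the structured features for the final MLP layers.
text_features = [max(relu(v) for v in conv1d(seq, k)) for k in kernels]
fused = text_features + structured
print(len(fused))  # 5 pooled text features + 2 structured features = 7
```

The fused vector is what the 'heterogeneous information integration' step would feed into the multi-layer perceptron for each of the six binary targets.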

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it only focused on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid a total collapse in a single moment, as in the 'Lehman Brothers case' of the global financial crisis. The key variables driving corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, the shifting importance of predictive variables was also found through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series analysis algorithm reflecting dynamic change. Based on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. 
To construct a consistent bankruptcy model across the flow of time, we first train a deep learning time series model using the data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression analysis to the existing variable selection methods (multivariate discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. In the case of corporate data, there are limitations of nonlinear variables, multicollinearity among variables, and lack of data. 
The logit model handles nonlinearity, the Lasso regression model solves the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is more effective in predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists who begin a study combining financial data and deep learning time series algorithms.
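A minimal sketch of the time-series idea: a plain (Elman) RNN steps over a company's yearly financial ratios, and a logistic output turns the final hidden state into a default probability. All weights and inputs below are hypothetical placeholders, not a trained model; an LSTM would add gating on top of the same loop.

```python
import math

def rnn_default_prob(series, Wx, Wh, b, w_out, b_out):
    """Plain (Elman) RNN over a yearly sequence of financial ratios,
    followed by a logistic output for default probability."""
    h = [0.0] * len(Wh)
    for x in series:  # one step per fiscal year
        # The comprehension reads the previous h before rebinding it.
        h = [math.tanh(sum(Wx[i][j] * x[j] for j in range(len(x)))
                       + sum(Wh[i][j] * h[j] for j in range(len(h)))
                       + b[i])
             for i in range(len(Wh))]
    z = sum(w * hi for w, hi in zip(w_out, h)) + b_out
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights (a real model would learn them from the
# 2000~2008 data): 2 input ratios, 3 hidden units.
Wx = [[0.5, -0.3], [0.1, 0.8], [-0.4, 0.2]]
Wh = [[0.2, 0.0, 0.1], [0.0, 0.3, -0.1], [0.1, -0.2, 0.2]]
b = [0.0, 0.1, -0.1]
w_out = [1.0, -0.5, 0.8]
b_out = -0.2

# Three years of (debt_ratio, return_on_assets), already scaled.
series = [[0.6, 0.05], [0.8, -0.02], [1.1, -0.10]]
p = rnn_default_prob(series, Wx, Wh, b, w_out, b_out)
print(round(p, 3))
```

Because the hidden state carries information forward across years, a deteriorating trend in the ratios can influence the prediction in a way a static logit or discriminant model, which sees only one year at a time, cannot.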

Virtuous Concordance of Yin and Yang and Tai-Ji in Joseon art: Focusing on Daesoon Thought (조선 미술에 내재한 음양합덕과 태극 - 대순사상을 중심으로 -)

  • Hwang, Eui-pil
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.35
    • /
    • pp.217-253
    • /
    • 2020
  • This study analyzes the principles of the 'Earthly Paradise' (仙境, the realm of immortals), the 'Virtuous Concordance of Yin and Yang' (陰陽合德), and the 'Reordering Works of Heaven and Earth' (天地公事) in combination with Joseon art. It aims to discover the context in which the concept of Tai-Ji in 'Daesoon Truth' penetrates deeply into Joseon art, revealing how 'Daesoon Thought' is embedded in the lives and customs of the Korean people. The study also reviews the sentiments and intellectual traditions of the Korean people on the basis of 'Daesoon Thought' and creative works. Moreover, 'Daesoon Thought' brings all of this to the forefront of academics and art at the cosmological level. The purpose of this research is to reveal the core of 'Daesoon Thought' vividly as a visual image; in this way, the combination of 'Daesoon Thought' and Joseon art secures both data and reality at the same time. As part of this, the study treats the world of 'Daesoon Thought' as a cosmological Tai-Ji principle revealed in Joseon art, which is analyzed and examined from the viewpoint of art philosophy. First, as a way of making use of 'Daesoon Thought,' 'Daesoon Truth' was developed and applied directly to Joseon art, revealing reflections of Korean life within 'Daesoon Thought.' In this regard, the Joseon artworks selected for this study are creative works that have been deeply ingrained in people's lives. For example, because 'Daesoon Thought' appears to center on the genre painting, folk painting, and landscape painting of the Joseon Dynasty, attention is given to verifying these cases. This study analyzes 'Daesoon Thought,' as borrowed through Joseon art, from the perspective of art philosophy. Accordingly, it seeks examples of the 'Virtuous Concordance of Yin and Yang' and Tai-Ji in Joseon art, which became a basis by which 'Daesoon Thought' was communicated to the people.
In addition, appreciating 'Daesoon Thought' in Joseon art is an opportunity to examine vividly not only the Joseon art style but also the life, consciousness, and mental world of the Korean people. As part of this, Chapter 2 presents findings on the formation of 'Daesoon Thought.' Chapter 3 finds support for the structures of the ideas of the 'Earthly Paradise' and the 'Virtuous Concordance of Yin and Yang,' and shows that the 'Reordering Works of Heaven and Earth' and Tai-Ji appear in depictions of metaphysical laws; to this end, the laws of the 'Reordering Works of Heaven and Earth' are combined with the structure of Tai-Ji. Chapter 4 analyzes 'Daesoon Thought' in the life and work of the Korean people at the level of the convergence of 'Daesoon Thought' and Joseon art. The analysis of works provides a glimpse into the precise identity of 'Daesoon Thought' as observable in Joseon art and is useful for generating empirical data. For example, works such as Tai-Jido, Ssanggeum Daemu, Jusachaebujeokdo, Hwajogi Myeonghwabundo, and Gyeongdodo are objects that inspired descriptions of the 'Earthly Paradise,' the 'Virtuous Concordance of Yin and Yang,' and the 'Reordering Works of Heaven and Earth.' As a result, the Tai-Ji that appears in 'Daesoon Thought' attests to the standing of the people in Joseon art. Taken together, the Tai-Ji idea pursued by Daesoon Thought is a providence that follows change as all things are mutually created. In other words, Tai-Ji ideology sits profoundly in the lives of the Korean people and responds mutually to the providence that converges with 'Mutual Beneficence.'

A Study on Forestation for Landscaping around the Lakes in the Upper Watersheds of North Han River (북한강상류수계(北漢江上流水系)의 호수단지주변삼림(湖水団地周辺森林)의 풍경적시업(風景的施業)에 관(関)한 연구(硏究))

  • Ho, Ul Yeong
    • Journal of Korean Society of Forest Science
    • /
    • v.54 no.1
    • /
    • pp.1-24
    • /
    • 1981
  • Kangweon-Do is rich in sightseeing resources. There are three sightseeing areas: first, the mountain area including Seolak and Ohdae National Parks and Chiak Provincial Park; second, the eastern coastal area; and third, the lake area including the watersheds of the North Han River. In this paper, several forestation methods were studied for landscaping the North Han River watersheds centering around Chunchon. In the Chunchon lake complex there are four lakes: Uiam, Chunchon, Soyang, and Paro, from downstream to upstream. The total surface area of these four lakes is $14.4km^2$, their total pondage 4,155 million $m^3$, their total electric power generation 410 thousand kW, and the total forest area bordering on them $1,208km^2$. The bordering forest consists of planned management forest ($745km^2$) and non-planned management forest ($463km^2$). The latter is divided into a green belt zone, a natural conservation area, and protection forest. The forest in the green belt amounts to $177km^2$ and lies within a 10km radius of Chunchon. The forest in the natural conservation area amounts to $165km^2$ and is established within a 2km sight range of the Soyang lake sides. The protection forest surrounding the lakes is $121km^2$. There are many scenic places, recreation gardens, cultural assets, and ruins in this lake complex, which are tourist resources as good as the lakes and forest. The forest encircling the lakes has a poor average growing stock of $15m^3/ha$, because 70% of it consists of young plantations of age class 1 to 2. The ratio of needle-leaved forest, broad-leaved forest, and mixed forest is 35:37:28. From the standpoint of ownership, the forest consists of national forest (36%), provincial forest (14%), Gun forest (5%), and private forest (45%). The greater part of the forest soil, originating from granite and gneiss, is highly liable to weathering.
Because the surface soil is mostly sterile, fertilization to improve soil quality is strongly urged. Considering the above, the following forestation methods are suggested for improving the landscape of the North Han River watersheds: 1) Mature-stage forest should be induced by means of fertilizing and tending, as the forest in this area is young plantation on poor soil. 2) Bare land should be afforested by planting rapid-growing species such as rigida pine, alder, etc. 3) Bare land in canyons with moderate moisture and comparatively rich soil should be planted with Korean pine, larch, or fir. 4) Japanese-pine stands should be changed gradually, from ravine to top, into Korean pine, fir, spruce, or hemlock stands, because the Japanese pine has poor water-conservation capacity and great liability to the pine gall midge. 5) Present hardwood forest, consisting of miscellaneous trees of comparatively low value in wood quality and scenery, should be changed into the more valuable oak, maple, Fraxinus rhynchophylla, birch, or Juglans stands. 6) On the mountain foot within sight range, stands should be established with such species as cherry, weeping willow, white poplar, machilus, maidenhair tree, juniper, chestnut, or apricot. 7) The regeneration of some broad-leaved forests should be guided toward the middle forest type, leading to a harmonious arrangement of two-storied forest and coppice. 8) For the preservation of scenery, the reproduction of softwood forest should be done under the selection method or the shelterwood system. 9) Mixed forest should be regenerated under the middle forest system, with upper needle-leaved forest and lower broad-leaved forest. In brief, nature's mysteriousness should be conserved by combining the womanly elegance of the lakes and the manly grandeur of the forest.