• Title/Summary/Keyword: Hybrid systems


The nanoleakage patterns of experimental hydrophobic adhesives after load cycling (Load cycling에 따른 소수성 실험용 상아질 접착제의 nanoleakage 양상)

  • Sohn, Suh-Jin;Chang, Ju-Hae;Kang, Suk-Ho;Yoo, Hyun-Mi;Cho, Byeong-Hoon;Son, Ho-Hyun
    • Restorative Dentistry and Endodontics
    • /
    • v.33 no.1
    • /
    • pp.9-19
    • /
    • 2008
  • The purpose of this study was (1) to compare the nanoleakage patterns of a conventional 3-step etch-and-rinse adhesive system and two experimental hydrophobic adhesive systems, and (2) to investigate changes in the nanoleakage patterns after load cycling. Two hydrophobic experimental adhesives, an ethanol-containing adhesive (EA) and a methanol-containing adhesive (MA), were prepared. Thirty extracted human molars were embedded in resin blocks and the occlusal thirds of the crowns were removed. The polished dentin surfaces were etched with a 35% phosphoric acid etching gel and rinsed with water. Scotchbond Multi-Purpose (MP), EA, and MA were used for the bonding procedures. Z-250 composite resin was built up on the adhesive-treated surfaces. Five teeth from each dentin adhesive group were subjected to mechanical load cycling. The teeth were sectioned into 2 mm thick slabs and stained with 50% ammoniacal silver nitrate. Ten specimens from each group were examined under a scanning electron microscope in backscattered electron mode. All photographs were analyzed using image analysis software. Three regions of each specimen were used to evaluate silver uptake within the hybrid layer. The area of silver deposition was calculated and expressed as a gray value. Data were statistically analyzed by two-way ANOVA, and post-hoc multiple comparisons were performed with Scheffé's test. Silver particles were observed in all groups; however, they were more sparsely distributed in the EA and MA groups than in the MP group (p < .0001). There were no changes in nanoleakage patterns after load cycling.
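To make the statistical design above concrete, here is a minimal sketch of the reported analysis (two-way ANOVA on silver-uptake gray values with adhesive and load cycling as factors, followed by Scheffé post-hoc comparisons) using statsmodels and scikit-posthocs; the file name and column names are hypothetical, not from the paper.

```python
# Hedged sketch of the abstract's analysis; the data layout is assumed.
import pandas as pd
import statsmodels.api as sm
import scikit_posthocs as sp
from statsmodels.formula.api import ols

# hypothetical columns: adhesive (MP/EA/MA), load_cycled (yes/no), gray_value
df = pd.read_csv("nanoleakage_gray_values.csv")

# two-way ANOVA with interaction, as reported in the abstract
model = ols("gray_value ~ C(adhesive) * C(load_cycled)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Scheffé post-hoc multiple comparisons between adhesive groups
print(sp.posthoc_scheffe(df, val_col="gray_value", group_col="adhesive"))
```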

Studies on the Physiological Root Activity and Its Related Characteristics of Rice Varieties for Application to Rice Breeding (수도근의 생리적 활력 및 그 관련형질의 품종차이와 육종상의 이용에 관한 연구)

  • Rae-Kyung Park
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.18
    • /
    • pp.28-53
    • /
    • 1975
  • Experiments on the physiological root activity of rice varieties and its related characteristics were carried out to obtain basic information for application to a rice breeding program. A significant positive correlation was found not only among the various characteristics related to the conducting and ventilating systems that connect the top and root of the rice plant, but also between these characteristics and root activity. A significant difference in physiological root activity was also recognized among varieties, and between groups of the seven recognized rice varieties differing in their origin. Varieties with higher root activity (root activity indices) after the ear formation stage tended to retain more green lower leaves and consequently produced higher grain yields. Therefore, it may be possible to diagnose root activity indirectly by examining the number of green leaves of the rice plant at a later growth stage when breeders select parent material for crossing or hybrid lines in pedigree nurseries.


Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.127-138
    • /
    • 2011
  • The core service of most research portal sites is providing research papers that match researchers' interests. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as the title, authors, and keywords. Unfortunately, most users of this service are not acquainted with concrete bibliographic information, which means they inevitably experience repeated trial and error in keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know appropriate keywords. In this case, a user must perform iterative searches as follows: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) perform another search with the acquired keywords. This usage pattern implies that the service quality and user satisfaction of a portal site are strongly affected by its keyword management and searching mechanism. To overcome this inefficiency, some leading research portal sites have adopted association rule mining-based keyword recommendation services similar to the product recommendations of online shopping malls. However, keyword recommendation based only on association analysis has the limitation that it can show only simple, direct relationships between two keywords; association analysis by itself is unable to present the complex relationships among many keywords in adjacent research areas. To overcome this limitation, we propose a hybrid approach to establishing an association network among the keywords used in research papers. The keyword association network is established in the following phases: i) the set of keywords specified in a paper is regarded as a set of co-purchased items, ii) association analysis is performed on the keywords, and frequent keyword patterns satisfying predefined thresholds of confidence, support, and lift are extracted, and iii) the frequent keyword patterns are schematized as a network showing the core keywords of each research area and the connecting keywords among two or more research areas. To estimate the practical applicability of our approach, we performed a simple experiment with 600 keywords extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for association analysis and the R software for social network analysis. As the final outcome, we presented a network diagram and a cluster dendrogram for the keyword association network; the results are summarized in Section 4 of this paper. The main contributions of the proposed approach are as follows: i) the keyword network can provide an initial roadmap of a research area to researchers who are novices in the domain, ii) a researcher can grasp the distribution of keywords neighboring a certain keyword, and iii) researchers can get ideas for converging different research areas by observing connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary; for practical use, homonym, synonym, and multilingual problems should be resolved with a standard meta-dictionary. Additionally, clearer guidelines for clustering research areas and defining core and connecting keywords should be provided. Finally, intensive experiments on international as well as Korean research papers should be performed in further studies.
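Since the abstract walks through a concrete pipeline (keyword sets treated as market baskets, association analysis with support/confidence/lift thresholds, then schematizing frequent pairs as a network), a compact re-creation may help. The authors used SAS Enterprise Miner and R; the sketch below substitutes mlxtend and networkx, and the keyword lists and thresholds are illustrative assumptions.

```python
# Sketch of the keyword-network pipeline; data and thresholds are illustrative.
import networkx as nx
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

papers = [
    ["data mining", "association rule", "recommendation"],
    ["social network", "centrality", "recommendation"],
    ["data mining", "social network", "keyword"],
]  # each paper's keyword set plays the role of one "market basket"

te = TransactionEncoder()
basket = pd.DataFrame(te.fit(papers).transform(papers), columns=te.columns_)

frequent = apriori(basket, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="lift", min_threshold=1.0)
rules = rules[rules["confidence"] >= 0.5]  # support/confidence/lift screening

# Schematize frequent keyword pairs as a network: high-degree nodes act as
# "core" keywords, nodes bridging clusters act as "connecting" keywords.
g = nx.Graph()
for _, r in rules.iterrows():
    for a in r["antecedents"]:
        for c in r["consequents"]:
            g.add_edge(a, c, lift=r["lift"])

print(sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]))
print(nx.betweenness_centrality(g))  # proxy for connecting keywords
```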

Development trend of the mushroom industry (버섯 산업의 발달 동향)

  • Yoo, Young Bok;Oh, Min Ji;Oh, Youn Lee;Shin, Pyung Gyun;Jang, Kab Yeul;Kong, Won Sik
    • Journal of Mushroom
    • /
    • v.14 no.4
    • /
    • pp.142-154
    • /
    • 2016
  • Worldwide production of mushrooms has been increasing by 10-20% every year. Recently, Pleurotus eryngii and P. nebrodensis have become popular species for cultivation. In particular, China's production exceeded 8.7 million tons in 2002, accounting for 71.5% of total world output. A similar trend was observed in Korea. Two kinds of mushrooms, Gumji (金芝; Ganoderma) and Seoji, are described in the ancient book 'Samguksagi' (History of the Three Kingdoms, covering B.C. 57~A.D. 668; written by Bu Sik Kim in 1145 during the Goryeo dynasty). Many kinds of mushrooms are also described in more than 17 ancient books from the Joseon dynasty (1392~1910) in Korea. Approximately 200 commercial strains of 38 mushroom species have been developed and distributed to cultivators. The somatic hybrid variety of oyster mushroom, 'Wonhyeong-neutari', was developed by protoplast fusion and distributed to growers in 1989. The production of mushrooms as food was 199,829 metric tons in 2015, valued at 850 billion Korean won (one trillion won if mushroom factory products are included). In Korea, the major cultivated species are P. ostreatus, P. eryngii, Flammulina velutipes, Lentinula edodes, Agaricus bisporus, and Ganoderma lucidum, which account for 90% of total production. Since mushroom export was initiated in 1960, the export and import of mushrooms have increased in Korea. Technology was developed for liquid spawn production, and automatic cultivation systems reduced production costs, resulting in increased mushroom exports. However, some species continued to be imported owing to high domestic production costs. In academia, RDA scientists have conducted mushroom genome projects since 1997; one of the main outcomes is the whole-genome sequencing of Flammulina velutipes for molecular breeding. With regard to medicinal mushrooms, genome research on Cordyceps and its related species has been conducted for developing functional foods. Mushrooms contain various kinds of beneficial substances, and mushroom products, including pharmaceuticals, tonics, healthy beverages, functional biotransformants, and processed foods, have become available on the market. In addition, compost and feed can be made from mushroom substrates after harvest.

Development of Neural Network Based Cycle Length Design Model Minimizing Delay for Traffic Responsive Control (실시간 신호제어를 위한 신경망 적용 지체최소화 주기길이 설계모형 개발)

  • Lee, Jung-Youn;Kim, Jin-Tae;Chang, Myung-Soon
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.3 s.74
    • /
    • pp.145-157
    • /
    • 2004
  • The cycle length design model of the Korean traffic responsive signal control system varies the cycle length in real time in response to changes in traffic demand, using parameters specified by a system operator together with field information such as the degrees of saturation of through phases. Since no explicit guideline is provided to the system operator, the system tends to be ambiguous in terms of system optimization. In addition, the cycle lengths produced by the existing model have not yet been verified as comparable to those minimizing delay. This paper presents studies conducted (1) to find shortcomings in the existing model by comparing the cycle lengths it produces against those minimizing delay, and (2) to propose a new direction for designing a cycle length that minimizes delay and excludes such operator-oriented parameters. The study found that the cycle lengths from the existing model fail to minimize delay and lead to unsatisfactory intersection operating conditions when traffic volume is low, owing to the modified target volume-to-capacity ratio embedded in the model. Sixty-four different neural network based cycle length design models were developed from simulation data used as a surrogate for field data. The CORSIM optimal cycle lengths minimizing delay were found with the COST software developed for this study; COST searches for the CORSIM optimal cycle length with a heuristic searching method, a hybrid genetic algorithm. Among the 64 models, the one producing cycle lengths closest to the optimal was selected through statistical tests. A verification test showed that the best model designs cycle lengths in a pattern similar to those minimizing delay, and the cycle lengths from the proposed model are comparable to those from TRANSYT-7F.
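The abstract's core idea, searching for the delay-minimizing cycle length with a heuristic (a hybrid GA) rather than operator-set parameters, can be illustrated without CORSIM or COST. In the sketch below, a simplified two-term Webster-style delay formula stands in for the simulator, and all traffic numbers, GA settings, and the lost-time value are assumptions.

```python
# Hedged sketch: a tiny GA searching for the cycle length that minimizes a
# Webster-style delay stand-in. All numeric inputs are illustrative.
import random

LOST_TIME = 16.0                          # total lost time L per cycle (s), assumed
FLOW_RATIOS = [0.25, 0.20, 0.15, 0.10]    # y_i = q_i / s_i per phase, assumed
FLOWS = [900, 700, 500, 350]              # arrival flows q_i (veh/h), assumed

def webster_delay(cycle):
    """Flow-weighted average delay (s/veh), two-term Webster approximation."""
    Y = sum(FLOW_RATIOS)
    total, volume = 0.0, 0.0
    for y, q in zip(FLOW_RATIOS, FLOWS):
        lam = (cycle - LOST_TIME) * (y / Y) / cycle   # effective green ratio
        x = min(y / lam, 0.98)                        # degree of saturation, capped
        d = cycle * (1 - lam) ** 2 / (2 * (1 - lam * x)) \
            + x ** 2 / (2 * (q / 3600.0) * (1 - x))
        total += d * q
        volume += q
    return total / volume

def ga_search(pop=30, gens=60, lo=40, hi=180):
    """Tournament selection plus mutation over integer cycle lengths."""
    population = [random.randint(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = random.sample(population, 2)
            winner = a if webster_delay(a) < webster_delay(b) else b
            nxt.append(min(hi, max(lo, winner + random.randint(-5, 5))))
        population = nxt
    return min(population, key=webster_delay)

best = ga_search()
print(best, round(webster_delay(best), 1))
```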

MICROLEAKAGE OF MICROFILL AND FLOWABLE COMPOSITE RESINS IN CLASS V CAVITY AFTER LOAD CYCLING (Flowable 및 microfill 복합레진으로 충전된 제 5급와동에서 load cycling 전,후의 미세변연누출 비교)

  • Kang, Suk-Ho;Kim, Oh-Young;Oh, Myung-Hwan;Cho, Byeong-Hoon;Um, Chung-Moon;Kwon, Hyuk-Choon;Son, Ho-Hyun
    • Restorative Dentistry and Endodontics
    • /
    • v.27 no.2
    • /
    • pp.142-149
    • /
    • 2002
  • Low-viscosity composite resins may produce better-sealed margins than stiffer formulations (Kemp-Scholte and Davidson, 1988; Crim, 1989). Flowable composites have been recommended for use in Class V cavities, but this is controversial because of their high shrinkage rates. On the other hand, in a study comparing elastic moduli and leakage, the microfills showed the least leakage (Rundle et al., 1997). Furthermore, in the 1996 survey of the Reality Editorial Team, microfills were the clear choice for abfraction lesions. The purpose of this study was to evaluate the microleakage of six composite resins (two hybrids, two microfills, and two flowable composites) with and without load cycling. Notch-shaped Class V cavities with cementum margins were prepared on the buccal surfaces of 180 extracted human upper premolars. The teeth were randomly divided into a non-load-cycling group (group 1) and a load-cycling group (group 2) of 90 teeth each, and the teeth of each group were randomly divided into six subgroups of 15 samples. All preparations were etched and Single Bond was applied. Preparations were restored with the following materials (n = 15): hybrid composite resins [Z250 (3M Dental Products Inc., St. Paul, USA), Denfil (Vericom, Ahnyang, Korea)], microfills [Heliomolar RO (Vivadent, Schaan, Liechtenstein), Micronew (Bisco Inc., Schaumburg, IL, USA)], and flowable composites [AeliteFlo (Bisco Inc., Schaumburg, IL, USA), Revolution (Kerr Corp., Orange, CA, USA)]. Teeth of group 2 were subjected to occlusal load (100 N for 50,000 cycles) using a chewing simulator (MTS 858 Mini Bionix II system, MTS Systems Corp., Minn., USA). All samples were coated with nail polish to 1 mm short of the restoration, placed in 2% methylene blue for 24 hours, and sectioned with a diamond wheel. Enamel and dentin/cementum margins were analyzed for microleakage on a scale of 0 (no leakage) to 3 (3/3 of the wall). Results were statistically analyzed with the Kruskal-Wallis one-way analysis, the Mann-Whitney U-test, and the Student-Newman-Keuls method (p = 0.05). Results: 1. There was significantly less microleakage in enamel margins than in dentinal margins in all groups (p < 0.05). 2. There was no significant difference among the six composite resins in the enamel margins of group 1. 3. In the dentin margins of group 1, the flowable composites showed more microleakage than the others, but the differences were not significant. 4. There was no significant difference among the six composite resins in the enamel margins of group 2. 5. In the dentin margins of group 2, the microleakage ranking was R > A = H = M > D > Z, but the differences were not significant. 6. In enamel margins, load cycling did not affect marginal microleakage to a significant degree. 7. In dentin margins, load cycling affected marginal microleakage only for Revolution (p < 0.05).
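A minimal sketch of the reported nonparametric analysis (Kruskal-Wallis across the six materials, Mann-Whitney U for a pairwise follow-up) with SciPy; all score values below are hypothetical illustrations, not the study's data.

```python
# Hedged sketch of the abstract's statistics; scores are made-up examples.
from scipy.stats import kruskal, mannwhitneyu

scores = {   # ordinal leakage scores (0-3) per material, assumed values
    "Z250": [0, 1, 0, 1, 1], "Denfil": [1, 1, 2, 1, 0],
    "Heliomolar": [1, 2, 1, 1, 2], "Micronew": [1, 1, 2, 2, 1],
    "AeliteFlo": [2, 2, 1, 2, 3], "Revolution": [2, 3, 2, 3, 2],
}

h, p = kruskal(*scores.values())      # omnibus test across the six materials
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

if p < 0.05:                          # pairwise follow-up, as in the abstract
    u, p2 = mannwhitneyu(scores["Revolution"], scores["Z250"])
    print(f"Revolution vs Z250: U={u}, p={p2:.4f}")
```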

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the multiple classifiers of an ensemble are highly correlated with one another, resulting in a multicollinearity problem that degrades the ensemble's performance; these works have also proposed differentiated learning strategies to cope with the problem. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but does not yield remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers. On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared the performance of traditional prediction algorithms such as NN, DT, and SVM in bankruptcy prediction for Korean firms. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically demonstrates that the ensemble's performance degradation is due to multicollinearity, and it proposes that ensemble optimization is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve NN ensemble performance. Coverage optimization is a technique for choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to handle the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We used Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction have shown that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with the correlations of the ensemble in mind: classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby showed higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. Further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
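The coverage-optimization loop described above (one chromosome bit per classifier, accuracy-oriented fitness, VIF screening against multicollinearity) can be sketched compactly. The authors used Evolver with Excel; the version below substitutes a small hand-rolled GA with statsmodels' VIF, and all data, thresholds, and GA settings are illustrative assumptions.

```python
# Hedged sketch of CO-NN-style coverage optimization; everything synthetic.
import random
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                       # true labels, synthetic
# predicted class probabilities of 10 base NN classifiers (synthetic)
preds = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (200, 10)), 0, 1)

VIF_LIMIT = 10.0   # assumed multicollinearity cutoff

def fitness(mask):
    idx = [i for i, b in enumerate(mask) if b]
    if len(idx) < 2:
        return -1.0
    sub = preds[:, idx]
    # reject chromosomes whose members are too collinear
    vifs = [variance_inflation_factor(sub, j) for j in range(sub.shape[1])]
    if max(vifs) > VIF_LIMIT:
        return -1.0
    vote = (sub.mean(axis=1) > 0.5).astype(int)   # simple averaging combiner
    return (vote == y).mean()                     # sub-ensemble accuracy

def ga(pop=20, gens=40, n=10):
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:             # bit-flip mutation
                child[random.randrange(n)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = ga()
print(best, round(fitness(best), 3))
```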

Deep Learning OCR based document processing platform and its application in financial domain (금융 특화 딥러닝 광학문자인식 기반 문서 처리 플랫폼 구축 및 금융권 내 활용)

  • Dongyoung Kim;Doohyung Kim;Myungsung Kwak;Hyunsoo Son;Dongwon Sohn;Mingi Lim;Yeji Shin;Hyeonjung Lee;Chandong Park;Mihyang Kim;Dongwon Choi
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.143-174
    • /
    • 2023
  • With the development of deep learning technologies, Artificial Intelligence powered Optical Character Recognition (AI-OCR) has evolved to accurately read multiple languages from various forms of images. For the financial industry, where a large number of diverse documents are processed manually, the potential for using AI-OCR is great. In this study, we present the configuration and design of an AI-OCR system for use in the financial industry and discuss the platform construction with application cases. Since the use of financial domain data is prohibited under the Personal Information Protection Act, we developed a deep learning-based data generation approach and used it to train the AI-OCR models. The AI-OCR models are trained for image preprocessing, text recognition, and language processing, and are configured as a microservice-architected platform to process a broad variety of documents. We demonstrated the AI-OCR platform by applying it to the financial domain tasks of document sorting, document verification, and typing assistance. The demonstrations confirm improved work efficiency and convenience.
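As a rough illustration of the three-stage flow the abstract names (image preprocessing, text recognition, language processing), arranged as independently replaceable stages in the spirit of a microservice design, here is a placeholder sketch; none of the stage implementations reflect the authors' actual models.

```python
# Placeholder sketch of a staged document pipeline; all stages are stubs.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    image_bytes: bytes
    text: str = ""
    doc_type: str = "unknown"

def preprocess(doc: Document) -> Document:
    # stub: deskew/denoise the image before recognition
    return doc

def recognize(doc: Document) -> Document:
    # stub: an AI-OCR model would populate doc.text here
    doc.text = "<recognized text>"
    return doc

def postprocess(doc: Document) -> Document:
    # stub: language processing cleans OCR errors, classifies the document
    doc.doc_type = "financial_form"
    return doc

PIPELINE: List[Callable[[Document], Document]] = [preprocess, recognize, postprocess]

def run(doc: Document) -> Document:
    for stage in PIPELINE:    # each stage could be its own service endpoint
        doc = stage(doc)
    return doc

print(run(Document(image_bytes=b"")).doc_type)
```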

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model design studies for predicting cardiovascular disease and studies comparing disease prediction research results. In the foreign literature, studies predicting cardiovascular disease with data mining techniques were predominant. Domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected it as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used a data set from the 2012 Korea Health Panel. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalization, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 people. Three methods were used to select input variables for predicting hyperlipidemia. First, the stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group and analyzed, and six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at a significance level of 0.1. Second, C4.5 was used as a decision tree algorithm; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: the input variables selected by genetic algorithms for SVM consisted of six variables (age, marital status, education level, economic activity, smoking period, and physical activity status), and those selected for the artificial neural network consisted of three variables (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia patients, and compared classification performance using TP rate and precision. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and the accuracy of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected through the stepwise method was slightly higher than that of classification models using all variables. Third, the precision of the artificial neural network was higher than that of SVM when only the three variables selected by the decision tree were used as input. With the input variables selected through the genetic algorithm, the classification accuracy of SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, this study found that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of SVM and MLP as input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms known to have high accuracy. As a result, the classification accuracy of stacking as a meta-learner was higher than that of the other meta-learning algorithms, although the predictive performance of the proposed meta-learning algorithm is the same as that of the best-performing single model, SVM (88.6%). The limitations of this study are as follows. Although various variable selection methods were tried, most variables used in the study were categorical dummy variables; with a large number of categorical variables, the results may differ if continuous variables are used, because models such as decision trees can be better suited to categorical variables than general models such as neural networks. Despite these limitations, this study has significance in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously. The result of improving model accuracy by applying various variable selection techniques is meaningful, and the proposed model is expected to be effective for the prevention and management of hyperlipidemia.
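A minimal sketch of the stacking configuration the abstract reports as best (SVM and MLP base learners whose predicted outputs feed an SVM meta-classifier) using scikit-learn; since the Korea Health Panel data are not reproduced here, a synthetic binary task of the same sample size stands in.

```python
# Hedged sketch of the paper's best stacking setup on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in: 2,176 subjects, 6 selected input variables
X, y = make_classification(n_samples=2176, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
    ],
    final_estimator=SVC(),   # SVM as the meta classifier, as in the abstract
)
stack.fit(X_tr, y_tr)
print(f"hold-out accuracy: {stack.score(X_te, y_te):.3f}")
```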

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet, because of the different nature of their capital structure and debt-to-equity ratio, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place greater burdens on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial data have been studied for some time in various ways; however, these models are intended for companies in general, and they may not be appropriate for forecasting the bankruptcies of construction companies, which typically have disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. Given this unique capital structure, it can be difficult to apply a model used to judge the financial risk of companies in general to those in the construction industry. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe: a company in the "dangerous" category has a high likelihood of bankruptcy within two years, those in the "safe" category have a low likelihood of bankruptcy, and for companies in the "moderate" category the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as to whether they belong to the bankruptcy risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning-based bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups (large, medium, and small) based on capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
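To ground the two approaches the abstract contrasts, the sketch below pairs the classical Altman Z-score (the published 1968 coefficients and zone cutoffs) with a scikit-learn AdaBoost classifier; the financial ratios and training data are synthetic placeholders, not the study's firms.

```python
# Hedged sketch: Altman Z-score cutoffs plus an AdaBoost classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score; >2.99 safe, 1.81-2.99 grey zone, <1.81 distress."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

print(altman_z(0.1, 0.15, 0.08, 1.2, 1.5))   # illustrative ratios only

# AdaBoost on synthetic stand-in data for one capital-size group of firms
X, y = make_classification(n_samples=500, n_features=12, random_state=1)
clf = AdaBoostClassifier(n_estimators=200, random_state=1).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```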