• Title/Summary/Keyword: flow information


Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.131-145
    • /
    • 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the areas of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. Accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing fill rates. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate ATP quotation quite difficult. Various alternative models for an ATP system with time lags have been developed and evaluated, and in most cases they have assumed that the time lags are integer multiples of a unit time grid. In industry practice, however, integer time lags are very rare, so models developed using integer time lags only approximate real systems. The differences arising from this approximation frequently result in significant accuracy degradation. To introduce the ATP model with time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated the research most directly related to the topic of this paper.
They propose a modeling framework for a system with non-integer time lags and show how to apply the framework to a variety of systems, including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, modelers usually either rounded the lags to the nearest integers or subdivided the time grid so that the lags became integer multiples of the grid. But each approach has a critical weakness: the first either underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management. We focus on a globally networked system of a worldwide headquarters, distribution centers, and manufacturing facilities. We develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. An illustrative ATP module shows that the proposed system has a substantial effect in SCM. The system we consider is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers. For this system, we consider an ATP scheduling and capacity allocation problem. In this study, we propose a model for the ATP system in SCM using the dynamic production function and considering non-integer time lags. The model is developed under a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm.
We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose a heuristic procedure using an evolutionary system to solve it efficiently. This method allows the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real variables. The proposed regeneration procedures, which evaluate each infeasible chromosome, make the solutions converge to the optimum quickly.
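The abstract does not list the paper's actual operators, so the following is only a minimal sketch of the general idea: a real-valued chromosome allocates a shared capacity to demands, and a regeneration-style repair scales infeasible chromosomes back onto the capacity. The demand data, shortfall objective, and all function names are illustrative assumptions, not the paper's MIP model.

```python
import random

def repair(chrom, capacity):
    """Regeneration step (assumed form): scale an infeasible allocation back to capacity."""
    total = sum(chrom)
    if total > capacity:
        chrom = [g * capacity / total for g in chrom]
    return chrom

def fitness(chrom, demand):
    # Penalize unmet demand (unfulfilled ATP promises); lower is better.
    return sum(max(d - x, 0.0) for x, d in zip(chrom, demand))

def ga_allocate(demand, capacity, pop_size=30, gens=200, seed=0):
    rng = random.Random(seed)
    n = len(demand)
    pop = [repair([rng.uniform(0, max(demand)) for _ in range(n)], capacity)
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, demand))
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                   # arithmetic crossover keeps genes real-valued
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < 0.2:             # Gaussian mutation
                i = rng.randrange(n)
                child[i] = max(0.0, child[i] + rng.gauss(0, 1))
            children.append(repair(child, capacity))
        pop = parents + children
    return min(pop, key=lambda c: fitness(c, demand))

best = ga_allocate(demand=[10, 20, 15], capacity=40)
print(round(fitness(best, [10, 20, 15]), 2))
```

With total demand 45 against capacity 40, the shortfall can never fall below 5; the repair step keeps every chromosome feasible, which is what lets the population converge quickly.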

An Oceanic Current Map of the East Sea for Science Textbooks Based on Scientific Knowledge Acquired from Oceanic Measurements (해양관측을 통해 획득된 과학적 지식에 기반한 과학교과서 동해 해류도)

  • Park, Kyung-Ae;Park, Ji-Eun;Choi, Byoung-Ju;Byun, Do-Seong;Lee, Eun-Il
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.18 no.4
    • /
    • pp.234-265
    • /
    • 2013
  • Oceanic current maps in secondary school science and earth science textbooks have played an important role in piquing students' curiosity and interest in the ocean. Such maps can provide students with important opportunities to learn about oceanic currents relevant to abrupt climate change and global energy balance issues. Nevertheless, serious and diverse errors in these secondary school oceanic current maps have been discovered upon comparison with up-to-date scientific knowledge concerning oceanic currents. This study presents fundamental methods and strategies for constructing such maps error-free, through the unification of the diverse current maps currently in the textbooks. To do so, we analyzed the maps found in 27 different textbooks, compared them with up-to-date maps found in scientific journals, and developed a mapping technique for extracting digitized quantitative information on warm and cold currents in the East Sea. We devised analysis items for current visualization in relation to the branching features of the Tsushima Warm Current (TWC) in the Korea Strait. These analysis items include: its nearshore and offshore branches; the northern limit and distance from the coast of the East Korea Warm Current; outflow features of the TWC near the Tsugaru and Soya Straits and their returning currents; and flow patterns of the Liman Cold Current and the North Korea Cold Current. The first draft of the current map was constructed based upon the scientific knowledge and input of oceanographers drawing on oceanic in-situ measurements, and was corrected with the help of a questionnaire survey of the members of an oceanographic society.
In addition, diverse comments were collected from a special session of the 2013 spring meeting of the Korean Oceanographic Society to assist in the construction of an accurate current map of the East Sea, which was corrected repeatedly through in-depth discussions with oceanographers. Finally, we obtained constructive comments on and evaluations of the interim version of the current map from several well-known ocean current experts and incorporated their input to complete the map's final version. To avoid errors in the production of oceanic current maps in future textbooks, we provide the geolocation information (latitude and longitude) of the currents by digitizing the map. This study is expected to be the first step toward the completion of an oceanographic current map suitable for secondary school textbooks, and to encourage oceanographers to take more interest in ocean education.

International and domestic research trends in longitudinal connectivity evaluations of aquatic ecosystems, and the applicability analysis of fish-based models (수생태계 종적 연결성 평가를 위한 국내외 연구 현황 및 어류기반 종적 연속성 평가모델 적용성 분석)

  • Kim, Ji Yoon;Kim, Jai-Gu;Bae, Dae-Yeul;Kim, Hye-Jin;Kim, Jeong-Eun;Lee, Ho-Seong;Lim, Jun-Young;An, Kwang-Guk
    • Korean Journal of Environmental Biology
    • /
    • v.38 no.4
    • /
    • pp.634-649
    • /
    • 2020
  • Recently, stream longitudinal connectivity has become a topic of investigation due to frequent disconnections in aquatic ecosystems caused by the construction of small and medium-sized weirs and various artificial structures (fishways) that directly influence stream ecosystem health. In this study, international and domestic research trends in the longitudinal connectivity of aquatic ecosystems were evaluated, and the applicability of fish-based longitudinal connectivity models used in developed countries was analyzed. For these purposes, we analyzed the current status of research on longitudinal connectivity and structural problems, fish monitoring methodology, monitoring approaches, longitudinal disconnectivity of fish movement, and biodiversity. In addition, we analyzed the current status and some technical limitations of physical habitat suitability evaluation, ecology-based water flow, eco-hydrological modeling for fish habitat connectivity, and software development for agent-based models. Numerous references, data sets, and reports were examined to identify longitudinal stream connectivity evaluation models used worldwide in European and non-European countries. The international approaches to longitudinal connectivity evaluation were categorized into five types: 1) an approach integrating fish community and artificial structure surveys (two types of input variables), 2) field monitoring approaches, 3) a stream geomorphological approach, 4) an artificial structure-based DB analytical approach, and 5) other approaches. The overall evaluation of survey methodologies and applicability for longitudinal stream connectivity suggested that the ICE model (Information sur la Continuité Écologique) and the ICF model (Índex de Connectivitat Fluvial), widely used in European countries, were appropriate for longitudinal connectivity evaluations in Korean streams.
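The ICE and ICF models score individual barriers against published criteria. As a rough illustration of the underlying idea only (not the published formulas), the cumulative passability of a migration route through several weirs can be sketched as the product of per-barrier passability scores:

```python
def cumulative_passability(barriers):
    """Chance that a migrating fish passes every barrier in sequence,
    assuming independent per-barrier passability scores in [0, 1]."""
    p = 1.0
    for score in barriers:
        p *= score
    return p

# Hypothetical weir passability scores for one stream reach
route = [0.9, 0.6, 0.8]
print(round(cumulative_passability(route), 3))  # 0.432
```

The multiplicative form captures why even moderately passable barriers compound into severe longitudinal disconnection when several are stacked along one stream.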

A Proposal of Direction of Wind Ventilation Forest through Urban Condition Analysis - A Case Study of Pyeongtaek-si - (도시 여건 분석을 통한 바람길숲 조성방향 제시 - 평택시를 사례로 -)

  • SON, Jeong-Min;EUM, Jeong-Hee;SUNG, Uk-Je;BAEK, Jun-Beom;KIM, Ju-Eun;OH, Jeong-Hak
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.23 no.4
    • /
    • pp.101-119
    • /
    • 2020
  • Recently, as a way to reduce particulate matter and improve the thermal environment in cities, urban forests acting as wind ventilation corridors (wind ventilation forests) are being promoted nationwide. This study analyzed the conditions for creating a wind ventilation forest (areas vulnerable to particulate matter and poor thermal environment, the distribution of wind ventilation forests, and the characteristics of ventilation corridors) in Pyeongtaek-si, one of the target cities of the wind ventilation forest project. Based on the results, directions for developing the wind ventilation forest in Pyeongtaek-si were suggested. Deriving the areas vulnerable to particulate matter and poor thermal environment showed that urban areas in eastern Pyeongtaek-si were the most vulnerable. In particular, emissions were high from industrial complexes and roads, such as the Pyeongtaek thermal power plant, ports, and National Road No. 1. The wind ventilation forests in Pyeongtaek-si consisted of small-scale wind-generating, wind-spreading, and wind-connecting forests that were fragmented and disconnected. Overall, cold air generated from Mt. Mubong and other mountains flowed strongly into Pyeongtaek-si and moved in a northwesterly direction. Therefore, it is necessary to preserve and expand the wind-generating forests in Pyeongtaek-si over the long term, and it is important to create wind-spreading and wind-connecting forests so that cold air can flow into the vulnerable areas. In addition, in industrial complexes and along roads where particulate matter is generated, planting techniques that block particulate matter should be applied when creating wind-spreading forests, to prevent its spread to surrounding areas.
This study can be used not only as baseline data for the wind ventilation forest project in Pyeongtaek-si, but also as baseline data for urban forest creation and management.

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.49-71
    • /
    • 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed with their tasks by cross-referencing and sequentially exploring associated knowledge related to each other by certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on certain relationships between knowledge. A knowledge map, therefore, must be expressed in networked form by linking related knowledge based on certain types of relationships, and should be implemented by deploying technologies or tools specialized in defining and inferring them. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph database (Graph DB), which is known to be well suited to expressing and inferring the relationships between entities stored in a knowledge base. The procedures of the proposed methodology are modeling the graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various Graph DBs, Neo4j is used in this study for its credibility and applicability, demonstrated through a wide variety of application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented using the Graph DB, and a performance comparison test is performed by applying the previous research's data to check whether this study's knowledge map can yield the same level of performance as the previous one did. The previous research built a process-based knowledge map using ontology technology, which identifies links between related knowledge based on the sequences of tasks producing, or being activated by, knowledge.
In other words, since a task not only is activated by knowledge as an input but also produces knowledge as an output, input and output knowledge are linked as a flow by the task. Also, since a business process is composed of affiliated tasks that fulfill the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing the process. Therefore, using Neo4j, the processes, tasks, and knowledge under consideration, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified based on the task sequences. The knowledge network resulting from aggregating the identified knowledge links is a knowledge map equipped with the functionality of a knowledge graph, so its performance needs to be tested against the level of the previous research's validation results. The performance test examines two aspects, the correctness of the knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and it handled knowledge definition as well as knowledge relationship inference more efficiently. Furthermore, compared to the previous research's ontology-based approach, this study's Graph DB-based approach also proved more useful for intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones.
This study's artifacts can be applied to implement a user-friendly knowledge exploration function that reflects the user's cognitive process toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the discovery of new knowledge and relationships by inference. Moreover, this study has an immediate effect on implementing the networked knowledge map essential for contemporary users eagerly seeking the right knowledge to use.
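The task-mediated knowledge flow described above can be sketched as follows. The task and knowledge names are hypothetical, and the printed output merely mimics the shape of a Cypher relationship pattern rather than calling Neo4j:

```python
# Each task is activated by input knowledge and produces output knowledge.
# Hypothetical process data; names are illustrative only.
tasks = [
    {"name": "assess_claim", "inputs": ["claim_form"], "outputs": ["risk_report"]},
    {"name": "approve_claim", "inputs": ["risk_report"], "outputs": ["approval_record"]},
]

def infer_knowledge_links(tasks):
    """Link each input knowledge item to each output of the same task,
    mirroring the paper's task-mediated knowledge flow."""
    links = []
    for t in tasks:
        for src in t["inputs"]:
            for dst in t["outputs"]:
                links.append((src, t["name"], dst))
    return links

for src, task, dst in infer_knowledge_links(tasks):
    # Shaped like a Cypher pattern: (src)-[:FLOWS_VIA {task}]->(dst)
    print(f"({src})-[:FLOWS_VIA {{task: '{task}'}}]->({dst})")
```

Aggregating these links over all tasks in a process yields the networked knowledge map; in Neo4j itself, the same traversal would be expressed as a Cypher query over task and knowledge nodes.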

Evaluation of Space-based Wetland InSAR Observations with ALOS-2 ScanSAR Mode (습지대 변화 관측을 위한 ALOS-2 광대역 모드 적용 연구)

  • Hong, Sang-Hoon;Wdowinski, Shimon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_1
    • /
    • pp.447-460
    • /
    • 2022
  • Satellite synthetic aperture radar interferometry (InSAR) has been widely used to observe, very precisely, surface displacement caused by earthquakes, volcanoes, and subsidence. In wetlands where vegetation stands above the water surface, the InSAR technique makes it possible to create a water-level change map with high spatial resolution over a wide area. Currently, a number of imaging radar satellites are in operation, and most of them support a ScanSAR observation mode to gather information over a large area at once. The Cienaga Grande de Santa Marta (CGSM) wetland, located in northern Colombia, is a vast wetland developed along the Caribbean coast. The CGSM wetlands face serious environmental threats from human activities, such as reclamation for agricultural and residential purposes, as well as natural causes, such as sea-level rise owing to climate change. In recognition of the ecological importance of the CGSM wetlands, various restoration and protection plans have been conducted to conserve this invaluable environment. Monitoring water-level changes in wetlands is very important for understanding their hydrologic characteristics, and in-situ water-level gauge stations are usually used for this measurement. Although gauge stations provide water-level information with very good temporal resolution, their very sparse spatial coverage limits a full understanding of the flow pattern. In this study, we evaluate the L-band ALOS-2 PALSAR-2 ScanSAR mode for observing water-level change over a wide wetland area using the radar interferometric technique. To assess the quality of the interferometric products in terms of spatial resolution and coherence, we also utilized ALOS-2 PALSAR-2 high-resolution stripmap mode observations.
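As background to the interferometric measurement, an unwrapped line-of-sight phase change can be converted to a vertical water-level change under a double-bounce scattering assumption. The sketch below assumes an L-band wavelength of roughly 22.9 cm and a simple incidence-angle projection; it is not the paper's processing chain, and the sign convention depends on the processor.

```python
import math

WAVELENGTH_M = 0.229  # approximate ALOS-2 PALSAR-2 L-band wavelength (assumption)

def phase_to_water_level(delta_phase_rad, incidence_deg):
    """Convert an unwrapped interferometric phase change to a vertical
    water-level change magnitude, assuming double-bounce scattering."""
    los = WAVELENGTH_M * delta_phase_rad / (4 * math.pi)  # line-of-sight change
    return los / math.cos(math.radians(incidence_deg))    # project to vertical

# One full fringe (2*pi of phase) at a 35-degree incidence angle:
print(round(phase_to_water_level(2 * math.pi, 35.0), 4))
```

One fringe thus corresponds to about half a wavelength of line-of-sight change, which at L-band is roughly 11 cm; this relatively long wavelength is part of why L-band sensors are favored for vegetated wetlands.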

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah;Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.251-273
    • /
    • 2022
  • Recently, research on applying deep learning to text analysis has steadily continued. In particular, studies have actively sought to understand the meaning of words and to perform tasks such as summarization and sentiment classification through pre-trained language models that learn from large datasets. However, existing pre-trained language models show limitations in that they do not understand specific domains well. Therefore, in recent years, the flow of research has shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models allow the model to better understand the knowledge of a particular domain and show performance improvements on various tasks in the field. However, domain-specific further pre-training is expensive, since corpus data for the target domain must be acquired. Furthermore, many cases have been reported in which the performance improvement after further pre-training is insignificant in some domains. As such, it is difficult to commit to developing a domain-specific pre-trained language model when it is not clear whether performance will improve dramatically. In this paper, we present a way to proactively estimate the expected performance improvement from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved through further pre-training in each domain. We also developed and presented new indicators that estimate the specificity of a domain based on the normalized frequency of the keywords used in that domain. Finally, we conducted classification using a general pre-trained language model and domain-specific pre-trained language models for the three domains. As a result, we confirmed that the higher the domain specificity index, the greater the performance improvement achieved through further pre-training.
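The paper's specificity indicators are based on normalized keyword frequencies. As a hedged illustration of that idea (not the authors' exact formula), one can score how much a domain corpus's top keywords outweigh their normalized frequency in a general corpus:

```python
from collections import Counter

def domain_specificity(domain_tokens, general_tokens, top_k=3):
    """Illustrative index (not the paper's exact formula): for the domain's
    top-k keywords, average how strongly each word's normalized frequency
    dominates the same word's normalized frequency in a general corpus."""
    d, g = Counter(domain_tokens), Counter(general_tokens)
    nd, ng = sum(d.values()), sum(g.values())
    scores = []
    for word, cnt in d.most_common(top_k):
        f_dom = cnt / nd
        f_gen = g[word] / ng if ng else 0.0
        scores.append(f_dom / (f_dom + f_gen))  # 1.0 = domain-exclusive keyword
    return sum(scores) / len(scores)

# Tiny invented corpora for illustration
legal = "plaintiff motion court plaintiff ruling".split()
news = "weather market court sports election".split()
print(round(domain_specificity(legal, news), 3))
```

A score near 1.0 means the domain's characteristic vocabulary barely appears in general text, which is exactly the situation where further pre-training would be expected to help most.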

Management Planning of Wind Corridor based on Mountain for Improving Urban Climate Environment - A Case Study of the Nakdong Jeongmaek - (도시환경개선을 위한 산림 기반 바람길 관리 계획 - 낙동정맥을 사례로 -)

  • Uk-Je SUNG;Jeong-Min SON;Jeong-Hee EUM;Jin-Kyu MIN
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.26 no.1
    • /
    • pp.21-40
    • /
    • 2023
  • This study analyzed the cold-air characteristics of the Nakdong Jeongmaek, whose terrain is advantageous for the formation of cold air that can flow into cities, in order to suggest wind ventilation corridor plans, which have recently attracted increasing interest as a way to improve the urban thermal environment. In addition, based on watershed analysis, specific cold-air watershed areas were delineated and management plans were suggested to expand the cold-air function of the Nakdong Jeongmaek. The analysis showed that cold air was strongly generated in the northern forest of the Jeongmaek and flowed into nearby cities along the valley topography. On average, the speed of cold air was high in cities located to the east of the Jeongmaek, while the height of the cold-air layer was high in cities located to the west. By synthesizing these cold-air characteristics and the watershed analysis results, the cold-air watershed area was classified into eight zones, and plans were proposed to preserve and strengthen the temperature-reduction function of the Jeongmaek by designating the zones as 'Conservation area of Cold-air', 'Management area of Cold-air', and 'Intensive management area of Cold-air'. In addition, to verify the temperature reduction by cold air, the nighttime temperature reduction effect was compared with the cold-air analysis using weather observation data. As a result, the temperature reduction by cold air was confirmed, as the nighttime temperature decrease was large at observation stations with strong cold-air characteristics. This study is expected to be used as basic data in establishing a systematic preservation and management plan to expand the cold-air function of the Nakdong Jeongmaek.

Derivation of Digital Music's Ranking Change Through Time Series Clustering (시계열 군집분석을 통한 디지털 음원의 순위 변화 패턴 분류)

  • Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.171-191
    • /
    • 2020
  • This study focused on digital music, one of the most valuable cultural assets in modern society, which occupies a particularly important position in the flow of the Korean Wave. Digital music data were collected based on the "Gaon Chart," a well-established music chart in Korea, capturing the ranking changes of music that entered the chart over 73 weeks. Afterwards, patterns with similar characteristics were derived through time series cluster analysis, and a descriptive analysis was performed on the notable features of each pattern. The research process suggested by this study is as follows. First, in the data collection process, time series data were collected to track the ranking changes of digital music. Subsequently, in the data processing stage, the collected data were matched with the rankings over time, and the music titles and artist names were processed. The analysis was then performed sequentially in two stages, an exploratory analysis and an explanatory analysis. The data collection period was limited to the period before "the music bulk-buying phenomenon," a reliability issue related to music rankings in Korea: specifically, the 73 weeks from the first week (December 31, 2017 to January 6, 2018) through the week of May 19 to May 25, 2019. The analysis targets were limited to digital music released in Korea. Unlike the private music charts operated in Korea, the Gaon Chart is approved by government agencies and has baseline reliability, so it can be considered more trustworthy than the ranking information provided by other services. The contents of the collected data are as follows.
Data on the period and ranking, the name of the music, the name of the artist, the name of the album, the Gaon index, the production company, and the distribution company were collected for the music that entered the top 100 on the chart within the collection period. Through data collection, 7,300 chart entries in the top 100 were identified over the 73 weeks. Since songs frequently remain on the chart for two or more weeks, duplicates were removed in a pre-processing step: the number and location of duplicated songs were checked with a duplicate-check function and then deleted to form the analysis data. Through this, a list of 742 unique songs for analysis was secured from the 7,300 entries. A total of 16 patterns were then derived through time series cluster analysis of the ranking changes. Based on the derived patterns, two representative patterns were identified: 'Steady Seller' and 'One-Hit Wonder'. Furthermore, the two patterns were subdivided into five patterns in consideration of the survival period of the music and its ranking. The important characteristics of each pattern are as follows. First, the artist's superstar effect and the bandwagon effect were strong in the one-hit-wonder pattern; when consumers choose digital music, they are strongly influenced by these effects. Second, through the Steady Seller pattern, we identified music that has been chosen by consumers for a very long time, and we checked which patterns received the most consumer choices. Contrary to popular belief, the steady seller: mid-term pattern, not the one-hit-wonder pattern, received the most choices from consumers.
Particularly noteworthy is that the 'Climbing the Chart' phenomenon, which runs contrary to the established patterns, was confirmed through the steady-seller pattern. This study focuses on changes in music rankings over time, centering on digital music, a field that has received relatively little attention. In addition, a new approach to music research was attempted by subdividing the patterns of ranking change rather than predicting the success and ranking of music.
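A minimal sketch of labeling rank trajectories by pattern, using nearest-prototype matching as a simple stand-in for the paper's time series cluster analysis (the weekly ranks and prototype shapes are invented for illustration):

```python
def dist(a, b):
    """Euclidean distance between two weekly-rank trajectories."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def assign_pattern(track, prototypes):
    """Nearest-prototype labeling; a stand-in for full time series clustering."""
    return min(prototypes, key=lambda name: dist(track, prototypes[name]))

# Hypothetical 6-week chart ranks (1 = top of chart, 100 = bottom)
prototypes = {
    "steady_seller": [5, 5, 6, 6, 7, 7],       # persists near the top
    "one_hit_wonder": [1, 3, 20, 60, 95, 100],  # debuts high, drops fast
}
song = [2, 4, 25, 70, 90, 100]
print(assign_pattern(song, prototypes))  # one_hit_wonder
```

Real time series clustering (e.g. k-means over trajectories) learns the prototypes from the data instead of fixing them, but the distance-to-shape intuition is the same.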

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact compared to other industries. However, due to the different nature of their capital structure and debt-to-equity ratio, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could place greater burdens on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate finance data have been studied for some time in various ways. However, such models are intended for companies in general, and they may not be appropriate for forecasting bankruptcies of construction companies, which typically have high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. With this unique capital structure, a model used to judge the financial risk of companies in general can be difficult to apply to the construction industry. Diverse studies of bankruptcy forecasting models based on a company's financial statements have been conducted for many years.
The subjects of these models, however, were general firms, and the models may not be proper for accurately forecasting companies with disproportionately large liquidity risks, such as construction companies. The construction industry is capital-intensive, requiring significant investment in long-term projects before returns are realized. This unique capital structure means that the same criteria used for other industries cannot be applied to effectively evaluate the financial risk of construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy. For companies in the "moderate" category, the risk is difficult to forecast, and many of the construction firm cases in this study fell into this category. With the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as to whether they belong to the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models.
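The original 1968 Z-score for publicly traded manufacturing firms combines five financial ratios with fixed weights. A compact sketch with the commonly cited coefficients and zone cut-offs follows; the input values in the example are hypothetical:

```python
def altman_z(wc, re, ebit, mve, sales, assets, liabilities):
    """Altman (1968) Z-score. Inputs: working capital, retained earnings,
    EBIT, market value of equity, sales, total assets, total liabilities."""
    x1 = wc / assets          # liquidity
    x2 = re / assets          # cumulative profitability
    x3 = ebit / assets        # operating efficiency
    x4 = mve / liabilities    # leverage
    x5 = sales / assets       # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    """Classic three-way classification of the Z-score."""
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "dangerous"
    return "moderate"  # the grey zone where forecasting is hard

# Hypothetical firm (values in the same currency unit)
z = altman_z(wc=20, re=30, ebit=15, mve=60, sales=120, assets=100, liabilities=50)
print(round(z, 3), zone(z))  # 3.075 safe
```

The wide "moderate" band is exactly the weakness the abstract describes: highly leveraged construction firms tend to land there, which motivates the machine-learning alternatives discussed next.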
Existing studies using the traditional Z-score technique or machine-learning-based bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies, analyzed by company size. We classified construction companies into three groups (large, medium, and small) based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
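As a self-contained illustration of the boosting idea (not the paper's feature set or data), the sketch below trains AdaBoost with one-dimensional threshold stumps on invented debt-to-equity ratios, labeling +1 for bankrupt and -1 for solvent:

```python
import math

def stump_predict(x, threshold, polarity):
    """Weak learner: a single threshold test on one feature."""
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=10):
    """Minimal AdaBoost with 1-D threshold stumps (illustrative only;
    a real model would use full financial-ratio feature vectors)."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, t, pol) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Re-weight: boost the misclassified companies
        w = [wi * math.exp(-alpha * y * stump_predict(x, t, pol))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy debt-to-equity ratios; +1 = bankrupt, -1 = solvent (hypothetical data)
xs = [0.5, 0.8, 1.1, 2.5, 3.0, 4.2]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print(predict(model, 3.5))  # 1 (bankrupt-risk side)
```

The re-weighting step is what lets boosting concentrate on hard cases, which is one intuition for why it can outperform a single model on the heterogeneous financial profiles of construction firms.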