• Title/Summary/Keyword: Global optimization

Search Results: 1,110 · Processing Time: 0.039 seconds

Re-Analysis of Clark Model Based on Drainage Structure of Basin (배수구조를 기반으로 한 Clark 모형의 재해석)

  • Park, Sang Hyun;Kim, Joo Cheol;Jeong, Dong Kug;Jung, Kwan Sue
    • KSCE Journal of Civil and Environmental Engineering Research / v.33 no.6 / pp.2255-2265 / 2013
  • This study presents a width-function-based Clark model. A rescaled width function that distinguishes hillslope velocity from channel velocity is used as the time-area curve, which is then routed through linear storage using an analytical expression for linear storage routing rather than the finite difference scheme of the original Clark model. The study focuses on three parameters: the storage coefficient, the hillslope velocity, and the channel velocity. SCE-UA, one of the popular global optimization methods, is applied to estimate them. The shapes of the resulting IUHs are evaluated in terms of three statistical moments of hydrologic response functions: the mean, the variance, and the third moment about the center of the IUH. The correlation coefficients between the simulated moments and those of observed hydrographs were 0.995 for the mean, 0.993 for the variance, and 0.983 for the third central moment. The resulting IUHs yield satisfactory simulations of the mean and variance, but the third central moment tends to be overestimated. The Clark model proposed here is superior to one accounting only for the mean and variance of the IUH with respect to the skewness, peak discharge, and peak time of the runoff hydrograph. This confirms that the suggested method is a useful tool for reflecting the heterogeneity of drainage paths and hydrodynamic parameters. The variation of the statistical moments of the IUH is influenced mainly by the storage coefficient, and the effect of channel velocity is greater than that of hillslope velocity. The storage coefficient and channel velocity are therefore the crucial factors shaping the IUH and should be chosen carefully when applying the proposed model.
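The parameter-estimation step lends itself to a short illustration. The sketch below is hypothetical: it calibrates the three parameters (storage coefficient, hillslope velocity, channel velocity) of a toy width-function Clark model by matching the three IUH moments. SciPy's differential evolution stands in as the global optimizer (the paper uses SCE-UA), and the drainage-path data and parameter ranges are invented.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import differential_evolution

def simulate_iuh(t, K, v_hill, v_chan, hill_len, chan_len):
    """Toy width-function Clark IUH: drainage-path lengths are rescaled by
    separate hillslope/channel velocities into a time-area curve, which is
    then routed through a single linear storage (exponential kernel)."""
    travel = hill_len / v_hill + chan_len / v_chan        # travel time per path [s]
    ta, _ = np.histogram(travel, bins=len(t), range=(0.0, t[-1]))
    ta = ta / max(trapezoid(ta, t), 1e-12)                # unit-area time-area curve
    kernel = np.exp(-t / K) / K                           # linear-storage response
    return np.convolve(ta, kernel)[: len(t)] * (t[1] - t[0])

def iuh_moments(t, u):
    """Mean, variance, and third central moment of a (normalized) IUH."""
    u = u / max(trapezoid(u, t), 1e-12)
    m1 = trapezoid(t * u, t)
    return np.array([m1,
                     trapezoid((t - m1) ** 2 * u, t),
                     trapezoid((t - m1) ** 3 * u, t)])

def objective(p, t, obs_m, hill_len, chan_len):
    u = simulate_iuh(t, *p, hill_len, chan_len)
    return np.sum(((iuh_moments(t, u) - obs_m) / obs_m) ** 2)

# Synthetic drainage structure and "observed" moments, for illustration only.
rng = np.random.default_rng(0)
hill_len = rng.uniform(50, 500, 2000)            # hillslope path lengths [m]
chan_len = rng.uniform(500, 20000, 2000)         # channel path lengths [m]
t = np.linspace(0.0, 48 * 3600.0, 2000)          # 48 h horizon [s]
obs_m = iuh_moments(t, simulate_iuh(t, 5 * 3600.0, 0.1, 1.0, hill_len, chan_len))

bounds = [(600.0, 86400.0), (0.01, 1.0), (0.1, 5.0)]   # K [s], v_hill, v_chan [m/s]
res = differential_evolution(objective, bounds,
                             args=(t, obs_m, hill_len, chan_len), seed=1)
print(res.x)   # recovered (K, v_hill, v_chan)
```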

Direct Reconstruction of Displaced Subdivision Mesh from Unorganized 3D Points (연결정보가 없는 3차원 점으로부터 차이분할메쉬 직접 복원)

  • Jung, Won-Ki;Kim, Chang-Heon
    • Journal of KIISE:Computer Systems and Theory / v.29 no.6 / pp.307-317 / 2002
  • In this paper we propose a new mesh reconstruction scheme that produces a displaced subdivision surface directly from unorganized points. The displaced subdivision surface is a mesh representation that defines a detailed mesh by a displacement map over a smooth domain surface; however, the original displaced subdivision surface algorithm requires an explicit polygonal mesh as input, since it is a mesh conversion (remeshing) algorithm rather than a mesh reconstruction algorithm. The main idea of our approach is to sample surface detail from unorganized points without any topological information. To do so, we predict a virtual triangular face from the unorganized points for each sampling ray cast from a parametric domain surface. Reconstructing a displaced subdivision surface directly from unorganized points matters because the output has several important properties: the mesh representation is compact, since most vertices can be represented by a single scalar value; the underlying structure is piecewise regular, so it can easily be transformed into a multiresolution mesh; and smoothness is automatically preserved after mesh deformation. We avoid time-consuming global energy optimization by employing input-data-dependent mesh smoothing, so we can obtain a good-quality displaced subdivision surface quickly.
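A core step the abstract describes is casting a sampling ray from the parametric domain surface and intersecting it with a virtual triangle predicted from nearby points; the signed hit distance becomes the scalar stored in the displacement map. Below is a minimal, self-contained sketch of that intersection test (Möller-Trumbore); the function name and example geometry are illustrative, not from the paper.

```python
# Illustrative sketch: Möller-Trumbore ray/triangle intersection, the kind of
# test used when casting a sampling ray from the smooth domain surface toward
# a virtual triangle predicted from unorganized points. The signed distance t
# along the ray is the scalar a displacement map would store.
import numpy as np

def ray_triangle_distance(origin, direction, v0, v1, v2, eps=1e-9):
    """Return signed distance t along the ray to triangle (v0, v1, v2),
    or None if the ray misses it."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det         # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    return np.dot(e2, q) * inv_det     # displacement along the ray

# Example: a ray from a domain-surface point along its normal
t = ray_triangle_distance(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                          np.array([-1.0, -1.0, 0.5]), np.array([2.0, -1.0, 0.5]),
                          np.array([0.0, 2.0, 0.5]))
print(t)   # 0.5: the scalar sample for the displacement map
```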

The Effect of Supply Chain Dynamic Capabilities, Open Innovation and Supply Uncertainty on Supply Chain Performance (공급사슬 동적역량, 개방형 혁신, 공급 불확실성이 공급사슬 성과에 미치는 영향)

  • Lee, Sang-Yeol
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.4 / pp.481-491 / 2018
  • As the global business environment is dynamic, uncertain, and complex, supply chain management determines the performance of the supply chain through the utilization of the resources and capabilities of the companies involved. Companies pursuing open innovation gain greater access to the external environment, accumulate knowledge flows and learning experiences, and may generate better business performance from dynamic capabilities. This study analyzed the effects of supply chain dynamic capabilities, open innovation, and supply uncertainty on supply chain performance. Based on questionnaires from 178 companies listed on KOSDAQ, the empirical results are as follows. First, among the supply chain dynamic capabilities, integration and reactivity have a positive effect on supply chain performance. Second, the moderating effect of open innovation showed a negative correlation for information exchange and positive correlations for integration, cooperation, and reactivity. Third, two of the 3-way interaction terms, "information exchange × open innovation × supply uncertainty" and "integration × open innovation × supply uncertainty", were statistically significant. The implications are as follows. First, because a supply chain must optimize the whole process across its components rather than individual companies, dynamic capabilities play an important role in improving performance. Second, for KOSDAQ companies with limited capital resources, open innovation that integrates external knowledge is valuable, and dynamic capabilities should be developed accordingly to increase synergistic effects. Third, since resources are constrained, managers must match the type and level of capabilities and open innovation to the supply uncertainty they face. Because this study is limited to survey data, collecting secondary or longitudinal data and further analyzing the internal and external factors that significantly affect supply chain performance would be worthwhile.
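As a rough illustration of the analysis behind such 3-way interaction terms, the hypothetical sketch below fits a fully crossed moderated regression with statsmodels; the column names and synthetic data are assumptions, not the paper's survey instrument.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data (the real study used 178 KOSDAQ firms;
# these column names are illustrative only).
rng = np.random.default_rng(7)
n = 178
df = pd.DataFrame({
    "integration": rng.normal(4, 1, n),
    "open_innovation": rng.normal(4, 1, n),
    "supply_uncertainty": rng.normal(3, 1, n),
})
df["performance"] = (0.4 * df.integration
                     + 0.2 * df.integration * df.open_innovation
                           * df.supply_uncertainty / 10
                     + rng.normal(0, 1, n))

# The * operator expands to all main effects, all 2-way terms, and the 3-way
# term integration:open_innovation:supply_uncertainty of the kind the paper tests.
model = smf.ols("performance ~ integration * open_innovation * supply_uncertainty",
                data=df).fit()
print(model.summary())
```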

Properties of the Silkworm (Bombyx mori) Dongchunghacho, a Newly Developed Korean Medicinal Insect-borne Mushroom: Mass-production and Pharmacological Actions (한국에서 개발된 곤충유래 약용버섯인 누에동충하초의 생산기술개발 및 약리학적 특성)

  • Lee, Sang Mong;Kim, Yong Gyun;Park, Hyean Cheal;Kim, Keun Ki;Son, Hong Joo;Hong, Chang Oh;Park, Nam Sook
    • Journal of Life Science / v.27 no.2 / pp.247-266 / 2017
  • Cordyceps is a traditional Chinese medicinal herb, well known in China, Korea, and Japan since around 2000 B.C. The original entomopathogenic fungus, Cordyceps sinensis of the genus Cordyceps, cannot be found on the Korean peninsula because the host insect of this entomogenous fungus is absent there. The development of artificial production methods for a Korean-type Cordyceps, using the silkworm Bombyx mori as an in vivo culture medium for the entomopathogenic fungus Paecilomyces tenuipes, was therefore a first in the worldwide research history of the insect industry. This article reviews the historical research background, mass-production methods, and pharmacological effects of silkworm-dongchunghacho (Paecilomyces tenuipes), a newly developed Korean medicinal insect-borne mushroom, together with the non-insect-borne medicinal mushrooms Cordyceps militaris and Cordyceps pruinosa. Their biological actions include anti-tumor, immunostimulating, anti-fatigue, anti-stress, anti-oxidant, anti-aging, anti-diabetic, anti-inflammatory, anti-thrombosis, hypolipidaemic, and insecticidal effects. The bioactive principles are protein-bound polysaccharides (hexose, hexosamine), cordycepin, D-mannitol, acidic polysaccharides, etc. Protein-bound polysaccharides and n-butanol fractions showed significant anti-tumor activity without cytotoxicity. D-mannitol significantly prolonged the life span of tumor-bearing mice. Ergosterol showed no substantial anti-tumor activity but significantly enhanced phagocytosis. The anti-tumor activity of silkworm-dongchunghacho may therefore be attributable to immuno-stimulating activity rather than cytotoxic effects [164]. The review also covers the breeding of dongchunghacho varieties, the optimization of culture conditions, the improvement of learning and memory by dongchunghacho, its application in foods, and its chemical constituents.

Assessment of the Angstrom-Prescott Coefficients for Estimation of Solar Radiation in Korea (국내 일사량 추정을 위한 Angstrom-Prescott계수의 평가)

  • Hyun, Shinwoo;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology / v.18 no.4 / pp.221-232 / 2016
  • Models that estimate solar radiation are widely used because solar radiation is measured at fewer weather stations than other variables such as temperature and rainfall. For example, solar radiation has been estimated with the Angstrom-Prescott (AP) model, which depends on two coefficients obtained empirically at a specific site ($AP_{Choi}$) or for a climate zone ($AP_{Frere}$). The objective of this study was to identify AP coefficients that allow reliable estimation of solar radiation under a wide range of spatial and temporal conditions. A global optimization was performed over a range of AP coefficients to identify the values, $AP_{max}$, that produced the greatest degree of agreement at each of 20 sites for a given month over 30 years. The degree of agreement was assessed with the Concordance Correlation Coefficient (CCC). When $AP_{Frere}$ was used to estimate solar radiation, CCC values were relatively high for the conditions under which crop growth simulations would be performed, e.g., at rural sites during summer. The statistics for $AP_{Frere}$ were greater than those for $AP_{Choi}$, although smaller than those for $AP_{max}$. The variation of CCC values was small over a wide range of AP coefficients when the statistics were summarized by site, and $AP_{Frere}$ fell within each range of AP coefficients that gave reasonably accurate solar radiation estimates by site, year, and month. These results suggest that $AP_{Frere}$ would be useful for providing solar radiation estimates as input to crop models in Korea. Further studies would be merited to examine the feasibility of using $AP_{Frere}$ to obtain gridded solar radiation estimates at high spatial resolution over the complex terrain of Korea.
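The coefficient search can be illustrated compactly. The sketch below is hypothetical: it generates synthetic daily data from the AP model $R_s = R_a(a + b \cdot n/N)$ using the FAO default coefficients (0.25, 0.50), then recovers an $AP_{max}$-style pair $(a, b)$ by globally maximizing Lin's CCC, the agreement measure the paper uses. The data and search ranges are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

def ccc(obs, sim):
    """Lin's Concordance Correlation Coefficient."""
    mo, ms = obs.mean(), sim.mean()
    cov = ((obs - mo) * (sim - ms)).mean()
    return 2.0 * cov / (obs.var() + sim.var() + (mo - ms) ** 2)

def neg_ccc(params, ra, n_over_N, rs_obs):
    a, b = params
    return -ccc(rs_obs, ra * (a + b * n_over_N))   # AP model: Rs = Ra*(a + b*n/N)

# Synthetic daily data generated from the FAO default coefficients (0.25, 0.50)
rng = np.random.default_rng(0)
ra = rng.uniform(15.0, 40.0, 365)          # extraterrestrial radiation [MJ m-2 d-1]
n_over_N = rng.uniform(0.0, 1.0, 365)      # relative sunshine duration
rs_obs = ra * (0.25 + 0.50 * n_over_N) + rng.normal(0.0, 1.0, 365)

bounds = [(0.05, 0.45), (0.20, 0.80)]      # assumed search ranges for (a, b)
res = differential_evolution(neg_ccc, bounds, args=(ra, n_over_N, rs_obs), seed=1)
print(res.x, -res.fun)                     # recovered (a, b) and its CCC
```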

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.53-65 / 2019
  • Object tracking is one of the important steps in video-based surveillance systems, and is considered an essential task alongside object detection and recognition. Various machine learning methods (e.g., least squares, the perceptron, and the support vector machine) can be applied in different designs of tracking systems. Generative methods (e.g., principal component analysis) have generally been utilized for their simplicity and effectiveness, but they focus only on modeling the target object. Because of this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among machine learning methods for binary classification, total-error-rate minimization is one of the successful approaches. It can reach a global minimum because it uses a quadratic approximation to a step function, whereas other methods (e.g., the support vector machine) seek local minima using nonlinear losses (e.g., the hinge loss). This quadratic approximation gives total-error-rate minimization appealing properties for solving binary classification problems. However, total-error-rate minimization was originally formulated in a batch-mode setting, which restricts it to offline learning; with limited computing resources, offline learning cannot handle large-scale data sets. Unlike offline learning, online learning can update its solution without storing all training samples, and as data sets grow it becomes essential for many applications. Since object tracking must handle samples in real time, online total-error-rate minimization methods are needed to address tracking problems efficiently. An online learning method based on total-error-rate minimization was therefore developed, but it relies on an approximately reweighted technique. Although approximate, this online version achieved good performance in biometric applications; it assumes, however, that the total error rate is minimized only asymptotically, as the number of training samples goes to infinity. Because of the approximation, learning errors accumulate as training samples arrive, and the approximated online solution can drift toward a wrong one, which can cause significant errors in surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total-error-rate minimization in an online manner, achieving exactly reweighted, rather than approximately reweighted, online total-error-rate minimization. The proposed exact online learning method is then applied to object tracking, where our system adopts particle filtering.
In the particle filter, our observation model consists of both generative and discriminative methods, leveraging the advantages of each. In our experiments, the proposed tracking system achieves promising performance on 8 public video sequences compared with competing systems, and a paired t-test is reported to assess the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks, and other online learning methods that require exact reweighting can adopt our reweighting technique. Beyond object tracking, the method can easily be applied to object detection and recognition, so our contribution is relevant to the online learning community as well as the tracking, detection, and recognition communities.
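Without reproducing the paper's exact update, the hedged sketch below illustrates the two ingredients it builds on: a quadratic (least-squares) surrogate of the total error rate with exact per-class reweighting, solved in closed form, and the rank-one inverse update that recursive, online least-squares-style methods use. All names and the synthetic data are illustrative.

```python
import numpy as np

def ter_quadratic_fit(X, y, lam=1e-2):
    """Closed-form minimizer of sum_i d_i (w.x_i - y_i)^2 + lam*||w||^2 with
    d_i = 1/(class size): weighting both classes equally makes the quadratic
    objective track FPR + FNR, i.e., a total-error-rate surrogate."""
    d = np.where(y == 1, 1.0 / np.sum(y == 1), 1.0 / np.sum(y == -1))
    A = X.T @ (X * d[:, None]) + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ (d * y))

def sherman_morrison(A_inv, x):
    """Update inv(A) after A <- A + x x^T: the building block of recursive
    online updates. An exact method must also account for the d_i changing
    as class counts grow, which is the issue the paper addresses."""
    Ax = A_inv @ x
    return A_inv - np.outer(Ax, Ax) / (1.0 + x @ Ax)

# Tiny demo on imbalanced synthetic data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (300, 2)), rng.normal(+1, 1, (60, 2))])
y = np.hstack([-np.ones(300), np.ones(60)])
w = ter_quadratic_fit(X, y)
print(np.mean(np.sign(X @ w) == y))   # accuracy of the reweighted fit
```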

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.27 no.3 / pp.127-143 / 2022
  • Recently, there have been many active attempts to run numerical ocean models in cloud computing environments. A cloud environment can be an effective means of running numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud systems provide technologies for High Performance Computing (HPC) such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access. These features facilitate ocean-modeling experimentation on commercial cloud systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting an appropriate system, since this helps minimize execution time and resource usage. Cache memory strongly affects the processing of an ocean model, which reads and writes data in multidimensional array structures, and network speed is important because of the model's communication pattern, in which large amounts of data move between nodes. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark were evaluated and compared on commercial cloud systems to inform the migration of other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also important. Increasing the number of cores to reduce running time is more effective with large grid sizes than with small ones. Our results should serve as a useful reference for constructing a cloud computing system that minimizes the time and cost of numerical ocean modeling.
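As a quick, hedged illustration of why memory behavior matters here, the sketch below approximates the STREAM "triad" bandwidth test with NumPy. It is only a rough proxy for the real STREAM benchmark (written in C with OpenMP) that the paper runs, but it can sanity-check a cloud instance's memory subsystem before committing to long model runs.

```python
import time
import numpy as np

N = 20_000_000                         # three float64 arrays, ~160 MB each
b = np.random.rand(N)
c = np.random.rand(N)
a = np.empty_like(b)
s = 3.0

best = float("inf")
for _ in range(5):                     # report the best of several runs, as STREAM does
    t0 = time.perf_counter()
    np.multiply(c, s, out=a)           # a = s*c  (read c, write a)
    a += b                             # a += b   (read a, read b, write a)
    best = min(best, time.perf_counter() - t0)

bytes_moved = 5 * N * 8                # 5 array traversals in the two steps above
print(f"triad-like bandwidth: {bytes_moved / best / 1e9:.1f} GB/s")
```

Note that this two-step NumPy version moves about 5/3 the traffic of a true fused triad (a = b + s*c in one pass), which is part of why the compiled STREAM benchmark remains the reference for reported numbers.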

Assessment of water supply reliability in the Geum River Basin using univariate climate response functions: a case study for changing instreamflow managements (단변량 기후반응함수를 이용한 금강수계 이수안전도 평가: 하천유지유량 관리 변화를 고려한 사례연구)

  • Kim, Daeha;Choi, Si Jung;Jang, Su Hyung;Kang, Dae Hu
    • Journal of Korea Water Resources Association / v.56 no.12 / pp.993-1003 / 2023
  • Due to increasing greenhouse gas emissions, the global mean temperature has risen by 1.1℃ relative to pre-industrial levels, and significant changes are expected in the functioning of water supply systems. In this study, we assessed the impacts of climate change and instreamflow management on water supply reliability in the Geum River basin, Korea. We proposed univariate climate response functions, in which mean precipitation and potential evaporation are coupled into a single explanatory variable, to assess the impact of climate stress on multiple water supply reliabilities. To this end, natural streamflows were generated for the 19 sub-basins with the conceptual GR6J model, and the simulated streamflows were input into the Water Evaluation And Planning (WEAP) model. Dynamic optimization by WEAP allowed us to assess water supply reliability against the 2020 water demand projections. Results showed that, when minimizing the water shortage of the entire river basin under the 1991-2020 climate, water supply reliability was lowest in the Bocheongcheon among the sub-basins. In a scenario where the priority of instreamflow maintenance is raised to equal that of municipal and industrial water use, water supply reliability in the Bocheongcheon, Chogang, and Nonsancheon sub-basins decreased significantly. Stress tests with 325 sets of climate perturbations showed that water supply reliability in these three sub-basins decreased considerably under all climate stresses, while the sub-basins connected to large infrastructure did not change significantly. When the 2021-2050 climate projections were combined with the stress-test results, water supply reliability in the Geum River basin was expected to improve overall, but if the priority of instreamflow maintenance is raised, water shortages are expected to worsen in geographically isolated sub-basins. We suggest that a climate response function built on a single explanatory variable can assess climate change impacts on many sub-basins' performance simultaneously.
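The idea of a univariate response function can be sketched in a few lines. Everything below is illustrative: perturbation factors for mean precipitation (P) and potential evaporation (PET) are collapsed into a single explanatory variable (here a simple P/PET ratio; the paper defines its own coupling), stand-in reliabilities play the role of the WEAP stress-test output, and a low-order polynomial serves as the fitted response function.

```python
import numpy as np

# 325 climate perturbations, factored here as 25 x 13 for illustration only
p_fac = np.repeat(np.linspace(0.7, 1.3, 25), 13)    # mean precipitation multipliers
pet_fac = np.tile(np.linspace(0.9, 1.2, 13), 25)    # potential evaporation multipliers
x = p_fac / pet_fac          # single explanatory variable (assumed P/PET coupling)

# Stand-in for the reliabilities a WEAP stress test would produce per perturbation
rng = np.random.default_rng(42)
reliability = np.clip(0.55 + 0.45 * np.tanh(4.0 * (x - 0.9)), 0.0, 1.0)
reliability = reliability + rng.normal(0.0, 0.01, x.size)

coef = np.polyfit(x, reliability, deg=2)   # fitted univariate response function
f = np.poly1d(coef)

# Read off reliability for a projected climate, e.g. P +6% and PET +3%
# (illustrative numbers, not the paper's 2021-2050 projections)
print(f(1.06 / 1.03))
```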

An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems / v.24 no.4 / pp.443-472 / 2014
  • To survive in the global competitive environment, an enterprise should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness through its diverse problem-solving and advanced predictive capabilities. Owing to this remarkable potential, implementations of big data systems have increased in enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to confer competitive superiority. It is in the limelight because, whereas conventional IT technology has lagged in what it makes possible, big data goes beyond technological possibility and can be used to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily, without considering the strategic value to be derived and achieved through it, firms have had difficulty deriving that strategic value and utilizing their data. According to a survey of 1,800 IT professionals from 18 countries, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating through big data. To introduce big data, the strategic value should be identified and environmental factors such as internal and external regulations and systems should be considered, but these factors were not well reflected; the failures stemmed from adopting big data by following IT trends and the surrounding environment, before the preconditions for adoption were in place. Successful adoption requires a clear understanding of the strategic value obtainable through big data and a systematic analysis of the environment and applicability, but corporations considering only partial achievements and technological aspects have not adopted it successfully. Most previous big data research has focused on concepts, cases, and practical suggestions, without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors that influence successful implementation, and analyzing empirical models. To this end, the factors that can affect the intention to adopt big data were derived by reviewing information systems success factors, strategic value perception factors, environmental considerations for information systems adoption, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed.
According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications are as follows. The first theoretical implication is that this study proposes factors affecting the intention to adopt big data, derived from a review of strategic value perception, environmental factors, and prior big data studies, and provides variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (business type and firm size) for big data adoption intention, and laid a theoretical basis for subsequent empirical research by developing measurement items with demonstrated reliability and validity. Third, by verifying the significance of the strategic value perception and environmental factors proposed in prior studies, this study can support future empirical work on the factors affecting big data adoption. The practical implications are as follows. First, the study establishes an empirical basis for the field by investigating the causal influence of strategic value perception and environmental factors on adoption intention and by proposing validated measurement items. Second, the finding that strategic value perception positively affects adoption intention underscores the importance of recognizing strategic value. Third, a corporation introducing big data should do so on the basis of a precise analysis of its industry's internal environment. Fourth, since the influential factors differ with firm size and business type, these should be considered when introducing big data. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be pursued in products, services, productivity, decision making, and other business fields, but the areas major domestic corporations consider are limited to parts of the product and service fields. Accordingly, when introducing big data, firms should review utilization in detail and design systems that maximize the utilization rate. Second, the study identifies the burdens firms face at the adoption stage: the cost of system introduction, difficulty of use, and a lack of credibility among supplier corporations.
Since global IT corporations dominate the big data market, domestic corporations' big data adoption cannot help but depend on foreign vendors. Considering that Korea, despite being a world-leading IT country, lacks global IT corporations, big data can be seen as an opportunity to foster world-class firms, and the government should support such firms through active policy. Third, corporations lack internal and external professionals for big data adoption and operation. In big data, deriving valuable insight from data matters more than constructing the system itself; this requires talent with academic knowledge and experience across fields such as IT, statistics, strategy, and management, and such talent should be cultivated through systematic education. By identifying and verifying the main variables that affect the intention to adopt big data, this study lays a theoretical basis for empirical research in related fields and is expected to offer useful guidelines for corporations and policy makers considering big data implementation.
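As a small illustration of the measurement-validation step mentioned above (the reliability of multi-item survey scales), the sketch below computes Cronbach's alpha for one construct; the data are simulated and the construct is arbitrary, not the paper's instrument.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) scores for one construct.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# e.g., five 7-point Likert items driven by one latent factor;
# alpha >= ~0.7 is the usual acceptability cutoff
rng = np.random.default_rng(0)
latent = rng.normal(4, 1, (200, 1))
scale = np.clip(np.round(latent + rng.normal(0, 0.7, (200, 5))), 1, 7)
print(cronbach_alpha(scale))
```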

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is an important event for measuring the financial risk of companies and for determining investors' returns, so predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. It learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to analyze mathematically, yet achieves high performance in practice. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely to occur, and SVM does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. Numerous experimental studies have shown SVM to be successful in a variety of pattern recognition fields. However, three major drawbacks can degrade SVM's performance. First, SVM was originally proposed for binary-class classification; methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in the multi-class setting as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can reduce the computation time of multi-class problems, but may deteriorate classification performance. Third, a key difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one way to cope with these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms; AdaBoost, one of the most widely used ensemble techniques, constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations, so observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, reinforcing the training of misclassified observations from the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) method to resolve the multi-class prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process accounts for the geometric-mean-based accuracy and errors across classes. The study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; the cross-validated folds were thus tested independently for each algorithm. Through these steps, results were obtained for the classifiers in each of the 30 experiments. In arithmetic-mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher geometric-mean-based prediction accuracy than AdaBoost (24.65%) and SVM (15.42%). A t-test was used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results show that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
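The geometric-mean-based accuracy at the heart of MGM-Boost is easy to illustrate. The sketch below (synthetic labels, illustrative rating classes, not the paper's data) contrasts it with arithmetic accuracy: ignoring a minority class barely dents the arithmetic score but sharply lowers the geometric mean of per-class recalls.

```python
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls."""
    classes = np.unique(y_true)
    recalls = np.array([np.mean(y_pred[y_true == c] == c) for c in classes])
    return float(recalls.prod() ** (1.0 / len(classes)))

# Illustrative, imbalanced rating labels
y_true = np.array(["A"] * 80 + ["BBB"] * 15 + ["BB"] * 5)
y_pred = np.array(["A"] * 85 + ["BBB"] * 13 + ["BB"] * 2)   # minority classes suffer

print(np.mean(y_true == y_pred))                # arithmetic accuracy: 0.92
print(geometric_mean_accuracy(y_true, y_pred))  # ~0.64: exposes the minority failure
```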