• Title/Summary/Keyword: mathematical errors

449 search results

An Improved Structural Reliability Analysis using Moving Least Squares Approximation (이동최소제곱근사법을 이용한 개선된 구조 신뢰성 해석)

  • Kang, Soo-Chang;Koh, Hyun-Moo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6A
    • /
    • pp.835-842
    • /
    • 2008
  • The response surface method (RSM) is widely adopted for structural reliability analysis because of its numerical efficiency. However, the RSM is still time-consuming for large-scale applications, and it sometimes shows large errors in calculating the sensitivity of the reliability index with respect to the random variables. Therefore, this study proposes a new RSM in which moving least squares (MLS) approximation is applied. The least squares approximation generally used in the common RSM gives equal weight to all experimental points when determining the coefficients of the response surface function (RSF). In contrast, the MLS approximation gives higher weight to the experimental points closer to the design point, which yields an RSF more similar to the limit state near the design point. In the proposed procedure, a linear RSF is constructed initially, and then a quadratic RSF is formed using axial experimental points selected from the reduced region where the design point is likely to exist. The RSF is updated successively by adding one more experimental point to the previously sampled points. To demonstrate the effectiveness of the proposed method, mathematical problems and a ten-bar truss are considered as numerical examples. As a result, the proposed method shows better accuracy and computational efficiency than the common RSM.
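
To make the weighting idea concrete, here is a minimal sketch (not the paper's implementation) of fitting a quadratic RSF by weighted least squares, where a Gaussian weight concentrates influence near an assumed design point. The limit state function, the experimental points, and the weight scale are all hypothetical illustrations.

```python
# Minimal sketch: moving least squares (MLS) fit of a quadratic response
# surface, contrasted with ordinary least squares (uniform weights).
import numpy as np

def basis(x):
    # Quadratic basis without cross terms: [1, x1, x2, x1^2, x2^2]
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2])

def fit_rsf(points, g_vals, design_point=None, scale=1.0):
    """Fit RSF coefficients; MLS weighting if a design point is given."""
    P = np.array([basis(x) for x in points])
    if design_point is None:
        w = np.ones(len(points))               # ordinary least squares
    else:
        d = np.linalg.norm(points - design_point, axis=1)
        w = np.exp(-(d / scale) ** 2)          # higher weight near design point
    W = np.diag(w)
    # Weighted normal equations: (P^T W P) a = P^T W g
    return np.linalg.solve(P.T @ W @ P, P.T @ W @ g_vals)

# Hypothetical limit state and axial experimental points around a center
g = lambda x: x[0] ** 2 - 2 * x[1] + 3
center = np.array([1.0, 1.0])
offsets = [h * e for h in (-2.0, -1.0, 1.0, 2.0)
           for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
pts = np.array([center] + [center + o for o in offsets])
gv = np.array([g(x) for x in pts])
print(fit_rsf(pts, gv, design_point=center, scale=1.5))
```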

Design of Digital Phase-locked Loop based on Two-layer Frobenius norm Finite Impulse Response Filter (2계층 Frobenius norm 유한 임펄스 응답 필터 기반 디지털 위상 고정 루프 설계)

  • Sin Kim;Sung Shin;Sung-Hyun You;Hyun-Duck Choi
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.31-38
    • /
    • 2024
  • The digital phase-locked loop (DPLL) is a fundamental circuit composed of a digital phase detector, a digital loop filter, a voltage-controlled oscillator, and a divider, and it is widely used in fields such as electrical and circuit engineering. A state estimator based on various mathematical algorithms is used to improve the performance of a DPLL. Traditional designs have used the Kalman filter, an infinite impulse response (IIR) state estimator, and DPLLs based on IIR state estimators can suffer rapid performance degradation in unexpected situations such as inaccurate initial values, model errors, and various disturbances. In this paper, we propose a two-layer Frobenius norm-based finite impulse response (FIR) state estimator to design a new DPLL. The proposed estimator uses the state estimated in the first layer, together with the accumulated measurements, to estimate the state in the second layer. To verify the robust performance of the new FIR state estimator-based DPLL, simulations were performed comparing it with an IIR state estimator in situations where the noise covariance information is inaccurate.
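
As a rough illustration of the FIR idea (only the last N measurements are used, so the estimator has bounded memory, unlike an IIR Kalman filter), here is a sketch of a batch least-squares FIR estimator for a simple phase/frequency model. The state model, horizon, and noise level are hypothetical; the paper's two-layer Frobenius-norm design is considerably more elaborate.

```python
# Sketch: receding-horizon FIR state estimation for a phase/frequency model.
import numpy as np

A = np.array([[1.0, 1.0],   # phase_{k+1} = phase_k + freq_k
              [0.0, 1.0]])  # freq_{k+1}  = freq_k
C = np.array([[1.0, 0.0]])  # only the phase is measured

def fir_estimate(y_window):
    """Estimate the current state from the last N scalar measurements."""
    N = len(y_window)
    A_inv = np.linalg.inv(A)
    # y_{k-N+1+i} = C A^{-(N-1-i)} x_k: propagate the current state backwards
    H = np.vstack([C @ np.linalg.matrix_power(A_inv, N - 1 - i)
                   for i in range(N)])
    x_hat, *_ = np.linalg.lstsq(H, np.asarray(y_window), rcond=None)
    return x_hat

# Simulate a ramping phase with measurement noise, then estimate over a
# window of 8 samples.
rng = np.random.default_rng(0)
x = np.array([0.0, 0.1])
ys = []
for _ in range(50):
    ys.append((C @ x).item() + 0.01 * rng.standard_normal())
    x = A @ x
print(fir_estimate(ys[-8:]))   # [phase, frequency] at the last sample
```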

A Study of the Representation in the Elementary Mathematical Problem-Solving Process (초등 수학 문제해결 과정에 사용되는 표현 방법에 대한 연구)

  • Kim, Yu-Jung;Paik, Seok-Yoon
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.9 no.2
    • /
    • pp.85-110
    • /
    • 2005
  • The purpose of this study is to examine the characteristics of the visual representations used in the problem-solving process, identify the representation types students used to solve problems successfully, and systematize visual representation methods using the conditions given in the problems. To achieve this goal, the following questions were raised: (1) What characteristics do the representations elementary school students use in the process of solving a math problem possess? (2) What types of representation did students use to successfully solve elementary math problems? 240 4th graders attending J Elementary School in Seoul participated in this study. Qualitative methodology was used for data analysis; the analysis identified the representation methods students used in the problem-solving process and the representations that led to successful solutions of five different problems. The results of the study are as follows. First, the students are not familiar with representing a problem in various ways during problem solving. Students tend to solve a problem using equations rather than drawing a diagram when they cannot find a word that hints at drawing a diagram. The method students used to restate a problem was mostly rewriting it, and they could not utilize a table that was essential to solving the problem; thus, various errors were found. Students did not simplify a complicated problem to find a pattern for solving it. Second, the image and strategy formed as the problem was first read greatly affected problem solving. The first image formed as the problem was read led students to draw different diagrams and choose different strategies. Most students did not pass through a trial-and-error step and used the strategy they chose first, which shows the importance of the first image. Third, the students who successfully solved the problems did not rely solely on equations but put the information into a decoded form. They did not write difficult equations they could not solve, but simplified them into equations they knew how to solve. On fraction problems, they drew diagrams to solve the problem without calculation. Fourth, the students who successfully solved the problems drew clear diagrams that could be understood intuitively. By representing the problem visually, unnecessary information was omitted, simple images were drawn using symbols or lines, and numeric explanations were added to clarify the relationships between pieces of information. In addition, they restricted the use of complicated motion lines and dividing lines, and proper nouns in the word problems were not changed into abbreviations or symbols, so the problems were restated clearly. Adding supplementary information was a useful resource in solving the problems.


Analysis on the Positional Accuracy of the Non-orthogonal Two-pair kV Imaging Systems for Real-time Tumor Tracking Using XCAT (XCAT를 이용한 실시간 종양 위치 추적을 위한 비직교 스테레오 엑스선 영상시스템에서의 위치 추정 정확도 분석에 관한 연구)

  • Jeong, Hanseong;Kim, Youngju;Oh, Ohsung;Lee, Seho;Jeon, Hosang;Lee, Seung Wook
    • Progress in Medical Physics
    • /
    • v.26 no.3
    • /
    • pp.143-152
    • /
    • 2015
  • In this study, we aim to design the architecture of the kV imaging system for tumor tracking in the dual-head gantry system and to analyze its accuracy by simulations. We established mathematical formulas and algorithms to track the tumor position with the two-pair kV imaging systems when they are in non-orthogonal positions. The algorithms are designed in the homogeneous coordinate framework, and the source and detector coordinates are used to estimate the tumor position. 4D XCAT (4D extended cardiac-torso) software was used in the simulation to identify the influence of the angle between the two-pair kV imaging systems and of the detector resolution on the accuracy of the position estimation. A metal fiducial marker was inserted in a numerical human phantom of XCAT, and kV projections were acquired at various angles and resolutions using the CT projection software of XCAT. As a result, a positional accuracy better than about 1 mm was achieved when the detector resolution is finer than 1.5 mm/pixel and the angle between the kV imaging systems is approximately between 50° and 90°. When the resolution is coarser than 1.5 mm/pixel, the positional errors were larger than 1 mm and the error fluctuation with angle was greater. The detector resolution was critical to the positional accuracy of the tumor tracking and determines the acceptable range of angles between the kV imaging systems. We also found that the positional accuracy analysis method using XCAT developed in this study is highly useful and will be an invaluable tool for further refined designs of kV imaging systems for tumor tracking.
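
The core geometric step, estimating a 3-D position from two back-projected source-to-marker rays, can be sketched as a least-squares closest-point problem between two lines. The source positions, separation angle, and marker location below are hypothetical; the paper's formulation works in homogeneous coordinates with both source and detector coordinates.

```python
# Sketch: marker position from two non-orthogonal kV rays (closest point
# between two back-projected lines, in mm).
import numpy as np

def closest_point(p1, d1, p2, d2):
    """Least-squares intersection of two lines p_i + t_i * d_i."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimizing |(p1 + t1 d1) - (p2 + t2 d2)|^2
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two imaging systems separated by 60 degrees (non-orthogonal), sources
# 1000 mm from the isocenter; each ray points from a source to the marker.
truth = np.array([3.0, -2.0, 5.0])                  # hypothetical marker (mm)
src1 = np.array([1000.0, 0.0, 0.0])
ang = np.deg2rad(60.0)
src2 = 1000.0 * np.array([np.cos(ang), np.sin(ang), 0.0])
est = closest_point(src1, truth - src1, src2, truth - src2)
print(np.linalg.norm(est - truth))                  # ~0 without detector noise
```

With real detectors, the rays are reconstructed from noisy pixel positions, which is where the resolution-dependent error studied in the paper enters.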

A Study on the Factors Causing Analytical Errors through the Estimation of Uncertainty for Cadmium and Lead Analysis in Tomato Paste (불확도 추정을 통한 토마토 페이스트에서 카드뮴 및 납 분석의 오차 발생 요인 규명)

  • Kim, Ji-Young;Kim, Young-Jun;Yoo, Ji-Hyock;Lee, Ji-Ho;Kim, Min-Ji;Kang, Dae-Won;Im, Geon-Jae;Hong, Moo-Ki;Shin, Young-Jae;Kim, Won-Il
    • Korean Journal of Environmental Agriculture
    • /
    • v.30 no.2
    • /
    • pp.169-178
    • /
    • 2011
  • BACKGROUND: This study aimed to estimate the measurement uncertainty associated with the determination of cadmium and lead in tomato paste by ICP/MS. The sources of measurement uncertainty associated with the analysis of cadmium and lead (i.e., sample weight, final volume, standard weight, purity, molecular weight, working standard solution, calibration curve, recovery and repeatability) were evaluated. METHODS AND RESULTS: The GUM (Guide to the Expression of Uncertainty in Measurement) and the Draft EURACHEM/CITAC Guide (EURACHEM: a network of organizations for analytical chemistry in Europe; CITAC: Co-operation on International Traceability in Analytical Chemistry) were used for the mathematical calculation and statistical analysis. The uncertainty components were evaluated by either Type A or Type B methods, and the combined standard uncertainty was calculated statistically from the several factors. The expanded uncertainty of cadmium and lead was 0.106 ± 0.015 mg/kg (k=2.09) and 0.302 ± 0.029 mg/kg (k=2.16), respectively, at the 95% confidence level for the Certified Reference Material (CRM), which was within the certified ranges of 0.112 ± 0.007 mg/kg for cadmium (k=2.03) and 0.316 ± 0.021 mg/kg for lead (k=2.01). CONCLUSION(S): Recovery, the standard calibration curve and the standard solution were identified as the most influential components causing uncertainty in heavy metal analysis. Therefore, more careful consideration is required in these steps to reduce the uncertainty of heavy metal analysis in tomato paste.
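
The combination step can be illustrated with a short worked sketch of the GUM procedure: independent relative standard uncertainties are combined in quadrature and scaled by the coverage factor k. The component values below are illustrative, not the paper's figures; only the measured value 0.106 mg/kg and k = 2.09 are taken from the abstract.

```python
# Worked sketch: GUM-style combined and expanded uncertainty.
import math

value = 0.106                       # measured Cd concentration, mg/kg
components = {                      # relative standard uncertainties u_i/x_i
    "calibration curve": 0.040,
    "recovery":          0.045,
    "standard solution": 0.030,
    "repeatability":     0.020,
    "sample weight":     0.002,
}
# Combined relative standard uncertainty: root sum of squares
u_rel = math.sqrt(sum(u ** 2 for u in components.values()))
k = 2.09                            # coverage factor for ~95 % confidence
expanded = k * u_rel * value
print(f"{value} +/- {expanded:.3f} mg/kg (k={k})")
```

Note how the quadrature sum is dominated by the largest components (here recovery and the calibration curve), which is exactly why the paper singles those steps out.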

A comparison study of 76Se, 77Se and 78Se isotope spikes in isotope dilution method for Se (셀레늄의 동위원소 희석분석법에서 첨가 스파이크 동위원소 76Se, 77Se 및 78Se들의 비교분석)

  • Kim, Leewon;Lee, Seoyoung;Pak, Yong-Nam
    • Analytical Science and Technology
    • /
    • v.29 no.4
    • /
    • pp.170-178
    • /
    • 2016
  • The accuracy and precision of isotope dilution (ID) methods with different spike isotopes, 76Se, 77Se, and 78Se, were compared for the analysis of selenium using a quadrupole ICP/MS equipped with an octopole reaction cell (ORC). In the analysis of an inorganic Se standard solution, all three spikes showed less than 1 % error and 1 % RSD for both short-term (a day) and long-term (several months) periods; they gave similar results, with 78Se slightly better than 76Se and 77Se. However, the spikes gave different results when NIST SRM 1568a and SRM 2967 were analyzed, because of several interferences on the m/z values measured and calculated. Interferences due to the generation of SeH in the ORC were considered, as well as As and Br in the matrix. For SRM 1568a, which has a simple background matrix, all three spikes showed similar accuracy and precision, and the recovery rate was steady at about 80 %. The %RSD was somewhat higher than for the inorganic standard (1.8 %, 8.6 %, and 6.3 % for 78Se, 76Se and 77Se, respectively) but low enough to conclude that the experiment is reliable. However, mussel tissue (SRM 2967), which has a complex matrix, gave inaccurate results with the 78Se spike (over 100 % RSD), whereas 76Se and 77Se gave relatively good recovery rates of around 98.6 % and 104.2 %. The errors were less than 5 %, but the precision was somewhat poorer, about 15 % RSD. This clearly shows that the Br interferences are so large that a simple mathematical calibration is not enough for a complex-matrixed sample. In conclusion, all three spikes give similar results when the matrix is simple, but 78Se should be avoided when a large amount of Br exists in the matrix; either 76Se or 77Se will provide accurate results.
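
For reference, the standard isotope dilution (IDMS) equation that such spike comparisons rest on can be sketched as follows. The spike enrichment, abundances, and measured ratio below are placeholders rather than the paper's values; the natural abundances of 78Se (~23.8 %) and 77Se (~7.6 %) are public reference data.

```python
# Sketch: standard isotope dilution equation, e.g. an enriched 77Se spike
# measured against 78Se as the reference isotope.
def idms_concentration(c_spike, m_spike, m_sample,
                       a_ref_sample, a_spike_sample,   # sample abundances
                       a_ref_spike, a_spike_spike,     # spike abundances
                       r_measured):                    # measured ref/spike ratio
    """Sample concentration from the measured isotope-amount ratio."""
    return (c_spike * m_spike / m_sample
            * (a_ref_spike - r_measured * a_spike_spike)
            / (r_measured * a_spike_sample - a_ref_sample))

# Hypothetical numbers: 0.5 g of a 1.0 mg/kg enriched 77Se spike added to
# 1.0 g of sample, measured blend ratio 78Se/77Se = 0.35.
print(idms_concentration(c_spike=1.0, m_spike=0.5, m_sample=1.0,
                         a_ref_sample=0.2377, a_spike_sample=0.0763,
                         a_ref_spike=0.002, a_spike_spike=0.998,
                         r_measured=0.35))
```

Any interference that biases the measured ratio (SeH, Br species on the monitored m/z) propagates directly into the computed concentration, which is why the choice of spike isotope matters in a Br-rich matrix.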

The Effect of Price Promotional Information about Brand on Consumer's Quality Perception: Conditioning on Pretrial Brand (품패개격촉소신식대소비자질량인지적영향(品牌价格促销信息对消费者质量认知的影响))

  • Lee, Min-Hoon;Lim, Hang-Seop
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.3
    • /
    • pp.17-27
    • /
    • 2009
  • Price promotion typically reduces the price for a given quantity or increases the quantity available at the same price, thereby enhancing value and creating an economic incentive to purchase. It is often used to encourage product or service trial among nonusers. Thus, it is important to understand the effects of price promotions on the quality perceptions of consumers who have no prior experience with the promoted brand. If consumers associate a price promotion itself with inferior brand quality, the promotion may not achieve the sales increase the economic incentives otherwise might have produced. More specifically, an unfavorable quality perception induced by the price promotion will undercut the economic and psychological incentives and reduce the likelihood of purchase. It is therefore important for marketers to understand how price promotional information about a brand affects consumers' unfavorable quality perceptions of it. Previous literature on the effects of price promotions on quality perception reveals inconsistent explanations. Some studies focused on the unfavorable effect of price promotion on consumers' perceptions, while others showed that price promotions did not raise unfavorable perceptions of the brand. Prior research related these inconsistent results to the timing of the promotion's exposure and of the quality evaluation relative to trial, and whether the consumer has experienced the product's promotions in the past may moderate the effects. A few studies considered differences among product categories as fundamental factors. The purpose of this research is to investigate the effect of price promotional information on consumers' unfavorable quality perceptions under different conditions. The author controlled the timing of the promotional exposure and varied past promotional patterns and information-presenting patterns. Unlike previous research, the author examined the effects of price promotions restricted to the pretrial situation by controlling the potentially moderating effect of prior personal experience with the brand. This manipulation makes it possible to resolve possible controversies on this issue, and it is also meaningful for practitioners, since price promotion is used not only to target existing consumers but also to encourage trial among nonusers, and once a price promotion ends, consumers who purchased the brand are likely to show sharply decreased repurchasing behavior. Through a literature review, Hypothesis 1 was set as follows to investigate the moderating effect of past price promotion on consumers' quality perception: the influence that a price promotion of an unused brand has on consumers' quality perception will be moderated by the brand's past price promotion activity. In other words, a price promotion of an unused brand that has not run a price promotion in the past will have an unfavorable effect on consumers' quality perception.
Hypothesis 2-1 was set as follows: when an unused brand undertakes a price promotion for the first time, the way the price promotional information is presented will affect the consumer's attribution of the cause of the promotion. Hypothesis 2-2 was set as follows: the more the consumer makes a dispositional attribution for the cause of the price promotion, the more unfavorable the consumer's quality perception will be. In Test 1, the subjects were given a brief explanation of the product and the brand before being assigned to a 2×2 factorial design with four price promotion patterns (presence or absence of past price promotion × presence or absence of current price promotion) and an explanation describing the promotion pattern of each cell. The perceived quality of the imaginary brand WAVEX was then evaluated on a 7-point scale. Tennis rackets were chosen because the selected product group needed to have had almost no past price promotions, in order to eliminate the influence of the average frequency of promotion on the value of price promotional information, as Raghubir and Corfman (1999) pointed out. Test 2 was carried out on students of the same management faculty as Test 1, again with tennis rackets as the product group. As in Test 1, subjects with average familiarity with the product group and low familiarity with the brand were selected. Each subject was assigned to one of two cells representing different ways of presenting the price promotional information for WAVEX (the reason behind the price promotion provided vs. not provided). Subjects looked at the promotional information before evaluating the perceived quality of WAVEX on a 7-point scale. The effect of a price promotion for an unfamiliar pretrial brand on consumers' perceived quality proved to be moderated by the presence or absence of past price promotion. Consistency with past promotional behavior is an important variable that worsens the unfavorable effect on brand evaluations: if a price promotion for the brand has never been carried out before, the promotion may have a more unfavorable effect on consumers' quality perception. Second, when the price promotion of an unfamiliar pretrial brand is executed for the first time, the way the information is presented affects the consumer's attribution of the cause of the firm's promotion, and the unfavorable effect on quality perception is greater when the consumer makes a dispositional rather than a situational attribution. Unlike previous studies whose main focus was the presence or absence of favorable or unfavorable motivation from situational or dispositional attribution, this study focused on the fact that a situational attribution can be induced, even when the consumer would otherwise make a dispositional attribution about the price promotional behavior, if the company provides a persuasive reason. Such an approach has significance from an academic perspective in that it explains the anchoring and adjustment procedures by applying them to a non-mathematical problem, unlike previous studies, where they were traditionally explained with mathematical problems.
In other words, there is a high tendency to attribute others' behaviors dispositionally, in line with the fundamental attribution error, and when this is applied to price promotions, we can infer that consumers are likely to attribute the company's price promotion behaviors dispositionally. However, even under these circumstances, the company can adjust the consumer's anchoring to reduce the possibility of a dispositional attribution. Furthermore, unlike the majority of previous research on the short- and long-term effects of price promotion, which considered only the effect of price promotions on consumers' purchasing behaviors, this research measured the effect on perceived quality, one of many elements that affect consumers' purchasing behavior. These results carry useful implications for practitioners. The outcomes of this research suggest a guideline for effectively providing promotional information for a new brand: if the brand is to avoid false inferences such as inferior quality while implementing a price promotion strategy, it must provide a clear and acceptable reason for the promotion. Providing a clear reason is especially important for a company with no past price promotions, as inconsistent behavior can cause consumer distrust and anxiety and is one of the most important risk factors for endless price wars. Price promotions without prior notice can buy doubt from consumers, not market share.


Studies on the Derivation of the Instantaneous Unit Hydrograph for Small Watersheds of Main River Systems in Korea (한국주요빙계의 소유역에 대한 순간단위권 유도에 관한 연구 (I))

  • 이순혁
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.19 no.1
    • /
    • pp.4296-4311
    • /
    • 1977
  • This study was conducted to derive an instantaneous unit hydrograph (IUH) for an accurate and reliable unitgraph which can be used for the estimation and control of floods in the development of agricultural water resources and the rational design of hydraulic structures. Eight small watersheds were selected as study basins from the Han, Geum, Nakdong, Yeongsan and Inchon river systems, which may be considered the main river systems in Korea. The areas of the small watersheds are within the range of 85 to 470 km². An accurate IUH was derived under the condition of a short duration of heavy rain and uniform rainfall intensity, using basic and reliable data of rainfall records, pluviographs, and records of river stages of the main river systems mentioned above. The relations between the measurable unitgraph and watershed characteristics such as watershed area A, river length L, and centroid distance of the watershed area Lca were investigated. In particular, this study laid emphasis on the derivation and application of the IUH by applying Nash's conceptual model and using an electronic computer. The IUH by Nash's conceptual model and the IUH by flood routing, which can be applied to ungaged small watersheds, were derived and compared with each other against the observed unitgraph. The IUH for each small watershed can be solved by using an electronic computer. The results of these studies are summarized as follows. 1. A distribution of uniform rainfall intensity appears in the analysis of the temporal rainfall pattern of the selected heavy rainfall events. 2. The mean value of the recession constant K1 is 0.931 in all watersheds observed. 3. The time to peak discharge Tp occurs at the position of 0.02 Tb, the base length of the hydrograph, indicating a lower value than in larger watersheds. 4. The peak discharge Qp in relation to the watershed area A and effective rainfall R is found to be $Q_p = \frac{0.895}{A^{0.145}} AR$, with a highly significant correlation coefficient of 0.927 between peak discharge Qp and effective rainfall R. A design chart for the peak discharge (refer to Fig. 15) with watershed area and effective rainfall was established by the author. 5. The mean slopes of the main streams are within the range of 1.46 to 13.6 meters per kilometer, indicating higher slopes in the small watersheds than in larger watersheds. The lengths of the main streams are within the range of 9.4 to 41.75 kilometers, which can be regarded as short distances. It is remarkable that the time of flood concentration was more rapid in the small watersheds than in the other, larger watersheds. 6. The length of the main stream L in relation to the watershed area A is found to be $L = 2.044 A^{0.48}$, with a highly significant correlation coefficient of 0.968. 7. The watershed lag Lg in hours, in relation to the watershed area A and the length of the main stream L, was derived as $L_g = 3.228 A^{0.904} L^{-1.293}$ with high significance. On the other hand, it was found that the watershed lag Lg could also be expressed as $L_g = 0.247 \left( \frac{L L_{ca}}{\sqrt{S}} \right)^{0.604}$ in connection with $L L_{ca}$, the product of the main stream length and the centroid length of the basin, which can be regarded as a measure of the shape and size of the watershed together with the slope, without the watershed area A. But the latter showed a lower correlation than the former in the significance test.
Therefore, it can be concluded that the watershed lag Lg is more closely related with such watershed characteristics as watershed area and main stream length in the small watersheds. An empirical formula for the peak discharge per unit area qp (m³/sec/km²) was derived as $q_p = 10^{-0.389 - 0.0424 L_g}$ with high significance, r = 0.91. This indicates that the peak discharge per unit area of the unitgraph is in inverse proportion to the watershed lag time. 8. The base length of the unitgraph Tb in connection with the watershed lag Lg was expressed as $T_b = 1.14 + 0.564 \left( \frac{L_g}{24} \right)$, which was defined with high significance. 9. For the derivation of the IUH by applying the linear conceptual model, the storage constant K, with the length of the main stream L and slope S, was adopted as $K = 0.1197 \left( \frac{L}{\sqrt{S}} \right)$ with a highly significant correlation coefficient of 0.90. The gamma function argument N, derived from such watershed characteristics as watershed area A, river length L, centroid distance Lca, and slope S, was found to be $N = 49.2\, A^{1.481} L^{-2.202} L_{ca}^{-1.297} S^{-0.112}$ with high significance, having an F value of 4.83 through analysis of variance. 10. According to the linear conceptual model, the formulas established for the time distribution, peak discharge and time to peak discharge of the IUH, when the unit effective rainfall of the unitgraph is 10 mm and the watershed area is in km², are as follows. Time distribution of the IUH: $u(0,t) = \frac{2.78A}{K\Gamma(N)} e^{-t/K} \left( \frac{t}{K} \right)^{N-1}$ (m³/sec). Peak discharge of the IUH: $u(0,t)_{max} = \frac{2.78A}{K\Gamma(N)} e^{-(N-1)} (N-1)^{N-1}$ (m³/sec). Time to peak discharge of the IUH: $t_p = (N-1)K$ (hrs). 11. Through mathematical analysis of the recession curve of the hydrograph, it was confirmed that the empirical formula for the gamma function argument N is connected with the recession constant K1, peak discharge Qp, and time to peak discharge tp as $\frac{K'}{t_p} = \frac{1}{N-1} - \frac{\ln(t/t_p)}{\ln(Q/Q_p)}$, where $K' = \frac{1}{\ln K_1}$. 12. Linking the empirical formulas for the storage constant K and the gamma function argument N into closer relation with each other, the derivation of the unit hydrograph for ungaged small watersheds can be established with the following formulas for the time distribution and peak discharge of the IUH. Time distribution of the IUH: $u(0,t) = 23.2\, A L^{-1} S^{1/2} F(N,K,t)$ (m³/sec), where $F(N,K,t) = \frac{e^{-t/K} (t/K)^{N-1}}{\Gamma(N)}$. Peak discharge of the IUH: $u(0,t)_{max} = 23.2\, A L^{-1} S^{1/2} F(N)$ (m³/sec), where $F(N) = \frac{e^{-(N-1)} (N-1)^{N-1}}{\Gamma(N)}$. 13. The base length of the time-area diagram for the IUH was given by $C = 0.778 \left( \frac{L L_{ca}}{\sqrt{S}} \right)^{0.423}$ with a correlation coefficient of 0.85, indicating its relation to the main stream length L, the centroid distance Lca, and the slope S. 14. The relative errors in the peak discharge of the IUH by the linear conceptual model and the IUH by routing were 2.5 and 16.9 percent, respectively, against the peak of the observed unitgraph. Therefore, it was confirmed that the IUH using the linear conceptual model approximates the observed unitgraph more closely than the flood routing does in the small watersheds.
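
As a small worked example of result 10, the following sketch evaluates the Nash-model IUH ordinate and its peak. The watershed parameters A, K, and N are illustrative only, not values from the paper.

```python
# Sketch: Nash-model IUH from the abstract,
# u(0,t) = 2.78 A / (K Gamma(N)) * exp(-t/K) * (t/K)^(N-1),
# with the peak at t_p = (N-1) K.
import math

def nash_iuh(t, area_km2, K, N):
    """Nash IUH ordinate (m^3/s per 10 mm of effective rainfall)."""
    return (2.78 * area_km2 / (K * math.gamma(N))
            * math.exp(-t / K) * (t / K) ** (N - 1))

A, K, N = 200.0, 3.0, 2.5            # hypothetical watershed parameters
tp = (N - 1) * K                     # time to peak, hours
print(f"t_p = {tp:.1f} h, u_max = {nash_iuh(tp, A, K, N):.1f} m^3/s")
```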


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rely on strict assumptions, including linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression analysis method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically, and it leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, the difficulty in multi-class prediction problems lies in the data imbalance problem that occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing the classification accuracy. SVM ensemble learning is one machine learning approach to cope with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on the misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Thus boosting attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) algorithm to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, results were obtained for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differs significantly. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
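
The geometric-mean notion at the heart of MGM-Boost can be illustrated with a short sketch: per-class recalls are combined multiplicatively, so a classifier that ignores a minority class scores near zero even when its arithmetic accuracy looks high. The labels below are illustrative, not the paper's bond rating data.

```python
# Sketch: geometric mean-based accuracy versus arithmetic accuracy on an
# imbalanced two-class example.
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls (multi-class G-mean)."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(classes)))

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # imbalanced classes
y_maj = np.zeros(10, dtype=int)                     # always predicts majority
print(np.mean(y_true == y_maj))                     # arithmetic accuracy: 0.8
print(geometric_mean_accuracy(y_true, y_maj))       # G-mean: 0.0
```

Plugging this measure into AdaBoost's weight updates, as the abstract describes, steers the ensemble toward classifiers that also get the minority rating classes right.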