• Title/Summary/Keyword: minimal process

Search Results: 571

Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn Jason W.;Mansur David B.;Monroe James I.;Drzymala Robert E.;Jin Ho-Sang;Suh Tae-Suk;Dempsey James F.;Klein Eric E.
    • Progress in Medical Physics
    • /
    • v.17 no.1
    • /
    • pp.24-31
    • /
    • 2006
  • Automated analysis software was developed to measure the magnitude of the intrafractional and interfractional errors during breast radiation treatments. Error analysis results are important for determining suitable planning target volumes (PTV) prior to implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, 256×256 pixels). Twelve patients were imaged for a minimum of 7 treatment days. During each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations (σ) of intrafractional (breathing motion) and interfractional (setup uncertainty) errors were calculated. The PTV margin required to include the clinical target volume (CTV) with 95% confidence was calculated as 2 × (1.96σ). To compensate for intrafractional error (mainly due to breathing motion), the required PTV margin ranged from 2 mm to 4 mm. However, PTV margins compensating for interfractional error ranged from 7 mm to 31 mm. The total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Prior to 3-D conformal radiation treatment or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce large PTVs for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software as described in this report to process and take advantage of the large numbers of EPID images available for error analysis, which will help individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient monitoring information with minimal effort. (A brief numerical sketch of the margin calculation follows this entry.)

  • PDF
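The sketch below illustrates the PTV margin formula reported in the abstract: the sample standard deviation of measured displacements and the corresponding 95%-confidence margin 2 × (1.96σ). It is an illustrative aid only; the displacement values and the function name are invented, not data or code from the paper.

```python
import numpy as np

def ptv_margin_mm(displacements_mm):
    """PTV margin 2 * (1.96 * sigma) covering the CTV with ~95% confidence,
    where sigma is the standard deviation of the measured positioning errors."""
    sigma = np.std(displacements_mm, ddof=1)  # sample standard deviation
    return 2.0 * 1.96 * sigma

# Hypothetical interfractional setup offsets (mm) for one patient
setup_offsets_mm = [3.1, -4.2, 6.0, -1.5, 2.8, -5.3, 4.4]
print(f"PTV margin: {ptv_margin_mm(setup_offsets_mm):.1f} mm")
```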

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach is to use application specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested. Both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, and researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer. The programmer treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality; many aspects of the design are limited or fixed. We have proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules by the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process; it usually runs only a single program or a few programs, so tailoring an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: the ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.
    Table I. Inference time using 51 rules:
      • MIPS R3000 (regular): 125 s for 6,000 inferences; 20.8 ms per inference; 48 FLIPS
      • MIPS R3000 with min/max instructions: 49 s for 6,000 inferences; 8.2 ms per inference; 122 FLIPS
      • UNC/MCNC ASIC chip: 0.0038 s for 6,000 inferences; 6.4 µs per inference; 156,250 FLIPS
    (A short illustrative sketch of max-min inference with Mamdani implication follows this entry.)

  • PDF
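As a rough illustration of the inference mechanism the chips implement, the sketch below performs max-min compositional inference with Mamdani implication for two toy rules over fuzzy sets discretized to 64 elements (the array size mentioned in the abstract), followed by centroid defuzzification. The rule firing strengths, membership functions, and function names are invented for illustration and do not reproduce the chip's datapath.

```python
import numpy as np

# Discretized universe for the output variable (64 elements, as on the ASIC chips)
y = np.linspace(0.0, 1.0, 64)

# Hypothetical output membership functions for two rule consequents
out_low  = np.clip(1.0 - 2.0 * y, 0.0, 1.0)
out_high = np.clip(2.0 * y - 1.0, 0.0, 1.0)

def mamdani_infer(firing_strengths, consequents):
    """Max-min compositional inference: clip (min) each consequent by its
    rule's firing strength, then aggregate (max) across rules."""
    clipped = [np.minimum(w, c) for w, c in zip(firing_strengths, consequents)]
    return np.maximum.reduce(clipped)

def centroid_defuzzify(universe, membership):
    """Centroid defuzzification, as performed on-chip by the UNC/MCNC design."""
    return np.sum(universe * membership) / np.sum(membership)

# Hypothetical firing strengths from the antecedent min-operations of two rules
w = [0.7, 0.3]
aggregate = mamdani_infer(w, [out_low, out_high])
print(f"crisp output = {centroid_defuzzify(y, aggregate):.3f}")
```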

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods used to handle big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many more computations, which can cause high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, ranging from simply lessening the noise of the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information from data, are also utilized. For improving performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm identifies words that are not important, we assume that words similar to those words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and build word embeddings. Second, we additionally select words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed into the deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews on Kindle at Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose helpful-vote ratio exceeded 70% were classified as helpful reviews. Because Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text, was applied to each dataset. To evaluate the proposed methods, we compared their performance with Word2Vec and GloVe embeddings built using all the words. We showed that one of the proposed methods outperforms the embeddings built with all the words: by removing unimportant words, we can obtain better performance. However, removing too many words lowered performance.
For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity among words should be considered. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed methods, making it possible to examine the possible combinations of word embedding and elimination methods.
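A minimal sketch of the elimination step described above, under assumed thresholds and toy data: information gain is approximated with scikit-learn's mutual information between bag-of-words features and labels, and words similar to the low-gain words are found via a Word2Vec model's cosine similarity. The corpus, cutoffs, and variable names are illustrative, not the paper's actual settings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

# Toy labeled corpus (placeholder for the Amazon/IMDB/Yelp reviews)
texts = ["great battery life and screen", "poor build quality very bad",
         "excellent value great screen", "bad battery terrible support"]
labels = [1, 0, 1, 0]

# 1) Information gain (approximated by mutual information) for each word
vec = CountVectorizer()
X = vec.fit_transform(texts)
ig = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
vocab = np.array(vec.get_feature_names_out())
low_ig_words = set(vocab[ig < np.percentile(ig, 30)])  # assumed cutoff

# 2) Expand the removal set with words similar to the low-gain words
tokens = [t.split() for t in texts]
w2v = Word2Vec(tokens, vector_size=50, min_count=1, window=3, seed=0)
similar = {w for word in low_ig_words if word in w2v.wv
           for w, s in w2v.wv.most_similar(word, topn=3) if s > 0.5}
to_remove = low_ig_words | similar

# 3) Filter the corpus before building the final embedding / classifier input
filtered = [" ".join(t for t in doc if t not in to_remove) for doc in tokens]
print(filtered)
```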

Automated patient set-up using intensity based image registration in proton therapy (양성자 치료 시 Intensity 기반의 영상 정합을 이용한 환자 자동화 Set up 적용 방법)

  • Jang, Hoon;Kim, Ho Sik;Choe, Seung Oh;Kim, Eun Suk;Jeong, Jong Hyi;Ahn, Sang Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.97-105
    • /
    • 2018
  • Purpose: Because proton therapy uses the Bragg peak to deliver the maximum dose to the tumor and a minimal dose to normal tissue, a medical imaging system that can quantify changes in patient position or treatment area is of paramount importance for proton treatment. The purpose of this research is to evaluate the usefulness of the algorithm by developing a MATLAB-based in-house registration code, comparing its image matching against the set-up performed with the existing DIPS program, and determining the error between the DIPS images and the DRR to assess the accuracy of the existing treatment. Materials and Methods: Thirteen patients with brain tumors or head and neck cancer who received proton therapy were included in this study. The DIPS program system (Version 2.4.3, IBA, Belgium) was used for image comparison and the Eclipse proton planning system (Version 13.7, Varian, USA) for patient treatment planning. For validation of the registration method, a test image was artificially rotated and translated and then matched to the original image; in addition, the initial set-up image from the DIPS program of the existing set-up process was matched with the planning DRR, the error values were obtained, and the usefulness of the algorithm was evaluated. Results: When the test image was moved 0.5, 1, and 10 cm in the left and right directions, the average error was 0.018 cm. When the test image was rotated counterclockwise by 1° and 10°, the error was 0.0011°. When the initial images of four patients were matched, the mean errors were 0.056, 0.044, and 0.053 cm in the order of x, y, and z, and 0.190° and 0.206° for rotation and pitch. When the final images of 13 patients were matched, the mean differences were 0.062, 0.085, and 0.074 cm in the order of x, y, and z (0.120 cm as the vector value), and rotation and pitch were 0.171° and 0.174°, respectively. Conclusion: The MATLAB-based in-house registration code produced through this study showed accurate intensity-based image matching for both simple images and anatomical structures. In addition, the set-up error relative to the DIPS program of the existing treatment method showed only a very slight difference, confirming the accuracy of the proton therapy. Development of additional programs and further research on the intensity-based MATLAB in-house code will be necessary for future clinical applications. (A simplified illustration of intensity-based registration follows this entry.)

  • PDF
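As a simplified illustration of intensity-based rigid registration of the kind described above (not the authors' MATLAB code), the sketch below finds the integer-pixel translation that minimizes the mean-squared intensity difference between a fixed and a moving image; the images and search range are synthetic.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=10):
    """Exhaustive search for the (dy, dx) shift applied to `moving` that
    minimizes the mean-squared intensity difference with `fixed`."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mse = np.mean((fixed - shifted) ** 2)
            if mse < best:
                best, best_shift = mse, (dy, dx)
    return best_shift, best

# Synthetic test: a bright square, and the same square shifted by (-3, +5) pixels
fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
moving = np.roll(np.roll(fixed, -3, axis=0), 5, axis=1)
shift, mse = register_translation(fixed, moving)
print(f"recovered shift (dy, dx) = {shift}, residual MSE = {mse:.4f}")
```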

Examining the Relationship Among Restaurant Brand Relationship Quality, Attribution, and Emotional Response After Service Failure Experience (서비스 실패 경험 후 레스토랑 브랜드 품질, 귀인 및 감정반응 관계분석)

  • Jang, Gi-Hwa;Song, Soo-Ik;Oh, Sung-Cheon
    • Journal of the Korean Applied Science and Technology
    • /
    • v.35 no.4
    • /
    • pp.1120-1133
    • /
    • 2018
  • The purpose of this study is to validate the failure attribution factors affecting emotional changes after a failed service experienced by local restaurant users, and the effects of the customer's perceived brand relationship quality (BRQ) on responses to the perceived failure. A theoretical implication of this study is that prior research on the relationship between the individual beliefs that influence attribution and the post-failure emotional response is minimal; the distinctness of the two failure responses (anger vs. sympathy) was likewise validated by building a three-step belief-attribution-emotion model. First, emotional BRQ (intimacy and love) has a reducing effect on controllable attributions, while behavioral BRQ (interdependence) has an amplifying effect on controllable attributions. From a management perspective, restaurant managers should lower customers' perception that a service failure is repeatable and appeal to customer sympathy. Taken together, restaurant managers must manage the customer's perception of service failure and moderate the impact of the customer's BRQ on emotional reactions. A variety of service recovery measures should be established, and customers' attributions should be managed. In addition, since the BRQ dimensions have different (amplifying vs. reducing) effects on anger and sympathy, different service failure recovery plans should be presented depending on the characteristics of the customer's BRQ. For example, measures such as monetary compensation or fair treatment, emotional support for intimate and loving customers, and persuasion based on reciprocal benefits for interdependent customers should be developed according to circumstances. This study explored the effects of attributions after a service failure and has the limitation that it does not take into account various moderating factors in the BRQ-attribution-emotion process. Thus, further studies should consider the moderating effect of service failure severity and build a more complete model.

Application of Environmental Friendly Bio-adsorbent based on a Plant Root for Copper Recovery Compared to the Synthetic Resin (구리 회수를 위한 식물뿌리 기반 친환경 바이오 흡착제의 적용 - 합성수지와의 비교)

  • Bawkar, Shilpa K.;Jha, Manis K.;Choubey, Pankaj K.;Parween, Rukshana;Panda, Rekha;Singh, Pramod K.;Lee, Jae-chun
    • Resources Recycling
    • /
    • v.31 no.4
    • /
    • pp.56-65
    • /
    • 2022
  • Copper is one of the non-ferrous metals used in the electrical/electronic manufacturing industries due to its superior properties, particularly its high conductivity and low resistivity. The effluent generated from the surface finishing processes of these industries contains a high copper content and is discharged into water bodies directly or indirectly. This causes severe environmental pollution and also results in the loss of an important valuable metal. To overcome this issue, continuous R&D activities are going on across the globe in the adsorption area with the purpose of finding efficient, low-cost, and eco-friendly adsorbents. In view of the above, the present investigation compared the performance of a plant root (Datura root powder) as a bio-adsorbent with that of a synthetic resin (Tulsion T-42) for copper adsorption from such effluent. Experiments were carried out in batch studies to optimize parameters such as adsorbent dose, contact time, pH, and feed concentration. Results of the batch experiments indicate that 0.2 g of Datura root powder and 0.1 g of Tulsion T-42 showed 95% copper adsorption from an initial feed solution of 100 ppm Cu at pH 4 in contact times of 15 and 30 min, respectively. Adsorption data for both adsorbents were fitted well by the Freundlich isotherm. Experimental results were also validated with kinetic models, which showed that the adsorption of copper followed a pseudo-second-order rate expression for both adsorbents. The overall results demonstrate that the tested bio-adsorbent has potential applicability for metal recovery from the waste solutions/effluents of metal finishing units. In view of the requirements of commercial viability and minimal environmental damage, Datura root powder, being an effective material for metal uptake, may prove to be a feasible adsorbent for copper recovery after the necessary scale-up studies.
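As a brief illustration of the isotherm fitting mentioned above, the sketch below fits the Freundlich model q_e = K_F · C_e^(1/n) to hypothetical equilibrium data with SciPy; the data points and parameter values are invented, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n):
    """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
    return kf * ce ** (1.0 / n)

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g) for one adsorbent
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([6.1, 9.8, 15.5, 24.7, 39.0])

(kf, n), _ = curve_fit(freundlich, ce, qe, p0=(1.0, 1.0))
pred = freundlich(ce, kf, n)
r2 = 1 - np.sum((qe - pred) ** 2) / np.sum((qe - qe.mean()) ** 2)
print(f"Kf = {kf:.2f}, 1/n = {1/n:.2f}, R^2 = {r2:.3f}")
```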

A Review of the Influence of Sulfate and Sulfide on the Deep Geological Disposal of High-level Radioactive Waste (고준위방사성폐기물 심층처분에 미치는 황산염과 황화물의 영향에 대한 고찰)

  • Jin-Seok Kim;Seung Yeop Lee;Sang-Ho Lee;Jang-Soon Kwon
    • Economic and Environmental Geology
    • /
    • v.56 no.4
    • /
    • pp.421-433
    • /
    • 2023
  • The final disposal of spent nuclear fuel (SNF) from nuclear power plants takes place in a deep geological repository. The metal canister encasing the SNF is made of cast iron and copper, and is engineered to effectively isolate radioactive isotopes for a long period of time. The SNF is further shielded by a multi-barrier disposal system comprising both engineered and natural barriers. The deep disposal environment gradually changes to an anaerobic reducing environment. In this environment, sulfide is one of the most probable substances to induce corrosion of the copper canister. Stress-corrosion cracking (SCC) triggered by sulfide can carry substantial implications for the integrity of the copper canister, potentially posing a significant threat to the long-term safety of the deep disposal repository. Sulfate can exist in various forms within the deep disposal environment or be introduced from the geosphere. Sulfate has the potential to be transformed into sulfide by sulfate-reducing bacteria (SRB), and this converted sulfide can contribute to the corrosion of the copper canister. Bentonite, which is considered a potential material for buffering and backfilling, contains oxidized sulfate minerals such as gypsum (CaSO4). If there is sufficient space for microorganisms to thrive in the deep disposal environment and if electron donors such as organic carbon are adequately supplied, sulfate can be converted to sulfide through microbial activity. However, the majority of the sulfides generated in the deep disposal system or introduced from the geosphere will be intercepted by the buffer, with only a small amount reaching the metal canister. Pyrite, one of the potential sulfide minerals present in the deep disposal environment, can generate sulfates during the dissolution process, thereby contributing to the corrosion of the copper canister. However, the quantity of oxidation byproducts from pyrite is anticipated to be minimal due to its extremely low solubility. Moreover, the migration of these oxidized byproducts to the metal canister will be restricted by the low hydraulic conductivity of saturated bentonite. We have comprehensively analyzed and summarized key research cases related to the presence of sulfates, reduction processes, and the formation and behavior characteristics of sulfides and pyrite in the deep disposal environment. Our objective was to gain an understanding of the impact of sulfates and sulfides on the long-term safety of a high-level radioactive waste disposal repository.

Analysis of Uncertainty in Ocean Color Products by Water Vapor Vertical Profile (수증기 연직 분포에 의한 GOCI-II 해색 산출물 오차 분석)

  • Kyeong-Sang Lee;Sujung Bae;Eunkyung Lee;Jae-Hyun Ahn
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1591-1604
    • /
    • 2023
  • In ocean color remote sensing, atmospheric correction is a vital process for ensuring the accuracy and reliability of ocean color products. Furthermore, in recent years the remote sensing community has intensified its requirements for understanding errors in satellite data. Accordingly, research is currently addressing errors in remote sensing reflectance (Rrs) resulting from inaccuracies in the meteorological variables (total ozone, pressure, wind field, and total precipitable water) used as auxiliary data for atmospheric correction. However, there has been no investigation into the error in Rrs caused by the variability of the water vapor profile, despite it being a recognized error source. In this study, we used the Second Simulation of a Satellite Signal in the Solar Spectrum, Vector (6SV) version 2.1 radiative transfer code to compute errors in water vapor transmittance arising from variations in the water vapor profile within the GOCI-II observation area, and then analyzed the associated errors in ocean color products. The observed water vapor profiles not only exhibited complex shapes but also showed significant variations near the surface, leading to differences of up to 0.007 compared to the US Standard 62 water vapor profile used in the GOCI-II atmospheric correction. The resulting variation in water vapor transmittance led to a difference in the aerosol reflectance estimation, consequently introducing errors in Rrs across all GOCI-II bands. However, the error in Rrs in the 412-555 nm bands due to the difference in the water vapor profile was found to be below 2%, which is lower than the required accuracy. Similar errors were found in other ocean color products such as chlorophyll-a concentration, colored dissolved organic matter, and total suspended matter concentration. The results of this study indicate that the variability in water vapor profiles has minimal impact on the accuracy of atmospheric correction and ocean color products. Therefore, improving the accuracy of the input data related to the water vapor column concentration is more critical for enhancing the accuracy of ocean color products in terms of water vapor absorption correction.
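To make the error propagation concrete, the toy sketch below shows how a small difference in the assumed water vapor transmittance translates into a relative error in retrieved Rrs, under a deliberately simplified correction model in which the retrieved Rrs scales with the ratio of true to assumed transmittance. The numbers are illustrative only and do not come from the GOCI-II processing chain.

```python
# Simplified illustration: if the atmospheric correction divides the
# water-leaving signal by an assumed water vapor transmittance t_assumed,
# while the true transmittance is t_true, the retrieved Rrs is scaled by
# t_true / t_assumed. Both values below are hypothetical.
t_true = 0.958      # transmittance for an observed (complex) profile
t_assumed = 0.965   # transmittance for the US Standard 62 profile

scale = t_true / t_assumed
relative_error_pct = (scale - 1.0) * 100.0
print(f"relative Rrs error: {relative_error_pct:+.2f}%")
```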

From a Defecation Alert System to a Smart Bottle: Understanding Lean Startup Methodology from the Case of Startup "L" (배변알리미에서 스마트바틀 출시까지: 스타트업 L사 사례로 본 린 스타트업 실천방안)

  • Sunkyung Park;Ju-Young Park
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.5
    • /
    • pp.91-107
    • /
    • 2023
  • Lean startup is a concept that combines the words "lean," meaning an efficient way of running a business, and "startup," meaning a new business. It is often cited as a strategy for minimizing failure in early-stage businesses, especially in software-based startups. By scrutinizing the case of startup L, this study suggests that the lean startup methodology (LSM) can be useful for hardware and manufacturing companies and identifies ways for early startups to successfully implement LSM. To this end, the study explains the core of LSM, including the concepts of the hypothesis-driven approach, the build-measure-learn (BML) feedback loop, the minimum viable product (MVP), and the pivot. Five criteria to evaluate the successful implementation of LSM were derived from these core concepts and applied to the case of startup L. The early startup L pivoted its main business model from a defecation alert system for patients with limited mobility to one for infants and toddlers, and finally to a smart bottle for infants. Analyzed from LSM's perspective, in developing the former two products company L neither established a specific customer value proposition for its startup idea nor verified it through MVP experiments, and thus failed to create a BML feedback loop. However, through two rounds of pivots, startup L discovered new target customers and customer needs, and was able to establish a successful business model by repeatedly experimenting with MVPs with minimal effort and time. In other words, company L's case shows that it is essential to go through the customer-market validation stage at the beginning of the business, and that it should be done through an MVP approach that does not waste the startup's time and resources. It also shows that it is necessary to abandon and pivot away from a product or service that customers do not want, even if it is technically superior and functionally complete. Lastly, the study shows that the lean startup methodology is not limited to the software industry but can also be applied to the technology-based hardware industry. The findings of this study can be used as guidelines and methodologies for early-stage companies to minimize failures and to accelerate the process of establishing a business model, scaling up, and going global.

  • PDF

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rely on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression analysis method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur with SVM. Furthermore, SVM does not require many data samples for training since it builds prediction models using only some representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used for effective multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem that occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to a skewed boundary, thus reducing the classification accuracy of the classifier. SVM ensemble learning is one machine learning approach to cope with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus, boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross validation is performed three times with different random seeds in order to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and then each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
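A small sketch of the evaluation protocol described above, under assumed data: a multiclass SVM is scored with 10-fold cross-validation using both arithmetic accuracy and the geometric mean of per-class recalls (the notion MGM-Boost builds on). The boosting variant itself is not reproduced here, and the dataset is a stand-in, not the Korean bond rating data.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

# Stand-in multiclass dataset (the study used Korean corporate bond ratings)
X, y = load_wine(return_X_y=True)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)

arithmetic_acc = accuracy_score(y, y_pred)
per_class_recall = recall_score(y, y_pred, average=None)
geometric_acc = np.prod(per_class_recall) ** (1.0 / len(per_class_recall))

print(f"arithmetic-mean accuracy: {arithmetic_acc:.3f}")
print(f"geometric-mean accuracy:  {geometric_acc:.3f}")
```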