• Title/Summary/Keyword: 4 parameter method


Improvement in Regional Contractility of Myocardium after CABG (관상동맥 우회로 수술 환자에서 심근의 탄성도 변화)

  • Lee, Byeong-Il;Paeng, Jin-Chul;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.4
    • /
    • pp.224-230
    • /
    • 2005
  • Purpose: The maximal elastance ($E_{max}$) of the myocardium has been established as a reliable load-independent contractility index. Recently, we developed a noninvasive method to measure regional contractility using gated myocardial SPECT and arterial tonometry data. In this study, we measured regional $E_{max}$ ($rE_{max}$) in patients who underwent coronary artery bypass graft surgery (CABG) and assessed its relationship with other variables. Materials and Methods: 21 patients (M:F = 17:4, $58{\pm}12$ y) who underwent CABG were enrolled. $^{201}$Tl rest / dipyridamole-stress $^{99m}$Tc-sestamibi gated SPECT was performed before and 3 months after CABG. For 15 myocardial regions, a regional time-elastance curve was obtained using the pressure data of tonometry and the volume data of gated SPECT. To investigate the coupling with myocardial function, preoperative regional $E_{max}$ was compared with regional perfusion and systolic thickening. In addition, the correlation between $E_{max}$ and viability was assessed in dysfunctional segments (thickening < 20% before CABG). Viability was defined as improvement of postoperative systolic thickening by more than 10%. Results: Regional $E_{max}$ increased slightly after CABG, from $2.41{\pm}1.64$ (pre) to $2.78{\pm}1.83$ (post) mmHg/ml. $E_{max}$ had a weak correlation with perfusion and thickening (r = 0.35, p < 0.001). In regions of preserved perfusion (${\geq}60\%$), $E_{max}$ was $2.65{\pm}1.67$ mmHg/ml, while it was $1.30{\pm}1.24$ mmHg/ml in segments of decreased perfusion. With regard to thickening, $E_{max}$ was $3.01{\pm}1.92$ mmHg/ml for normal regions (thickening ${\geq}40\%$), $2.40{\pm}1.19$ mmHg/ml for mildly dysfunctional regions ($<40\%$ and ${\geq}20\%$), and $1.13{\pm}0.89$ mmHg/ml for severely dysfunctional regions ($<20\%$). $E_{max}$ improved after CABG in both the viable (from $1.27{\pm}1.07$ to $1.79{\pm}1.48$ mmHg/ml) and the non-viable segments (from $0.97{\pm}0.59$ to $1.22{\pm}0.71$ mmHg/ml), but there was no correlation between $E_{max}$ and thickening improvement (r = 0.007). Conclusions: Preoperative regional $E_{max}$ was relatively concordant with regional perfusion and systolic thickening on gated myocardial SPECT. In dysfunctional but viable segments, $E_{max}$ improved after CABG but showed no correlation with thickening improvement. As a load-independent contractility index of dysfunctional myocardial segments, regional $E_{max}$ could serve as an independent parameter in the assessment of myocardial function.
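
For readers who want to see the elastance arithmetic, below is a minimal sketch of the load-independent index described above, assuming the usual definition $E(t) = P(t)/(V(t) - V_0)$ with an assumed unstressed volume $V_0$ (the abstract does not state the paper's exact formulation); the pressure and volume samples are hypothetical:

```python
import numpy as np

def regional_emax(pressure_mmHg, volume_ml, v0_ml=0.0):
    """Estimate maximal elastance E_max = max_t P(t) / (V(t) - V0).

    pressure_mmHg, volume_ml : arrays sampled over one cardiac cycle
    v0_ml : assumed unstressed (dead) volume of the region
    """
    p = np.asarray(pressure_mmHg, dtype=float)
    v = np.asarray(volume_ml, dtype=float)
    elastance = p / (v - v0_ml)          # time-elastance curve, mmHg/ml
    return elastance.max(), elastance

# Toy example: 16 gated frames of tonometry pressure and SPECT regional volume.
t = np.linspace(0, 1, 16, endpoint=False)
pressure = 80 + 40 * np.sin(np.pi * t) ** 2   # mmHg, hypothetical
volume = 120 - 50 * np.sin(np.pi * t) ** 2    # ml, hypothetical
e_max, e_t = regional_emax(pressure, volume, v0_ml=10.0)
print(f"E_max = {e_max:.2f} mmHg/ml")
```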

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies, but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults remained focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated only on certain headline variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid a sudden, total collapse like the 'Lehman Brothers case' of the global financial crisis. The key variables associated with corporate default vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables of Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Against the backdrop of the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that stays consistent as time changes, we first train a deep learning time series model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). The resulting model shows a pattern similar to the training results and excellent predictive power. Each bankruptcy prediction model is then rebuilt on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, the corporate default prediction models trained over those nine years are evaluated and compared on the test data (2009), which demonstrates the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, the logit model), we show that deep learning time series models based on the three bundles of variables are useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are then compared. Corporate data pose the problems of nonlinear variables, multicollinearity among variables, and lack of data: the logit model accommodates nonlinearity, the Lasso regression model mitigates multicollinearity, and the deep learning time series algorithm, together with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, ultimately, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and more effective in prediction power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
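
The abstract describes training an LSTM on yearly sequences of financial ratios to classify default. Below is a minimal Keras sketch of that setup under assumed shapes; the firm count, ratio count, and network sizes are illustrative, not the paper's:

```python
import numpy as np
from tensorflow import keras

# Hypothetical shapes: 7 years of history per firm, 14 financial ratios per year.
n_firms, n_years, n_ratios = 1000, 7, 14
X = np.random.rand(n_firms, n_years, n_ratios).astype("float32")
y = np.random.randint(0, 2, size=n_firms)        # 1 = default, 0 = solvent

model = keras.Sequential([
    keras.layers.Input(shape=(n_years, n_ratios)),
    keras.layers.LSTM(32),                        # time-series encoder
    keras.layers.Dense(1, activation="sigmoid"),  # default probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```

In the paper's design the validation split would be the crisis years (2007~2008) rather than a random split, so that tuning is done against the regime shift the model must survive.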

Geochemical Characteristics of the Gyeongju LILW Repository II. Rock and Mineral (중.저준위 방사성폐기물 처분부지의 지구화학 특성 II. 암석 및 광물)

  • Kim, Geon-Young;Koh, Yong-Kwon;Choi, Byoung-Young;Shin, Seon-Ho;Kim, Doo-Haeng
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.6 no.4
    • /
    • pp.307-327
    • /
    • 2008
  • A geochemical study of the rocks and minerals of the Gyeongju low- and intermediate-level waste repository was carried out in order to provide geochemical data for safety assessment and geochemical modeling. Polarized microscopy, X-ray diffraction, chemical analysis of major and trace elements, scanning electron microscopy (SEM), and stable isotope analysis were applied. Fracture zones with various degrees of alteration are locally developed in the study area. The study area is mainly composed of granodiorite and diorite, whose relation is gradational in the field; they could, however, be easily distinguished by their chemical properties. The granodiorite showed a higher $SiO_2$ content and lower MgO and $Fe_2O_3$ contents than the diorite. The variation trends of the major elements of the granodiorite and the diorite plotted on the same line with increasing $SiO_2$ content, suggesting that they were differentiated from the same magma. The spatial distribution of the various elements showed that the diorite region had lower $SiO_2$, $Al_2O_3$, $Na_2O$, and $K_2O$ contents and higher CaO and $Fe_2O_3$ contents than the granodiorite region. Because the differences in the CaO and $Na_2O$ distributions were the most distinct and their trends were reciprocal, the chemical variation of the plagioclase of the granitic rocks was the main parameter controlling the chemical variation of the host rocks in the study area. The fracture-filling minerals identified in the drill core were montmorillonite, zeolite minerals, chlorite, illite, calcite, and pyrite. In particular, pyrite and laumontite, which are known as indicator minerals of hydrothermal alteration, were widely distributed, indicating that the study area was affected by mineralization and/or hydrothermal alteration. Sulfur isotope analysis of the pyrite and oxygen-hydrogen stable isotope analysis of the clay minerals indicated that they originated from the magma. It is therefore considered that the fracture-filling minerals in the study area were affected by hydrothermal solutions as well as by simple water-rock interaction.
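
The differentiation argument is the kind usually read off Harker variation diagrams (major-oxide content plotted against $SiO_2$). Here is a minimal matplotlib sketch of such a diagram with hypothetical whole-rock values, not the paper's data:

```python
import matplotlib.pyplot as plt

# Hypothetical whole-rock analyses (wt%), diorite -> granodiorite.
sio2 = [52.1, 55.3, 58.7, 62.4, 65.8, 68.9]
mgo  = [6.2, 5.1, 4.0, 2.9, 2.0, 1.3]
k2o  = [0.8, 1.2, 1.7, 2.3, 2.9, 3.4]

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharex=True)
for ax, oxide, label in zip(axes, [mgo, k2o], ["MgO", "K2O"]):
    ax.scatter(sio2, oxide)
    ax.set_xlabel("SiO2 (wt%)")
    ax.set_ylabel(f"{label} (wt%)")
fig.suptitle("Harker variation diagrams: a single trend suggests a common parent magma")
fig.tight_layout()
plt.show()
```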


Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive growth of electronic commerce provides customers with many advantageous purchase opportunities. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists to them; thus, the product recommender system of an online shopping store has become one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause disappointment and waste users' time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences precisely. The research data were collected from a real-world online shopping store that deals in products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions, of which 3,167 remained after deletion of null records. We transformed the categorical variables into dummy variables and excluded outliers. The proposed model consists of two steps. The first step predicts the customers who are highly likely to purchase products in the online shopping store. In this step, we use logistic regression, decision trees, and artificial neural networks to predict such customers in each product group, implementing these data mining techniques in SAS E-Miner. We partition the dataset into modeling and validation sets for logistic regression and the decision trees, and into training, test, and validation sets for the artificial neural network model; the validation dataset is the same across all experiments. We then combine the results of the individual predictors using multi-model ensemble techniques such as bagging and bumping. Bagging, short for "Bootstrap Aggregating," combines the outputs of several machine learning techniques to raise the performance and stability of prediction or classification, and is a special form of the averaging method. Bumping, short for "Bootstrap Umbrella of Model Parameters," keeps only the model with the lowest error value. The results show that bumping outperforms bagging and the individual predictors for every product group except "Poster," for which the artificial neural network model performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extracted thirty-one association rules according to the values of the lift, support, and confidence measures, setting the minimum transaction frequency to support an association at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. We also excluded extracted association rules with a lift value below 1. After removing duplicate rules, fifteen association rules remained: eleven contain associations between products within the "Office Supplies" product group, one associates the "Office Supplies" and "Fashion" product groups, and the other three associate the "Office Supplies" and "Home Decoration" product groups. Finally, the proposed product recommender system provides the list of recommendations to the appropriate customers. We tested the usability of the proposed system using a prototype and real-world transaction and profile data. To this end, we constructed the prototype system with ASP, JavaScript, and Microsoft Access. In addition, we surveyed user satisfaction with the recommended product list from the proposed system and with randomly selected product lists. The survey participants were 173 persons who use MSN Messenger, Daum Café, and P2P services. We evaluated user satisfaction on a five-point Likert scale and performed a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product list. The results also suggest that the proposed system may be useful in a real-world online shopping store.
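
The rule-mining step maps directly onto standard apriori tooling. Below is a minimal sketch using the mlxtend library with the thresholds quoted in the abstract (5% support, at most 4 items, 10% confidence, lift above 1); the basket matrix is hypothetical:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot basket matrix; rows are transactions, columns product groups.
baskets = pd.DataFrame({
    "OfficeSupplies": [1, 1, 0, 1, 1, 0, 1, 1],
    "Fashion":        [0, 1, 0, 0, 1, 1, 0, 0],
    "HomeDecoration": [1, 0, 1, 1, 0, 0, 1, 0],
    "Poster":         [0, 0, 1, 0, 1, 0, 0, 1],
}).astype(bool)

# Thresholds from the abstract: 5% support, up to 4 items, 10% confidence.
frequent = apriori(baskets, min_support=0.05, max_len=4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.10)
rules = rules[rules["lift"] > 1.0]          # drop rules with lift <= 1
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```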

Simultaneous Determination of Aminoglycoside Antibiotics in Meat using Liquid Chromatography Tandem Mass Spectrometry (LC-MS/MS를 이용한 육류 중 아미노글리코사이드계 항생제 9종의 동시분석 및 적용성 검증)

  • Cho, Yoon-Jae;Choi, Sun-Ju;Kim, Myeong-Ae;Kim, MeeKyung;Yoon, Su-Jin;Chang, Moon-Ik;Lee, Sang-Mok;Kim, Hee-Jeong;Jeong, Jiyoon;Rhee, Gyu-Seek;Lee, Sang-Jae
    • Journal of Food Hygiene and Safety
    • /
    • v.29 no.2
    • /
    • pp.123-130
    • /
    • 2014
  • A simultaneous determination method was developed for 9 aminoglycoside antibiotics (amikacin, apramycin, dihydrostreptomycin, gentamicin, hygromycin B, kanamycin, neomycin, spectinomycin, and streptomycin) in meat by liquid chromatography tandem mass spectrometry (LC-MS/MS). Each parameter was established by multiple reaction monitoring in positive ion mode. The developed method was validated for specificity, linearity, accuracy, and precision based on the CODEX validation guideline. Linearity, as the correlation coefficient of the calibration curves of the mixed standards, was over 0.98. Recovery of the 9 aminoglycosides ranged over 60.5~114% for beef, 60.1~112% for pork, and 63.8~131% for chicken. The limits of detection (LOD) and limits of quantification (LOQ) were 0.001~0.009 mg/kg and 0.006~0.03 mg/kg, respectively, in livestock products including beef, pork, and chicken. This study also surveyed residual aminoglycoside antibiotics in 193 samples of beef, pork, and chicken collected from 9 cities in Korea. Aminoglycosides were not found in any of the samples.
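
The validation arithmetic behind these figures is simple. Below is a minimal sketch with hypothetical calibration data; note that LOD/LOQ conventions vary by guideline, and the common 3.3σ/slope and 10σ/slope forms are assumed here, not necessarily the paper's:

```python
import numpy as np

# Hypothetical calibration: spiked concentration (mg/kg) vs. LC-MS/MS peak area.
conc = np.array([0.01, 0.02, 0.05, 0.10, 0.20])
area = np.array([980, 2030, 5150, 9900, 20400])

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"linearity r = {r:.4f}")                    # should exceed 0.98

# Recovery: measured / spiked * 100 for a fortified blank sample.
spiked, measured = 0.05, 0.047
print(f"recovery = {measured / spiked * 100:.1f}%")

# LOD/LOQ from the residual SD of the calibration (assumed convention).
resid_sd = np.std(area - (slope * conc + intercept), ddof=2)
print(f"LOD ~ {3.3 * resid_sd / slope:.4f} mg/kg, "
      f"LOQ ~ {10 * resid_sd / slope:.4f} mg/kg")
```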

Derivation of the Synthetic Unit Hydrograph Based on the Watershed Characteristics (유역특성에 의한 합성단위도의 유도에 관한 연구)

  • 서승덕
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.17 no.1
    • /
    • pp.3642-3654
    • /
    • 1975
  • The purpose of this thesis is to derive a unit hydrograph which may be applied to ungaged watershed areas from the relations between directly measurable unitgraph properties, such as peak discharge ($q_p$), time to peak discharge ($T_p$), and lag time ($L_g$), and watershed characteristics, such as the river length from the given station to the upstream limits of the watershed area in km ($L$), the river length from the station to the centroid of the watershed area in km ($L_{ca}$), and the main stream slope in meters per km ($S$). Another procedure, based on routing a time-area diagram through catchment storage and named the Instantaneous Unit Hydrograph (IUH), and a dimensionless unitgraph are also analysed in brief. The basic data (1969 to 1973) used in these studies are 9 recording level gages and rating curves, 41 rain gages and pluviographs, and 40 observed unitgraphs from the 9 sub-watersheds of the Nak Dong River basin. The results of these studies are summarized as follows. 1. The time in hours from the start of rise to the peak rate ($T_p$) generally occurred at the position of $0.3T_b$ (the time base of the hydrograph), with some indication of higher values for larger watersheds; the base flow is comparatively higher than in other small watershed areas. 2. The losses from rainfall were divided into an initial loss and a continuing loss. The initial loss may be defined as that portion of storm rainfall which is intercepted by vegetation, held in depression storage, or infiltrated at a high rate early in the storm; the continuing loss is the loss which continues at a constant rate throughout the duration of the storm after the initial loss has been satisfied, and it approximates the nearly constant rate of infiltration (${\Phi}$-index method). The loss rate from this analysis was estimated at approximately 50 per cent of the rainfall excess during the period of surface runoff. 3. For stream slope it seems appropriate, as is usual, to consider the main stream only, without giving specific consideration to tributaries; it is desirable to develop a single measure of slope representative of the whole stream. The mean slopes of the channel increment were 1 meter per 200 meters at Gazang and 1 meter per 1,400 meters at Jindong. These slopes are considered slightly low in the light of other river studies, so the flood concentration rate might be slightly low in the Nak Dong River basin. 4. It was found that the watershed lag ($L_g$, hrs) could be expressed by $L_g = 0.253(L \cdot L_{ca})^{0.4171}$. The product $L \cdot L_{ca}$ is a measure of the size and shape of the watershed. For the logarithms, the correlation coefficient for $L_g$ was 0.97, which shows that $L_g$ is closely related to the watershed characteristics $L$ and $L_{ca}$. 5. An expression for the basin containing the slope may be expected to take the form $L_g = 0.545\left(L \cdot L_{ca}/\sqrt{S}\right)^{0.346}$. For the logarithms, the correlation coefficient for $L_g$ was again 0.97, which shows that $L_g$ is closely related to the basin characteristics as well; care should be taken in analyses relating to the mean slopes. 6. The peak discharge per unit area of the unitgraph for the standard duration $t_r$, in m³/sec/km², was given by $q_p = 10^{-0.52 - 0.0184 L_g}$, with an indication of lower values for watersheds with higher lag times. For the logarithms, the correlation coefficient for $q_p$ was 0.998, indicating high significance. The peak discharge of the unitgraph for an area can therefore be expected to take the form $Q_p = q_p \cdot A$ (m³/sec). 7. Using the unitgraph parameter $L_g$, the base length of the unitgraph in days was adopted as $T_b = 0.73 + 2.073(L_g/24)$, with a highly significant correlation coefficient of 0.92. The constants of this equation are fixed by the procedure used to separate base flow from direct runoff. 8. The width $W_{75}$ of the unitgraph at a discharge equal to 75 per cent of the peak discharge, in hours, and the width $W_{50}$ at a discharge equal to 50 per cent of the peak discharge, in hours, can be estimated from $W_{75} = 1.61/q_p^{1.05}$ and $W_{50} = 2.5/q_p^{1.05}$, respectively. These provide a supplementary guide for sketching the unitgraph. 9. The above equations define the three factors necessary to construct the unitgraph for duration $t_r$. For a duration $t_R$, the lag is $L_{gR} = L_g + 0.2(t_R - t_r)$, and this modified lag $L_{gR}$ is used in $q_p$ and $T_b$. If $t_r$ happens to be equal or close to $t_R$, we further assume $q_{pR} = q_p$. 10. The triangular hydrograph is a dimensionless unitgraph prepared from the 40 unitgraphs. The equation is $q_p = K \cdot A \cdot Q / T_p$, or $q_p = 0.21\,A \cdot Q / T_p$, where the constant 0.21 is specific to the Nak Dong River basin. 11. The base length of the time-area diagram for the IUH routing is $C = 0.9\left(L \cdot L_{ca}/\sqrt{S}\right)^{1/3}$. The correlation coefficient for $C$ was 0.983, indicating high significance. The base length of the time-area diagram was set equal to the time from the midpoint of rainfall excess to the point of contraflexure. The constant $K$ derived in these studies is $K = 8.32 + 0.0213\,L/\sqrt{S}$, with a correlation coefficient of 0.964. 12. In the light of the results analysed in these studies, the average errors in the peak discharge of the synthetic unitgraph, the triangular unitgraph, and the IUH were estimated as 2.2, 7.7, and 6.4 per cent, respectively, relative to the peak of the observed average unitgraph. Each ordinate of the synthetic unitgraph closely approached the observed one.
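
The regression equations above chain together into a small synthetic-unitgraph calculator. Below is a minimal sketch using only the relations quoted in the abstract (units as stated there: km, hours, m³/sec/km²); the watershed values are hypothetical:

```python
import math

def synthetic_unitgraph(L_km, Lca_km, S_m_per_km, A_km2):
    """Unitgraph factors from the regression equations quoted in the abstract."""
    Lg = 0.253 * (L_km * Lca_km) ** 0.4171                 # watershed lag, hours
    Lg_slope = 0.545 * (L_km * Lca_km / math.sqrt(S_m_per_km)) ** 0.346  # slope form
    qp = 10 ** (-0.52 - 0.0184 * Lg)                       # peak, m3/sec/km2
    return {
        "Lg_hr": Lg,
        "Lg_slope_hr": Lg_slope,
        "qp_m3s_km2": qp,
        "Qp_m3s": qp * A_km2,                              # Qp = qp * A
        "Tb_days": 0.73 + 2.073 * (Lg / 24),               # base length
        "W75_hr": 1.61 / qp ** 1.05,                       # width at 75% of peak
        "W50_hr": 2.5 / qp ** 1.05,                        # width at 50% of peak
    }

# Hypothetical watershed: 40 km stream, 18 km to centroid, slope 5 m/km, 250 km2.
print(synthetic_unitgraph(L_km=40, Lca_km=18, S_m_per_km=5, A_km2=250))
```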


Pipetting Stability and Improvement Test of the Robotic Liquid Handling System Depending on Types of Liquid (용액에 따른 자동분주기의 분주능력 평가와 분주력 향상 실험)

  • Back, Hyangmi;Kim, Youngsan;Yun, Sunhee;Heo, Uisung;Kim, Hosin;Ryu, Hyeonggi;Lee, Guiwon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.2
    • /
    • pp.62-68
    • /
    • 2016
  • Purpose: In a cyclosporine assay using a robotic liquid handling system, we found a deviation of its standard curve and low reproducibility of patients' results. What distinguishes this assay is that methanol is mixed with the samples and the extracts are used for the test; we therefore assumed that the abnormal results came from the use of methanol and conducted this study. The manual of the robotic liquid handling system states that several setting parameters can be chosen depending on the viscosity of the liquids being used, the size of the sampling tips, and the motor speeds, but it gives no exact guidance. This study was undertaken to confirm pipetting ability depending on the type of liquid and to investigate the proper setting parameters for optimum dispensing. Materials and Methods: Four types of liquid (water, serum, methanol, PEG 6000 (25%)) and TSH $^{125}I$ tracer (515 kBq) were used to confirm pipetting ability, and 29 specimens for the cyclosporine test were used to compare results. Eight plastic tubes were prepared for each liquid; with a multi-pipette, $400{\mu}l$ of each liquid was dispensed into the 8 tubes, and $100{\mu}l$ of TSH $^{125}I$ tracer was dispensed into all of the tubes. From the prepared samples, $100{\mu}l$ of liquid was dispensed using the robotic liquid handling system and counted, and the CV (%) was calculated for each type of liquid. Then, by adjusting several setting parameters (air gap, dispense time, delay time), the change in CV (%) was calculated to find the optimum settings. The 29 specimens were tested with 3 methods: (A) the manual method, (B) the robotic liquid handling system with the existing parameters, and (C) the robotic liquid handling system with the adjusted parameters. Pipetting ability depending on the type of liquid was assessed with CV (%). Taking (A) as the reference, the patients' test results of (B) and (C) were compared with those of (A) and assessed with %RE (% relative error) and %Diff (% difference). Results: The CV (%) of the CPM depending on liquid type was 0.88 for water, 0.95 for serum, 10.22 for methanol, and 0.68 for PEG. As expected, dispensing methanol with the liquid handling system was the problem, while the others were good. Methanol dispensing was then examined by adjusting the setting parameters. When the transport air gap was adjusted from 0 to 2 and to 5, the CV (%) was 20.16 and 12.54; when the system air gap was adjusted from 0 to 2 and to 5, the CV (%) was 8.94 and 1.36. With system air gap 2 and transport air gap 2 the CV (%) was 12.96, and with system air gap 5 and transport air gap 5 it was 1.33. When the dispense speed was adjusted from 300 to 100, the CV (%) was 13.32, and when the dispense delay was adjusted from 200 to 100, it was 13.55. Compared to (A), the results of (B) were increased by 99.44% and the %RE was 93.59%. Compared to (A), the results of (C, with the system air gap adjusted from 0 to 5) were increased by 6.75% and the %RE was 5.10%. Conclusion: Adjusting the speed and delay time of aspiration and dispensing was of no benefit, but changing the system air gap was effective. By adjusting several parameters, proper values were found, and they affected the practical results of the experiment. Active efforts are needed to optimize the system through testing, and when dispensing new types of liquid, a proper test is required to check that the liquid is suitable for the equipment.
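
The acceptance statistics used here are straightforward. Below is a minimal sketch of the CV (%) and %RE arithmetic with hypothetical CPM replicates (the counts do not come from the paper):

```python
import numpy as np

def cv_percent(counts):
    """Coefficient of variation of replicate counts, in per cent."""
    counts = np.asarray(counts, dtype=float)
    return counts.std(ddof=1) / counts.mean() * 100

def relative_error(test, reference):
    """%RE of a test result against the manual-method reference."""
    return (test - reference) / reference * 100

# Hypothetical CPM replicates for methanol before and after air-gap tuning.
before = [9800, 12100, 7600, 11900, 8400, 10500, 13000, 9100]
after  = [10190, 10240, 10210, 10180, 10230, 10200, 10220, 10190]
print(f"CV before = {cv_percent(before):.2f}%, after = {cv_percent(after):.2f}%")
print(f"%RE = {relative_error(test=1.05, reference=1.00):.2f}%")
```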


Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts have to be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine), is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we use the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution: by applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results, and the mutation operator in particular prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been popularly applied to searching for optimal parameters or feature subsets of AI techniques including MSVM, which is a further reason we adopt it as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e. the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). 80 percent of the data for each class was used for training and the remaining 20 percent for validation, and to overcome the small sample size we applied five-fold cross validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models including MDA, MLOGIT, CBR, ANN, and MSVM. In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software package, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were run using various statistical and AI packages such as SPSS for Windows, NeuroShell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activity), were found to be the most important factors in predicting corporate credit ratings. The values of the finally selected kernel parameters, however, were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
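
GAMSVM couples a GA chromosome of kernel parameters plus a feature-inclusion bit mask with a multiclass SVM. Below is a minimal sketch of that idea using scikit-learn and a deliberately tiny hand-rolled GA on synthetic data; it is not the paper's LIBSVM/Evolver 5.5 implementation, and the population sizes, mutation rates, and search ranges are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 14 financial ratios and 4 rating classes.
X, y = make_classification(n_samples=400, n_features=14, n_informative=8,
                           n_classes=4, random_state=0)

def fitness(chromosome):
    """Chromosome = [log2 C, log2 gamma, 14 feature bits]; fitness = CV accuracy."""
    c, gamma = 2.0 ** chromosome[0], 2.0 ** chromosome[1]
    mask = chromosome[2:] > 0.5
    if not mask.any():
        return 0.0
    clf = SVC(C=c, gamma=gamma, kernel="rbf")        # one-against-one internally
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# Tiny GA: truncation selection, uniform crossover, Gaussian/bit-flip mutation.
pop = np.column_stack([rng.uniform(-5, 15, 20), rng.uniform(-15, 3, 20),
                       rng.integers(0, 2, (20, 14)).astype(float)])
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(16) < 0.5, a, b)  # uniform crossover
        child[:2] += rng.normal(0, 0.5, 2)            # mutate kernel params
        flip = rng.random(14) < 0.05                  # mutate feature bits
        child[2:] = np.where(flip, 1 - np.round(child[2:]), child[2:])
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best CV accuracy:", fitness(best))
```

The bit-flip mutation on the feature mask is what keeps the search from collapsing onto a local optimum, mirroring the role the abstract assigns to GA's mutation operator.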