• Title/Summary/Keyword: Predictive Accuracy

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.139-153 / 2017
  • Corporate bankruptcy causes great losses not only to stakeholders but also to many related sectors of society, and bankruptcies have increased through successive economic crises, making bankruptcy prediction models ever more important. Corporate bankruptcy prediction has therefore long been a major research topic in business management, and much related work is under way in industry as well. Previous studies applied various methodologies to improve prediction accuracy and mitigate overfitting, beginning with statistical methods such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM). More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and the Artificial Neural Network (ANN), as well as fuzzy theory and genetic algorithms; this shift has produced many new bankruptcy models with improved performance. In general, however, a company's financial and accounting information changes over time, as does the market situation, so it is difficult to predict bankruptcy from information at a single point in time. Despite this limitation of traditional static research, dynamic models have received little study. Ignoring the time effect biases the results, so a static model may be unsuitable for predicting bankruptcy, and a dynamic model has the potential to improve prediction. In this paper, we propose a model based on the Recurrent Neural Network (RNN), a deep learning methodology known to perform well on time series data. For estimating the bankruptcy prediction model and comparing forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016.
To avoid the mistake of predicting bankruptcy with financial information that already reflects the deterioration of the company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as delisting due to sluggish earnings, confirmed through KIND, a corporate stock information website. Variables were selected from previous papers: the first set consists of the Z-score variables, which have become traditional in bankruptcy prediction, and the second is a set of dynamic variables. With the first variable set we selected 240 normal and 226 bankrupt companies; with the second, 229 normal and 226 bankrupt companies. We built a model that reflects dynamic changes in time-series financial data and, by comparing it with existing bankruptcy prediction models, found that the suggested model can improve the accuracy of bankruptcy prediction. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN outperformed the comparison models: its accuracy was high on both variable sets, as was its Area Under the Curve (AUC), and in the hit-ratio table the RNN identified a larger share of distressed companies as bankrupt than the other models did. A limitation of this paper is that an overfitting problem occurs during RNN training.
We expect that the overfitting problem can be addressed by selecting more training data and appropriate variables. We hope this research contributes to the development of bankruptcy prediction by proposing a new dynamic model.
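The abstract does not give the network's exact architecture, but the recurrence it relies on can be sketched in plain numpy, with all dimensions and weights hypothetical (here, three fiscal years of five Z-score-style ratios feeding one hidden state, read out as a bankruptcy probability):

```python
import numpy as np

def rnn_forward(X, Wx, Wh, b, Wo, bo):
    """Run a vanilla RNN over a sequence of yearly financial ratios
    and return a bankruptcy probability from the last hidden state.
    X: (timesteps, n_features) -- one row per fiscal year."""
    h = np.zeros(Wh.shape[0])
    for x_t in X:                          # unroll over the years
        h = np.tanh(x_t @ Wx + h @ Wh + b)
    logit = h @ Wo + bo                    # read out from final state
    return 1.0 / (1.0 + np.exp(-logit))    # sigmoid -> P(bankrupt)

# toy dimensions: 3 years of 5 ratios, 8 hidden units (hypothetical)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))
p = rnn_forward(X,
                rng.normal(size=(5, 8)) * 0.1,
                rng.normal(size=(8, 8)) * 0.1,
                np.zeros(8),
                rng.normal(size=8) * 0.1,
                0.0)
```

A trained model would learn the weight matrices from the time-series financial data; the forward pass above only illustrates how each year's ratios update the shared hidden state.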

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.39-57 / 2012
  • Recently, with the introduction of high-tech equipment, interactive exhibits that can double the exhibition effect through interaction with the audience have attracted much attention, and such exhibitions also make it possible to measure a variety of audience reactions. Among these reactions, this research uses changes in the facial features that can be collected in an interactive exhibition space. We develop an artificial neural network-based prediction model that predicts the audience's response by measuring the change in facial features when the audience is stimulated from a non-excited state, representing the audience's emotional state with a Valence-Arousal model. The research follows an overall framework of six steps. First, data for modeling were collected from people who participated in the 2012 Seoul DMC Culture Open, and these data were used in the experiments. Second, 64 facial features were extracted from the collected data and the feature values were compensated. Third, the independent and dependent variables of the artificial neural network model were generated. Fourth, the independent variables that affect the dependent variable were extracted using statistical techniques. Fifth, the artificial neural network model was built and trained using the training and test sets. Sixth, the prediction performance of the model was validated on the validation data set. The proposed model was compared with a statistical predictive model and, although the data contained much noise, showed better results than the multiple regression analysis model.
If this prediction model of audience reaction were used in a real exhibition, countermeasures and services appropriate to the audience's reaction to the exhibits could be provided. Specifically, if the audience's arousal toward an exhibit is low, an action to increase arousal could be taken, for instance recommending other preferred content to the audience or using light or sound to draw attention to the exhibit. In other words, future exhibitions could be planned to satisfy various audience preferences, fostering a personalized environment in which visitors concentrate on the exhibits. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors of real exhibitions, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, which lowers accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and work to increase the prediction accuracy of the model will continue. Second, changes of facial expression alone are probably not enough to extract audience emotions; combining facial expression with other responses, such as sound or audience behavior, would give better results.
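The paper's network architecture and selected features are not specified in the abstract; a minimal forward-pass sketch of the kind of mapping described, from facial-feature changes to a (valence, arousal) pair, might look like the following, with all weights and dimensions hypothetical:

```python
import numpy as np

def predict_valence_arousal(baseline, stimulated, W1, b1, W2, b2):
    """Map the change in facial-feature values (stimulated state minus
    non-excited baseline) to a (valence, arousal) pair through one
    hidden layer, both outputs squashed to [-1, 1]."""
    delta = stimulated - baseline          # feature change under stimulus
    hidden = np.tanh(delta @ W1 + b1)
    return np.tanh(hidden @ W2 + b2)       # [valence, arousal]

rng = np.random.default_rng(1)
n_feat, n_hidden = 64, 10                  # 64 features as in the paper
va = predict_valence_arousal(
    rng.normal(size=n_feat), rng.normal(size=n_feat),
    rng.normal(size=(n_feat, n_hidden)) * 0.1, np.zeros(n_hidden),
    rng.normal(size=(n_hidden, 2)) * 0.1, np.zeros(2))
```

In the study the weights would come from training on the collected exhibition data; this sketch only shows the shape of the input-to-emotion mapping.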

Mathematical Transformation Influencing Accuracy of Near Infrared Spectroscopy (NIRS) Calibrations for the Prediction of Chemical Composition and Fermentation Parameters in Corn Silage (수 처리 방법이 근적외선분광법을 이용한 옥수수 사일리지의 화학적 조성분 및 발효품질의 예측 정확성에 미치는 영향)

  • Park, Hyung-Soo;Kim, Ji-Hye;Choi, Ki-Choon;Kim, Hyeon-Seop
    • Journal of The Korean Society of Grassland and Forage Science / v.36 no.1 / pp.50-57 / 2016
  • This study was conducted to determine the effect of mathematical transformation on near infrared spectroscopy (NIRS) calibrations for the prediction of chemical composition and fermentation parameters in corn silage. Corn silage samples (n=407) were collected from cattle farms and feed companies in Korea between 2014 and 2015. Samples were scanned intact and fresh at 1 nm intervals over the wavelength range of 680~2,500 nm, and the optical data were recorded as log 1/Reflectance (log 1/R). The spectral data were regressed against a range of chemical parameters using partial least squares (PLS) multivariate analysis in conjunction with several spectral math treatments to reduce the effect of extraneous noise. The optimum calibrations were selected based on the highest coefficient of determination in cross validation ($R^2_{cv}$) and the lowest standard error of cross validation (SECV). The results revealed that the NIRS method can predict chemical constituents accurately ($R^2_{cv}$ ranging from 0.77 to 0.91). The best mathematical treatment for moisture and crude protein (CP) was a first-order derivative (1, 16, 16 and 1, 4, 4, respectively), whereas the best treatment for neutral detergent fiber (NDF) and acid detergent fiber (ADF) was 2, 16, 16. The calibration models for fermentation parameters had lower predictive accuracy than those for chemical constituents; however, pH and lactic acid were predicted with considerable accuracy ($R^2_{cv}$ 0.74 to 0.77), with best mathematical treatments of 1, 8, 8 and 2, 16, 16, respectively. These results demonstrate that the NIRS method can predict the chemical composition and fermentation quality of fresh corn silage as a routine analysis for feeding-value evaluation and for giving advice to farmers.
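Math-treatment codes such as "1, 4, 4" are conventionally read as (derivative order, gap, smoothing segment); the exact convention used by the authors' software is an assumption here. A gap-segment derivative of that kind can be sketched in numpy as:

```python
import numpy as np

def gap_derivative(spectrum, order=1, gap=4, segment=4):
    """Apply a gap-segment derivative to one log(1/R) spectrum:
    boxcar-smooth over `segment` points, then take `order` successive
    differences across `gap` points. A treatment code like '1, 4, 4'
    is read here as (derivative order, gap, smoothing segment)."""
    s = np.convolve(spectrum, np.ones(segment) / segment, mode="valid")
    for _ in range(order):
        s = s[gap:] - s[:-gap]            # difference across the gap
    return s

wavelengths = np.linspace(680, 2500, 1821)     # 1 nm intervals
spec = np.sin(wavelengths / 200.0)             # stand-in spectrum
d1 = gap_derivative(spec, order=1, gap=4, segment=4)
```

The transformed spectra, not the raw log(1/R) values, would then feed the PLS regression against the laboratory reference values.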

The prediction of the stock price movement after IPO using machine learning and text analysis based on TF-IDF (증권신고서의 TF-IDF 텍스트 분석과 기계학습을 이용한 공모주의 상장 이후 주가 등락 예측)

  • Yang, Suyeon;Lee, Chaerok;Won, Jonggwan;Hong, Taeho
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.237-262 / 2022
  • There has been a growing interest in IPOs (Initial Public Offerings) due to the profitable returns that IPO stocks can offer to investors. However, IPOs can be speculative investments that may involve substantial risk as well because shares tend to be volatile, and the supply of IPO shares is often highly limited. Therefore, it is crucially important that IPO investors are well informed of the issuing firms and the market before deciding whether to invest or not. Unlike institutional investors, individual investors are at a disadvantage since there are few opportunities for individuals to obtain information on the IPOs. In this regard, the purpose of this study is to provide individual investors with the information they may consider when making an IPO investment decision. This study presents a model that uses machine learning and text analysis to predict whether an IPO stock price would move up or down after the first 5 trading days. Our sample includes 691 Korean IPOs from June 2009 to December 2020. The input variables for the prediction are three tone variables created from IPO prospectuses and quantitative variables that are either firm-specific, issue-specific, or market-specific. The three prospectus tone variables indicate the percentage of positive, neutral, and negative sentences in a prospectus, respectively. We considered only the sentences in the Risk Factors section of a prospectus for the tone analysis in this study. All sentences were classified into 'positive', 'neutral', and 'negative' via text analysis using TF-IDF (Term Frequency - Inverse Document Frequency). Measuring the tone of each sentence was conducted by machine learning instead of a lexicon-based approach due to the lack of sentiment dictionaries suitable for Korean text analysis in the context of finance. 
For this reason, the training set was created by randomly selecting 10% of the sentences from each prospectus, and the sentences in the training set were classified by reading each one manually. Based on the training set, a Support Vector Machine model was then used to predict the tone of the sentences in the test set, and the machine learning model finally calculated the percentages of positive, neutral, and negative sentences in each prospectus. To predict the price movement of an IPO stock, four machine learning techniques were applied: Logistic Regression, Random Forest, Support Vector Machine, and Artificial Neural Network. According to the results, models that use the quantitative variables and the prospectus tone variables together show higher accuracy than models that use only the quantitative variables. More specifically, the prediction accuracy improved by 1.45 percentage points for the Random Forest model, 4.34 points for the Artificial Neural Network model, and 5.07 points for the Support Vector Machine model. Among these techniques, the Artificial Neural Network model using both the quantitative variables and the prospectus tone variables achieved the highest prediction accuracy, 61.59%. The results indicate that the tone of a prospectus is a significant factor in predicting the price movement of an IPO stock. In addition, the McNemar test was used to check for statistically significant differences between the models: comparing the model using only quantitative variables with the model using both the quantitative and prospectus tone variables confirmed that predictive performance improved significantly at the 1% significance level.
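TF-IDF weighting itself is standard; a minimal pure-Python sketch of the term weighting applied to tokenized sentences (toy tokens, not the authors' pipeline) is:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for tokenized sentences: term frequency
    within a sentence times the log inverse document frequency across
    the sentence collection."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return out

# toy tokenized sentences standing in for Risk Factors sentences
sents = [["risk", "of", "loss"],
         ["growth", "of", "revenue"],
         ["risk", "and", "growth"]]
w = tfidf(sents)
```

These weight vectors are the kind of features an SVM sentence classifier would consume when labeling each sentence positive, neutral, or negative.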

Classification Algorithm-based Prediction Performance of Order Imbalance Information on Short-Term Stock Price (분류 알고리즘 기반 주문 불균형 정보의 단기 주가 예측 성과)

  • Kim, S.W.
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.157-177 / 2022
  • Investors trade stocks while keeping a close watch, in real time, on the order information submitted by domestic and foreign investors through the Limit Order Book (the so-called "current price" screen provided by securities firms). Is the order information released in the Limit Order Book useful for stock price prediction? This study analyzes whether order imbalances, which appear when investors' buy and sell orders concentrate on one side during intra-day trading, are significant predictors of future stock price movement. Using classification algorithms, this study improved the prediction accuracy of order imbalance information for the short-term price trend, that is, whether the day's closing price moves up or down. Day trading strategies using the price trends predicted by the classification algorithms are proposed, and their trading performances are analyzed empirically. Five-minute KOSPI200 Index Futures data were analyzed for 4,564 days from January 19, 2004 to June 30, 2022. The results of the empirical analysis are as follows. First, order imbalance information has a significant impact on current stock prices. Second, the order imbalance information observed in the early morning has significant forecasting power for the price trend from the early morning to the market close. Third, the Support Vector Machines algorithm showed the highest prediction accuracy for the day's closing price trend using the order imbalance information, at 54.1%. Fourth, order imbalance information measured early in the day had higher prediction accuracy than that measured later in the day. Fifth, the day trading strategies using the price trends predicted by the classification algorithms outperformed the benchmark trading strategy.
Sixth, except for the K-Nearest Neighbor algorithm, all strategies using the classification algorithms showed higher average total profits than the benchmark strategy. Seventh, the trading strategies using the predictions of the Logistic Regression, Random Forest, Support Vector Machines, and XGBoost algorithms also beat the benchmark strategy on the Sharpe Ratio, which evaluates both profitability and risk. This study differs academically from existing studies in that it documents the economic value of the total buy and sell order volume information in the Limit Order Book. The empirical results are also valuable to market participants from a trading perspective. Future studies should improve the trading strategy's performance with more accurate price predictions by extending to deep learning models, which have recently been actively studied for stock price prediction.
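The abstract does not state the exact imbalance measure used; a common normalized definition, with a naive directional signal built on it, can be sketched as follows (function names and the zero threshold are hypothetical):

```python
def order_imbalance(buy_volume, sell_volume):
    """Normalized order imbalance from limit-order-book volume:
    +1 means all resting interest is on the buy side, -1 all sell."""
    total = buy_volume + sell_volume
    if total == 0:
        return 0.0
    return (buy_volume - sell_volume) / total

def predict_close_direction(oi, threshold=0.0):
    """Naive directional signal: net buy-side pressure -> 'up'."""
    return "up" if oi > threshold else "down"

# toy morning volumes: buy pressure dominates
oi = order_imbalance(1200, 800)
signal = predict_close_direction(oi)
```

In the study this raw signal would instead be one input among others to the trained classifiers, whose predicted up/down labels drive the day trading strategies.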

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.143-163 / 2016
  • The demographics of Internet users are the most basic and important source for target marketing and personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are very expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites: as the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allows us to see what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; and text information from the web pages visited. The demographic attributes predicted also vary across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used for prediction model building. However, this research stream has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model.
The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics based on previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and 64 clickstream attributes from previous research are applied to predict them. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction on the clickstream variables to address the curse of dimensionality and the overfitting problem, using three approaches based on decision trees, PCA, and cluster analysis. In the third step we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models by accuracy and selects the best one. For the experiments, we used clickstream data covering 5 demographic attributes and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best with decision tree-based dimension reduction and a neural network, whereas gender and marital status are predicted most accurately by applying SVM without dimension reduction.
We conclude that the online behaviors of Internet users, captured from clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
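Of the three dimension-reduction approaches in the second step, PCA is the most mechanical to illustrate; a numpy sketch of projecting user profiles onto the top principal components (toy data, hypothetical dimensions) is:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (one row per user, one column per
    clickstream attribute) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                  # center each attribute
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # (n_users, k) scores

rng = np.random.default_rng(2)
profiles = rng.normal(size=(100, 64))        # toy users, 64 attributes
scores = pca_reduce(profiles, k=10)
```

The reduced scores, rather than the raw 64 attributes, would then feed the SVM, neural network, or logistic regression models in the third step.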

Effect of the Changing the Lower Limits of Normal and the Interpretative Strategies for Lung Function Tests (폐기능검사 해석에 정상하한치 변화와 새 해석흐름도가 미치는 영향)

  • Ra, Seung Won;Oh, Ji Seon;Hong, Sang-Bum;Shim, Tae Sun;Lim, Chae Man;Koh, Youn Suck;Lee, Sang Do;Kim, Woo Sung;Kim, Dong-Soon;Kim, Won Dong;Oh, Yeon-Mok
    • Tuberculosis and Respiratory Diseases / v.61 no.2 / pp.129-136 / 2006
  • Background: To interpret lung function tests, it is necessary to determine the lower limits of normal (LLN) and to reach a consensus on the interpretative algorithm. A fixed value of 0.7 for the $FEV_1$/FVC ratio was suggested by the COPD International Guideline (GOLD) for defining obstructive disease, and a consensus on a new interpretative algorithm was achieved by the ATS/ERS in 2005. We evaluated the accuracy of the fixed 0.7 cutoff for the $FEV_1$/FVC in diagnosing obstructive diseases, and we determined the effect of the new algorithm on diagnosing ventilatory defects. Methods: We obtained the age, gender, height, weight, $FEV_1$, FVC, and $FEV_1$/FVC of 7,362 subjects who underwent spirometry in 2005 at the Asan Medical Center, Korea. For diagnosing obstructive diseases, the accuracy of the fixed 0.7 cutoff was evaluated against the $5^{th}$ percentile LLN as the reference. By applying the new algorithm, we determined how many more subjects should have lung volumes testing performed. Evaluation of 1,611 patients who had lung volumes testing as well as spirometry during the period showed how many more subjects were diagnosed with obstructive diseases under the new algorithm. Results: 1) The sensitivity of the fixed 0.7 cutoff for diagnosing obstructive diseases increased with age, but its specificity decreased with age; the positive predictive value decreased and the negative predictive value increased. 2) Under the new algorithm, 34.5% (2,540/7,362) more subjects should have lung volumes testing performed. 3) Under the new algorithm, 13% (205/1,611) more subjects were diagnosed with obstructive diseases; these subjects corresponded to 30% (205/681) of those diagnosed with restrictive diseases by the old interpretative algorithm. Conclusion: The sensitivity and specificity of the fixed 0.7 cutoff for the $FEV_1$/FVC in diagnosing obstructive diseases change with age.
Applying the new interpretative algorithm means more subjects should have lung volumes testing performed, and it gives a higher probability of being diagnosed with obstructive disease.
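The comparison of the fixed 0.70 cutoff against the 5th-percentile LLN reference can be sketched as follows; the toy ratios and LLN values below are hypothetical, chosen only to produce both kinds of disagreement between the two criteria:

```python
def obstruction_fixed(fev1_fvc, cutoff=0.70):
    """GOLD-style fixed criterion: FEV1/FVC below 0.70."""
    return fev1_fvc < cutoff

def compare_with_lln(ratios, lln_values, cutoff=0.70):
    """Sensitivity and specificity of the fixed cutoff, taking the
    age-adjusted 5th-percentile LLN as the reference standard."""
    tp = fp = fn = tn = 0
    for r, lln in zip(ratios, lln_values):
        truth = r < lln                       # reference diagnosis
        test = obstruction_fixed(r, cutoff)
        if truth and test:
            tp += 1
        elif truth:
            fn += 1
        elif test:
            fp += 1
        else:
            tn += 1
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

# toy subjects: older subjects have a lower LLN, so 0.70 over-calls there
sens, spec = compare_with_lln([0.65, 0.68, 0.72, 0.80],
                              [0.70, 0.66, 0.75, 0.70])
```

Because the true LLN falls with age while the cutoff stays at 0.70, this construction reproduces the study's pattern of age-dependent sensitivity and specificity.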

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The history of the market is short, however, and bad debt began to increase again after the 2009 global financial crisis due to the real economic recession. NPLs have become a major investment target in recent years, as domestic capital-market funds began entering the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it remains scarce because the history of capital-market investment in the domestic NPL market is short. In addition, declining profitability and price fluctuations driven by the real estate cycle call for decision-making based on more scientific and systematic analysis. In this study, we propose a prediction model that determines whether the benchmark yield is achieved, using NPL market data, in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising a total of 2,291 items. As independent variables, out of 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected, using one-to-one t-tests, stepwise logistic regression, and a decision tree. Seven independent variables were selected: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached.
This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's usefulness. Moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is enough to make a decision. For the dependent variable, we constructed and compared predictive models with the threshold adjusted around 12%, the standard rate of return used in the industry, to ascertain whether it is a meaningful reference value. The average hit ratio of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built prediction models with five methodologies (discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic-algorithm linear model) and compared them. Ten sets of training and testing data were extracted using 10-fold validation; the models were built on these data, and the hit ratios of the sets were averaged and compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic-algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that using the 7 independent variables with an artificial neural network prediction model is effective for the NPL market.
The proposed model predicts in advance whether a new item will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
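The construction of the binary dependent variable and the hit ratio used to compare the models can be sketched as follows (toy returns and predictions, not the study's data):

```python
def label_reaches_benchmark(returns, benchmark=0.12):
    """Binarize each item's realized return: 1 if it reaches the
    12% benchmark yield used in the industry, else 0."""
    return [1 if r >= benchmark else 0 for r in returns]

def hit_ratio(predicted, actual):
    """Share of items whose predicted label matches the outcome --
    the figure averaged over the 10 folds in the study."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# toy items: two of four reach the 12% benchmark
actual = label_reaches_benchmark([0.15, 0.10, 0.13, 0.05])
ratio = hit_ratio([1, 0, 0, 0], actual)
```

Each of the five methodologies would produce its own predicted labels, and the model with the highest averaged hit ratio (the artificial neural network, at 67.40%) wins the comparison.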

Process Design of Carbon Dioxide Storage in the Marine Geological Structure: I. Comparative Analysis of Thermodynamic Equations of State using Numerical Calculation (이산화탄소 해양지중저장 처리를 위한 공정 설계: I. 수치계산을 통한 열역학 상태방정식의 비교 분석)

  • Huh, Cheol;Kang, Seong-Gil
    • Journal of the Korean Society for Marine Environment & Energy / v.11 no.4 / pp.181-190 / 2008
  • In response to climate change and the Kyoto Protocol, and to reduce greenhouse gas emissions, marine geological storage of $CO_2$ is regarded as one of the most promising options. Marine geological storage of $CO_2$ consists of capturing $CO_2$ from major point sources (e.g., power plants), transporting it to storage sites, and storing it in marine geological structures such as deep-sea saline aquifers. To design a reliable $CO_2$ marine geological storage system, it is necessary to perform numerical process simulation using a thermodynamic equation of state. The purpose of this paper is to compare and analyze the relevant equations of state, namely the ideal gas, BWRS, PR, PRBM, and SRK equations of state. To evaluate the predictive accuracy of each equation of state, we compared numerical calculation results with reference experimental data. The ideal gas and SRK equations of state failed to predict the density behavior above $29.85^{\circ}C$ and 60 bar; in particular, they showed errors of up to 100% in the supercritical state. The BWRS equation of state failed to predict the density behavior between 60 and 80 bar near the critical temperature. On the other hand, the PR and PRBM equations of state showed good predictive capability in the supercritical state. Since the thermodynamic conditions of $CO_2$ reservoir sites correspond to the supercritical state (above $31.1^{\circ}C$ and 73.9 bar), we conclude that the PR and PRBM equations of state are recommended for designing the $CO_2$ marine geological storage process.
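A numerical comparison of this kind amounts to solving each equation of state for density at a given temperature and pressure. A sketch for the Peng-Robinson (PR) case, using standard published $CO_2$ critical constants (an assumption, not the authors' exact inputs), is:

```python
import numpy as np

R = 8.314462618                  # J/(mol K)
TC, PC, OMEGA, M = 304.13, 7.3773e6, 0.2239, 0.0440095  # CO2 constants

def pr_density(T, P):
    """CO2 density (kg/m^3) from the Peng-Robinson equation of state:
    solve the cubic in the compressibility factor Z and keep the
    smallest positive real root, i.e. the dense branch."""
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA ** 2
    alpha = (1 + kappa * (1 - np.sqrt(T / TC))) ** 2
    a = 0.45724 * R ** 2 * TC ** 2 / PC * alpha
    b = 0.07780 * R * TC / PC
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1 - B), A - 3 * B ** 2 - 2 * B,
              -(A * B - B ** 2 - B ** 3)]
    roots = np.roots(coeffs)
    z = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return P * M / (z * R * T)

rho = pr_density(313.15, 1.0e7)   # 40 degC, 100 bar: supercritical CO2
```

Repeating such a calculation over a grid of temperatures and pressures, and differencing against reference experimental densities, reproduces the kind of accuracy comparison the paper reports.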

An Alternative Method for a Rapid Urease Test Using Back-table Gastric Mucosal Biopsies from Gastrectomy Specimen for Making the Diagnosis of Helicobacter pylori Infection in Patients with Gastric Cancer (위암 환자의 헬리코박터 파이로리 감염 진단에 있어서 위절제술 직후 생검된 위점막 조직을 이용한 신속 요소 분해 효소 검사법 도입의 의의)

  • Kim, Sin-Ill;Jin, Sung-Ho;Lee, Jae-Hwan;Min, Jae-Seok;Bang, Ho-Yoon;Lee, Jong-Inn
    • Journal of Gastric Cancer / v.9 no.4 / pp.172-176 / 2009
  • Purpose: The rapid urease test is a rapid and reliable method for diagnosing Helicobacter pylori infection. However, it requires gastric mucosal biopsies during endoscopy, and the test is not covered by national health insurance for patients with gastric cancer. We therefore introduced an alternative method for the rapid urease test using back-table gastric mucosal biopsies from the gastrectomy specimen. Materials and Methods: Ninety gastric cancer patients underwent an anti-H. pylori IgG ELISA test and gastrectomy. Just after gastrectomy, two gastric mucosal biopsies from the prepyloric antrum and lower body of the gastrectomy specimen were taken at the back table in the operating room and fixed immediately with the rapid urease test kit, and the color change was monitored for up to 24 hours. In this study, H. pylori infection was defined as positive when either the serology or the rapid urease test showed positive results. Results: The positive rates of the rapid urease test and serology were 91.1% and 77.8%, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value of the rapid urease test and serology were 94.3 and 80.5%, 100 and 100%, 100 and 100%, and 37.5 and 15%, respectively. The accuracy of the rapid urease test was higher than that of serology (94.4 vs. 81.1%). The rapid urease test detected H. pylori infection at a higher rate than serology (McNemar's test, P=0.019). Conclusion: The results of the rapid urease test using back-table gastric mucosal biopsies from a gastrectomy specimen are comparable to the reference data for the conventional rapid urease test using endoscopic gastric mucosal biopsies; it can therefore serve as an alternative diagnostic method for H. pylori infection.
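The reported sensitivity, specificity, PPV, and NPV all derive from one 2x2 table of the test against the reference standard; the computation can be sketched as follows (the counts below are hypothetical, not the study's):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 table of a
    test against the reference standard (all as fractions)."""
    return {
        "sensitivity": tp / (tp + fn),       # true positives found
        "specificity": tn / (tn + fp),       # true negatives found
        "ppv": tp / (tp + fp),               # positive predictive value
        "npv": tn / (tn + fn),               # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# hypothetical counts for illustration only
m = diagnostic_metrics(tp=66, fp=0, fn=4, tn=6)
```

With zero false positives, as the study reports for both the rapid urease test and serology, specificity and PPV are both 100% while the NPV is dragged down by the false negatives, matching the pattern in the abstract.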
