• Title/Summary/Keyword: predictive information


1-month Prediction on Rice Harvest Date in South Korea Based on Dynamically Downscaled Temperature (역학적 규모축소 기온을 이용한 남한지역 벼 수확일 1개월 예측)

  • Jina Hur;Eun-Soon Im;Subin Ha;Yong-Seok Kim;Eung-Sup Kim;Joonlee Lee;Sera Jo;Kyo-Moon Shim;Min-Gu Kang
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.267-275
    • /
    • 2023
  • This study predicted the rice harvest date in South Korea using 11 years (2012-2022) of hindcasts based on dynamically downscaled 2 m air temperature at the subseasonal (1-month lead) timescale. To obtain high-resolution (5 km) meteorological information over South Korea, global predictions from the NOAA Climate Forecast System (CFSv2) are dynamically downscaled using the Weather Research and Forecasting (WRF) double-nested modeling system. The harvest date is estimated from growing degree days (GDD): daily temperatures are accumulated from the seeding date (1 Jan.) until the reference sum of 1400℃ is reached, and the harvest date is set 55 days later. In terms of the maximum (minimum) temperature, the hindcasts tend to have a cold bias of about 1.2℃ (0.1℃) over the rice growth period (May to October) compared with observations. The harvest date derived from the hindcasts (DOY 289) agrees well with that derived from observations (DOY 280), despite a margin of 9 days. The study shows the possibility of obtaining detailed predictive information on the rice harvest date over South Korea based on the dynamical downscaling method.
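
The GDD rule described above is straightforward to sketch. In the sketch below, the 1400℃ threshold and the 55-day offset come from the abstract, while the function name, the simple summation of daily means (with no base-temperature subtraction), and the input series are illustrative assumptions:

```python
def estimate_harvest_doy(daily_mean_temp, threshold_c=1400.0, offset_days=55):
    """Estimate the rice harvest day-of-year from a daily mean temperature
    series whose index 0 corresponds to the seeding date (1 Jan.).
    Accumulates daily temperature until `threshold_c` is reached, then
    adds `offset_days`, following the abstract's GDD rule. Returns None
    if the threshold is never reached within the series."""
    cumulative = 0.0
    for day_index, temp in enumerate(daily_mean_temp):
        cumulative += temp  # plain accumulation, as the abstract describes
        if cumulative >= threshold_c:
            return (day_index + 1) + offset_days  # 1-based day of year + 55
    return None
```

For instance, a series averaging about 7.7℃ per day would cross 1400℃ near DOY 182 and yield a harvest estimate near DOY 237.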

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology is being utilized in a variety of fields and is emerging as a key element in the creation of new business models and the provision of user-friendly services in combination with big data. Data accumulated from Internet-of-Things (IoT) devices can be used in many ways to build convenience-oriented smart systems, since user-environment and pattern analysis enables customized intelligent services. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a movement-amount control information system to enhance the convenience of citizens and commuters under congested public transportation conditions (subways, urban railways, etc.), it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data face limitations: object-detection performance degrades under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and so can be effectively utilized to build intelligent public services for unspecified large groups of people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily; the temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting real-time sensor data was established at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where actual passenger traffic is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas are set and the differences between the temperatures in the 16 areas and their reference values are calculated per unit of time; this maximizes the contrast of movement within the detection area. In addition, the data were scaled by a factor of 10 to reflect temperature differences between areas more sensitively: for example, a sensor reading of 28.5℃ at a given time was analyzed as 285. The data collected from the sensors thus have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which excels at image classification, with an LSTM, which is especially suitable for analyzing time-series data. In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas. We validated the proposed model through performance comparisons with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In our experiments, the proposed CNN-LSTM hybrid model showed the best predictive performance. By utilizing the proposed devices and models, various metro services free of personal-information concerns are expected, such as real-time monitoring of public transport facilities and congestion-based emergency response services. However, the data were collected from only one side of the entrances, and data collected over a short period were used for prediction, so verification in other environments remains a limitation. In the future, the proposed model is expected to gain reliability if experimental data are collected in more varied environments or if the training data are extended with measurements from other sensors.
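
The preprocessing and the hybrid architecture described above can be sketched with TensorFlow/Keras. The 4×4 grid, the reference-value differencing, and the ×10 scaling follow the abstract; the sequence length, filter count, and layer sizes are illustrative assumptions rather than the authors' configuration:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def preprocess(frames, reference):
    """frames: (time, 4, 4) raw temperatures; reference: (4, 4) per-area
    baselines. Subtracts each area's reference value and scales by 10,
    as in the abstract's example (28.5 C treated as 285)."""
    return (np.asarray(frames) - np.asarray(reference)) * 10.0

def build_cnn_lstm(seq_len=30):
    """CNN-LSTM: a small Conv2D is applied to every 4x4 frame via
    TimeDistributed, an LSTM models the sequence of frame features, and
    a dense head regresses the passing-person count for one detection area."""
    model = models.Sequential([
        layers.Input(shape=(seq_len, 4, 4, 1)),
        layers.TimeDistributed(layers.Conv2D(8, (2, 2), activation="relu",
                                             padding="same")),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(32),
        layers.Dense(1),  # predicted count in the target area
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```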

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM) have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we use a genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution: by applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results, and the mutation operator in particular prevents the GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM, which is why we adopt it as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea, specifically bond rating, the most frequently studied area of credit rating for specific debt issues and other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, credit rating, was labeled in four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). For each class, 80 percent of the data was used for training and the remaining 20 percent for validation, and to mitigate the small sample size we applied five-fold cross-validation. To examine the competitiveness of the proposed model, we also experimented with several comparative models: MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial GA package. The other comparative models were run using statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models; moreover, it used fewer independent variables yet showed higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index of cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. The finally selected kernel parameter values were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test; GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
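
The GA-plus-MSVM idea can be sketched with scikit-learn and a hand-rolled GA. This is a simplified illustration of the technique, not the authors' LIBSVM/Evolver implementation; the RBF kernel, the parameter ranges, the population size, and the operators are all assumptions (scikit-learn's SVC uses one-vs-one decomposition internally, analogous to the OAO approach mentioned above):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(chrom, X, y):
    """Chromosome = [log2(C), log2(gamma), feature mask bits...].
    Returns 5-fold cross-validated accuracy of an RBF-kernel SVC
    trained on the selected feature subset."""
    c, gamma = 2.0 ** chrom[0], 2.0 ** chrom[1]
    mask = chrom[2:].astype(bool)
    if not mask.any():
        return 0.0
    clf = SVC(C=c, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def ga_optimize(X, y, pop_size=20, generations=30, mut_rate=0.1):
    n_feat = X.shape[1]
    # Initial population: random kernel parameters plus random feature masks.
    pop = np.column_stack([
        rng.uniform(-5, 15, pop_size),                       # log2(C)
        rng.uniform(-15, 3, pop_size),                       # log2(gamma)
        rng.integers(0, 2, (pop_size, n_feat)).astype(float),
    ])
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]   # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, a.size)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mut = rng.random(child.size) < mut_rate          # mutation
            child[:2] += mut[:2] * rng.normal(0, 1, 2)       # jitter params
            child[2:] = np.where(mut[2:], 1 - child[2:], child[2:])  # flip bits
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return 2.0 ** best[0], 2.0 ** best[1], best[2:].astype(bool)
```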

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even afterwards, analyses of past corporate defaults focused on specific variables, and when the government restructured immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse such as the Lehman Brothers case of the global financial crisis. The key variables behind corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies have used static models that mostly do not consider changes occurring over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias using a time-series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent through time, we first train a deep learning time-series model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time-series algorithm is conducted on validation data that include the financial crisis period (2007~2008); the resulting model shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then re-trained on the combined training and validation data (2000~2008) with the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time-series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time-series model is robust across the three resulting variable sets. The definition of bankruptcy is the same as in Lee (2015). The independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time-series algorithms are compared. Corporate data pose three difficulties: nonlinear variables, multi-collinearity among variables, and lack of data. The logit model accommodates nonlinearity, the Lasso regression model mitigates the multi-collinearity problem, and the deep learning time-series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and on towards intertwined AI applications. Although the study of corporate default prediction models using time-series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and is more effective in predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time-series research for the financial industry is still scarce. As an initial study of deep learning time-series analysis of corporate defaults, this work is intended to serve as comparative material for non-specialists beginning to combine financial data with deep learning time-series algorithms.
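
The core of the approach above, an LSTM over each firm's sequence of annual financial ratios with a strictly time-ordered split, can be sketched with TensorFlow/Keras. The sequence length, ratio count, and layer sizes below are illustrative assumptions, not the paper's configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_default_model(n_years=3, n_ratios=14):
    """A minimal LSTM classifier over a firm's sequence of annual
    financial-ratio vectors, outputting a default probability."""
    model = models.Sequential([
        layers.Input(shape=(n_years, n_ratios)),
        layers.LSTM(16),
        layers.Dense(1, activation="sigmoid"),  # P(default within horizon)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

# Time-ordered evaluation mirroring the study's design: fit on 2000-2006
# firm-year sequences, tune on 2007-2008 (crisis period), test on 2009,
# so the model is never tuned or tested on data it has already seen.
```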

Long-term Prognostic Value of Dipyridamole Stress Myocardial SPECT (디피리다몰 부하 심근관류 SPECT의 장기예후 예측능)

  • Lee, Dong-Soo;Cheon, Gi-Jeong;Jang, Myung-Jin;Kang, Won-Jun;Chung, June-Key;Lee, Myoung-Mook;Lee, Myung-Chul;Kang, Wee-Chang;Lee, Young-Jo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.34 no.1
    • /
    • pp.39-54
    • /
    • 2000
  • Purpose: Dipyridamole stress myocardial perfusion SPECT can predict prognosis; however, long-term follow-up showed a change in the hazard ratio in patients with suspected coronary artery disease. We investigated for how long a normal SPECT could predict a benign prognosis over long-term follow-up. Materials and Methods: We followed up 1,169 patients, divided into groups in whom coronary angiography was or was not performed. Total cardiac event rates and hard event rates were predicted using clinical, angiographic, and SPECT findings. The predictive values of normal and abnormal SPECT were examined using survival analysis with the Mantel-Haenszel method, multivariate Cox proportional hazards model analysis, and a newly developed statistical method for testing the time-invariance of the hazard rate and the change point of that rate. Results: A reversible perfusion decrease on myocardial perfusion SPECT predicted a higher total cardiac event rate independently of, and incrementally to, the angiographic findings. For the hard event rate, however, myocardial SPECT showed independent but not incremental prognostic value. The hazard ratio of normal perfusion SPECT changed significantly (p<0.001), with the change point of the hazard rate at 4.4 years of follow-up, whereas the hazard ratio of abnormal SPECT did not change. Conclusion: Dipyridamole stress myocardial perfusion SPECT provided independent prognostic information in patients with known or suspected coronary artery disease. A normal perfusion SPECT predicted the lowest event rate for 4.4 years.
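
The survival-analysis workflow above maps onto standard tooling. The sketch below uses the lifelines library's Cox proportional hazards fitter together with its built-in proportional-hazards test, which probes the time-invariance of hazards in the same spirit as the paper's change-point method (the paper's own newly developed test is not reproduced here); the column names and input file are hypothetical:

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

# One row per patient with hypothetical columns: 'years' (follow-up time),
# 'event' (1 = cardiac event), 'abnormal_spect' (1 = reversible perfusion
# decrease), plus any clinical/angiographic covariates.
df = pd.read_csv("spect_followup.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="event")
print(cph.summary)  # hazard ratios and p-values per covariate

# Test whether each covariate's hazard ratio is constant over time,
# analogous to asking whether normal SPECT's protective effect persists.
results = proportional_hazard_test(cph, df, time_transform="rank")
results.print_summary()
```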


NIRS AS AN ESSENTIAL TOOL IN FOOD SAFETY PROGRAMS: FEED INGREDIENTS PREDICTION IN COMMERCIAL COMPOUND FEEDING STUFFS

  • Varo, Ana-Garrido;Maria Dolores Perez Marin;Cabrera, Augusto-Gomez;Jose Emilio Guerrero Ginel;Felix de Paz;Natividad Delgado
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1153-1153
    • /
    • 2001
  • Directive 79/373/EEC on the marketing of compound feeding stuffs provided for a flexible declaration arrangement confined to the indication of the feed materials without stating their quantity, and the possibility was retained to declare categories of feed materials instead of declaring the feed materials themselves. However, the BSE (Bovine Spongiform Encephalopathy) and dioxin crises have demonstrated the inadequacy of the current provisions and the need for detailed qualitative and quantitative information. On 10 January 2000 the Commission submitted to the Council a proposal for a Directive on the marketing of compound feeding stuffs, and the Council adopted a Common Position (EC No 6/2001), published in the Official Journal of the European Communities of 2.2.2001. According to EC No 6/2001, the feed materials contained in compound feeding stuffs intended for animals other than pets must be declared according to their percentage by weight, in descending order of weight, within the following brackets (I: >30%; II: >15 to 30%; III: >5 to 15%; IV: 2 to 5%; V: <2%). For practical reasons, the declarations of feed materials included in the compound feeding stuffs may be provided on an ad hoc label or accompanying document. However, documents alone will not be sufficient to restore public confidence in the animal feed industry. The objective of the present work is to obtain calibration equations for the instantaneous and simultaneous prediction of the chemical composition and the percentage of ingredients of unground compound feeding stuffs. A total of 287 samples of unground compound feeds marketed in Spain were scanned in a FOSS NIRSystems 6500 monochromator using a rectangular cup with a quartz window (16 × 3.5 cm). Calibration equations were obtained for the prediction of moisture ($R^2$ = 0.84, SECV = 0.54), crude protein ($R^2$ = 0.96, SECV = 0.75), fat ($R^2$ = 0.86, SECV = 0.54), crude fiber ($R^2$ = 0.97, SECV = 0.63), and ash ($R^2$ = 0.86, SECV = 0.83). The same set of spectroscopic data was used to predict the ingredient composition of the compound feeds. The preliminary results show that NIRS has an excellent ability ($r^2$ ≥ 0.9; RPD ≥ 3) to predict the percentage of inclusion of alfalfa, sunflower meal, gluten meal, sugar beet pulp, palm meal, poultry meal, total meat meal (meat and bone meal plus poultry meal), and whey. Equations with good predictive performance ($r^2$ ≥ 0.7; 2 ≤ RPD ≤ 3) were obtained for soya bean meal, corn, molasses, animal fat, and lupin meal. The equations obtained for the other constituents (barley, bran, rice, manioc, meat and bone meal, fish meal, calcium carbonate, ammonium chloride, and salt) are accurate enough to fulfil the requirements laid down by the Common Position (EC No 6/2001). NIRS technology should be considered an essential tool in food safety programs.
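
The abstract reports calibration statistics ($R^2$, SECV, RPD) without naming the regression method; partial least squares is the usual choice for NIRS calibration, so the sketch below uses scikit-learn's PLSRegression under that assumption, with hypothetical input files and an assumed number of latent variables:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: (n_samples, n_wavelengths) NIR spectra; y: one reference property
# (e.g. crude protein %). Both files are hypothetical stand-ins.
X = np.load("spectra.npy")
y = np.load("protein.npy")

pls = PLSRegression(n_components=10)  # latent-variable count is an assumption
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()

secv = np.sqrt(np.mean((y - y_cv) ** 2))  # cross-validation RMSE as SECV proxy
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
rpd = y.std() / secv                      # ratio of performance to deviation
print(f"R2={r2:.2f}  SECV={secv:.2f}  RPD={rpd:.1f}")
```

An RPD of 3 or more, the threshold the abstract uses for "excellent ability", means the calibration error is at most a third of the natural spread of the property.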


EEPERF(Experiential Education PERFormance): An Instrument for Measuring Service Quality in Experiential Education (체험형 교육 서비스 품질 측정 항목에 관한 연구: 창의적 체험활동을 중심으로)

  • Park, Ky-Yoon;Kim, Hyun-Sik
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.43-52
    • /
    • 2012
  • As experiential education services grow, the need for proper management increases. Considering that adequate measures are an essential factor for success in managing anything, it is important for managers to use a proper system of metrics to measure the performance of experiential education services. In spite of this need, however, little research has been done to develop a valid and reliable set of metrics for assessing the quality of experiential education services. The current study aims to develop a multi-item instrument for assessing the service quality of experiential education. The procedure is as follows. First, we generated a pool of possible metrics based on the diverse literature on service quality, eliciting possible metric items not only from general service quality metrics such as SERVQUAL and SERVPERF but also from educational service quality metrics such as HEdPERF and PESPERF. Second, specialist teachers in the experiential education area screened the initial metrics to boost face validity. Third, we proceeded with multiple rounds of empirical validation and refined the metrics to determine the final set. Fourth, we examined predictive validity by checking the well-established positive relationship between each dimension of the metrics and customer satisfaction. In sum, starting with the initial pool of scale items elicited from the previous literature and purifying them empirically through surveys, we developed a four-dimensional systemized scale to measure the superiority of experiential education, named "Experiential Education PERFormance" (EEPERF). Our findings indicate that students (consumers) perceive the superiority of the experiential education (EE) service in four dimensions: EE-empathy, EE-reliability, EE-outcome, and EE-landscape. EE-empathy is a judgment in response to the question, "How empathetically does the experiential educational service provider interact with me?" Principal measures are "How well does the service provider understand my needs?" and "How well does the service provider listen to my voice?" Next, EE-reliability is a judgment in response to the question, "How reliably does the experiential educational service provider interact with me?" Major measures are "How reliable is the schedule here?" and "How credible is the service provider?" EE-outcome is a judgment in response to the question, "What results could I get from this experiential educational service encounter?" Representative measures are "How good is the information that I will acquire from this service encounter?" and "How useful is this service encounter in helping me develop creativity?" Finally, EE-landscape is a judgment about the physical environment; essential measures are "How convenient is the access to the service encounter?" and "How well managed are the facilities?" We showed the reliability and validity of this system of metrics; all four dimensions influence customer satisfaction significantly. Practitioners may use the results in planning experiential educational service programs and evaluating each service encounter. The current study is expected to act as a stepping-stone for future scale improvement, where researchers may use the experience quality paradigm that has recently arisen.
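
Scale purification of the kind described above conventionally includes an internal-consistency check; Cronbach's alpha is the standard statistic, although the abstract does not name the specific reliability measure used. A minimal sketch, assuming Likert-type responses for the items of one EEPERF dimension:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) Likert responses for one scale
    dimension (e.g. the EE-empathy items). Returns Cronbach's alpha,
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Values above roughly 0.7 are the usual threshold for retaining a dimension during purification.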


Switching and Leakage-Power Suppressed SRAM for Leakage-Dominant Deep-Submicron CMOS Technologies (초미세 CMOS 공정에서의 스위칭 및 누설전력 억제 SRAM 설계)

  • Choi Hoon-Dae;Min Kyeong-Sik
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.3 s.345
    • /
    • pp.21-32
    • /
    • 2006
  • A new SRAM circuit with row-by-row activation and low-swing write schemes is proposed in this paper to reduce the switching power of active cells as well as the leakage power of sleep cells. By driving the source line of sleep cells to $V_{SSH}$, which is higher than $V_{SS}$, the leakage current can be reduced to 1/100 through the combined effects of reverse body-bias, Drain-Induced Barrier Lowering (DIBL), and negative $V_{GS}$. Moreover, the bit line leakage, which may introduce a fault during the read operation, is eliminated in this new SRAM. The swing voltage on the highly capacitive bit lines is reduced from the conventional $V_{DD}$-to-$V_{SS}$ to $V_{DD}$-to-$V_{SSH}$ during the write operation, greatly saving bit line switching power. Combining the row-by-row activation scheme with the low-swing write requires no additional area penalty. SPICE simulation with the Berkeley Predictive Technology Models estimates that 93% of the leakage power and 43% of the switching power can be saved in a future leakage-dominant 70-nm process. A test chip was fabricated in a $0.35-{\mu}m$ CMOS process to verify the effectiveness and feasibility of the new SRAM; its switching power is measured to be 30% less than the conventional SRAM even with an I/O bit width of only 8. The stored data are confirmed to be retained without loss down to a retention voltage of 1.1 V, which is attributed mainly to the metal shield. The switching power saving is expected to become more significant as the I/O bit width increases.
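
The roughly 1/100 leakage reduction claimed above can be rationalized with a first-order subthreshold current model, $I = I_0 \exp((V_{GS} - V_{TH}) / (n V_T))$. The sketch below uses illustrative parameter values, not figures from the paper, and ignores the reverse body-bias and DIBL contributions that the paper also credits:

```python
import math

def subthreshold_leakage(vgs, vth, n=1.5, vt=0.026, i0=1e-7):
    """First-order subthreshold current: I = I0 * exp((VGS - VTH)/(n*VT)).
    n (subthreshold slope factor), VT (thermal voltage at room temp),
    and I0 are illustrative assumptions."""
    return i0 * math.exp((vgs - vth) / (n * vt))

# Raising the sleep cells' source line to VSSH makes VGS negative for the
# pull-down transistors; even this simple model predicts a reduction of
# two orders of magnitude for a -0.2 V gate-source bias.
i_active = subthreshold_leakage(vgs=0.0, vth=0.3)
i_sleep = subthreshold_leakage(vgs=-0.2, vth=0.3)
print(f"leakage ratio ~ 1/{i_active / i_sleep:.0f}")
```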

Serum Tumor Marker Levels might have Little Significance in Evaluating Neoadjuvant Treatment Response in Locally Advanced Breast Cancer

  • Wang, Yu-Jie;Huang, Xiao-Yan;Mo, Miao;Li, Jian-Wei;Jia, Xiao-Qing;Shao, Zhi-Min;Shen, Zhen-Zhou;Wu, Jiong;Liu, Guang-Yu
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.16 no.11
    • /
    • pp.4603-4608
    • /
    • 2015
  • Background: To determine the potential value of serum tumor markers in predicting pCR (pathological complete response) during neoadjuvant chemotherapy. Materials and Methods: We retrospectively monitored the pre-, mid-, and post-neoadjuvant treatment serum tumor marker concentrations in patients with locally advanced breast cancer (stage II-III) who received pre-surgical chemotherapy, or chemotherapy in combination with targeted therapy, at Fudan University Shanghai Cancer Center between September 2011 and January 2014, and investigated the association of serum tumor marker levels with therapeutic effect. Core needle biopsy samples were assessed using immunohistochemistry (IHC) prior to neoadjuvant treatment to determine hormone receptor, human epidermal growth factor receptor 2 (HER2), and proliferation index Ki67 values. In our study, therapeutic response was evaluated by pCR, defined as the disappearance of all invasive cancer cells from excised tissue (including the primary lesion and axillary lymph nodes) after completion of chemotherapy. Repeated-measures analysis of variance and receiver operating characteristic (ROC) curves were employed for statistical analysis. Results: A total of 348 patients were recruited after excluding patients with incomplete clinical information. Of these, 106 patients achieved pCR status after treatment completion, approximately 30.5% of the study individuals. In addition, 147 patients were HER2 positive, among whom the pCR rate was 45.6% (69 patients). General linear model analysis (repeated-measures ANOVA) showed that the concentration of cancer antigen (CA) 15-3 increased after neoadjuvant chemotherapy in both the pCR and non-pCR groups, with significant differences between the two groups (P=0.008). The areas under the ROC curves (AUCs) of pre-, mid-, and post-treatment CA15-3 concentrations demonstrated low-level predictive value (AUC=0.594, 0.644, and 0.621, respectively). No significant differences in carcinoembryonic antigen (CEA) or CA12-5 serum levels were observed between the pCR and non-pCR groups (P=0.196 and 0.693, respectively), no efficient AUC of CEA or CA12-5 concentrations was observed for predicting patient response to neoadjuvant treatment (both less than 0.7), and no differences between the two groups were observed at the different time points. We then analyzed the HER2-positive subset of our cohort. Significant differences in CEA concentrations were identified between the pCR and non-pCR groups (P=0.039), but not in CA15-3 or CA12-5 levels (P=0.092 and 0.89, respectively). None of the ROC curves showed underlying prognostic value, as the AUCs of these three markers were less than 0.7. The ROC-AUCs for the mid- and post-neoadjuvant chemotherapy CA12-5 concentrations in the estrogen receptor negative, HER2 positive subgroup were 0.735 and 0.767, respectively. However, the specificity and sensitivity values were at odds with each other, meaning that improving either the sensitivity or the specificity would impair the other. Conclusions: The serum tumor markers CA15-3, CA12-5, and CEA might have little clinical significance in predicting neoadjuvant treatment response in locally advanced breast cancer.
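
An AUC computation of the kind reported above is a one-liner with scikit-learn. The arrays below are hypothetical stand-ins for one time point's CA15-3 concentrations, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: 1 = pCR, 0 = non-pCR; marker: serum CA15-3 concentration (U/mL)
# at a single time point (hypothetical values).
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])
marker = np.array([22.1, 35.4, 28.0, 19.5, 40.2, 21.0, 33.3, 30.8])

auc = roc_auc_score(y_true, marker)
# An AUC near 0.5 carries almost no discriminative information; a value
# below 0.5 means the marker trends opposite to the outcome (1 - AUC then
# describes its strength as an inverted predictor). The paper's AUCs of
# roughly 0.6 sit in this weakly predictive range.
print(f"AUC = {auc:.3f}")
```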

Assessment of Two Clinical Prediction Models for a Pulmonary Embolism in Patients with a Suspected Pulmonary Embolism (폐색전증이 의심된 환자에서 두 가지 폐색전증 진단 예측 모형의 평가)

  • Park, Jae Seok;Choi, Won-Il;Min, Bo Ram;Park, Jie Hae;Chae, Jin Nyeong;Jeon, Young June;Yu, Ho Jung;Kim, Ji-Young;Kim, Gyoung-Ju;Ko, Sung-Min
    • Tuberculosis and Respiratory Diseases
    • /
    • v.64 no.4
    • /
    • pp.266-271
    • /
    • 2008
  • Background: Clinical estimation of the probability of an acute pulmonary embolism (PE) in patients with a suspected PE is well established in North America and Europe. However, the prediction rules for PE have not been clearly assessed in Korea. The aim of this study is to assess the prediction rules in patients with a suspected PE in Korea. Methods: We performed a retrospective study of 210 inpatients or emergency ward patients with a suspected PE in whom computed tomography pulmonary angiography was performed at a single institution between January 2005 and March 2007. The simplified Wells rules and the revised Geneva rules were used to estimate the clinical probability of a PE based on information from the medical records. Results: Of the 210 patients with a suspected PE, 49 (19.5%) had a confirmed diagnosis of PE. The proportions of patients classified by the Wells rules and the Geneva rules, respectively, were 1% and 21% low probability, 62.5% and 76.2% intermediate probability, and 33.8% and 2.8% high probability. The prevalence of PE in the low, intermediate, and high probability categories was 100%, 18.2%, and 19.7% by the Wells rules, and 4.5%, 22.5%, and 50% by the Geneva rules, respectively. Receiver operating characteristic curve analysis showed that the revised Geneva rules had a higher accuracy than the Wells rules in detecting PE. Concordance between the two prediction rules was poor (κ coefficient = 0.06). Conclusion: In the present study, the two prediction rules had different predictive accuracies for pulmonary embolism. Applying the revised Geneva rules to inpatients and emergency ward patients suspected of having a PE may allow a more effective diagnostic process than the Wells rules.
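
A prediction rule such as the Wells score is a weighted checklist, so it can be sketched directly. The point values and cut-offs below are the commonly published ones for the Wells PE rule; they are included for illustration and should be verified against the primary source rather than read as this study's exact implementation:

```python
def wells_score(signs_of_dvt, pe_most_likely, heart_rate_gt_100,
                recent_immobilization_or_surgery, previous_dvt_pe,
                hemoptysis, malignancy):
    """Wells score for pulmonary embolism. Each argument is 0 or 1.
    Point values and the three-tier cut-offs (<2 low, 2-6 intermediate,
    >6 high) follow the commonly published version of the rule."""
    score = (3.0 * signs_of_dvt
             + 3.0 * pe_most_likely
             + 1.5 * heart_rate_gt_100
             + 1.5 * recent_immobilization_or_surgery
             + 1.5 * previous_dvt_pe
             + 1.0 * hemoptysis
             + 1.0 * malignancy)
    if score < 2:
        category = "low"
    elif score <= 6:
        category = "intermediate"
    else:
        category = "high"
    return score, category

# Example: tachycardic patient with recent surgery and clinical signs of DVT.
print(wells_score(1, 0, 1, 1, 0, 0, 0))  # (6.0, 'intermediate')
```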