• Title/Summary/Keyword: predictor models

Role and Clinical Importance of Progressive Changes in Echocardiographic Parameters in Predicting Outcomes in Patients With Hypertrophic Cardiomyopathy

  • Kyehwan Kim;Seung Do Lee;Hyo Jin Lee;Hangyul Kim;Hye Ree Kim;Yun Ho Cho;Jeong Yoon Jang;Min Gyu Kang;Jin-Sin Koh;Seok-Jae Hwang;Jin-Yong Hwang;Jeong Rang Park
    • Journal of Cardiovascular Imaging
    • /
    • v.31 no.2
    • /
    • pp.85-95
    • /
    • 2023
  • BACKGROUND: The prognostic utility of follow-up transthoracic echocardiography (FU-TTE) in patients with hypertrophic cardiomyopathy (HCM) is unclear, specifically whether changes in routine FU-TTE parameters are associated with cardiovascular outcomes. METHODS: From 2010 to 2017, 162 patients with HCM were retrospectively enrolled in this study. HCM was diagnosed echocardiographically based on morphological criteria; patients with other diseases that cause cardiac hypertrophy were excluded. TTE parameters at baseline and FU were analyzed. FU-TTE was defined as the last recorded examination in patients who did not develop a cardiovascular event, or as the latest examination before event development. Clinical outcomes were acute heart failure, cardiac death, arrhythmia, ischemic stroke, and cardiogenic syncope. RESULTS: The median interval between baseline TTE and FU-TTE was 3.3 years, and the median clinical FU duration was 4.7 years. Septal trans-mitral velocity/mitral annular tissue Doppler velocity (E/e'), tricuspid regurgitation velocity, left ventricular ejection fraction (LVEF), and left atrial volume index (LAVI) were recorded at baseline. LVEF, LAVI, and E/e' values were associated with poor outcomes, whereas no delta values predicted HCM-related cardiovascular outcomes. Logistic regression models incorporating changes in TTE parameters yielded no significant findings. Baseline LAVI was the best predictor of a poor prognosis. In survival analysis, an LAVI that was already enlarged at baseline or that increased during follow-up was associated with poorer clinical outcomes. CONCLUSIONS: Changes in echocardiographic parameters between baseline and FU-TTE did not assist in predicting clinical outcomes; cross-sectionally evaluated TTE parameters were superior to interval changes at predicting cardiovascular events.
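The comparison the authors describe, models built on baseline TTE values versus models built on interval (delta) changes, can be illustrated with a minimal sketch. This is not the authors' code; the dataset and column names (lavi_baseline, lavi_delta, etc.) are hypothetical.

```python
# Minimal sketch: compare logistic models built on baseline TTE values
# versus interval (delta) changes, as described in the abstract.
# Column names and the data file are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("hcm_echo.csv")  # hypothetical dataset

baseline_cols = ["lavi_baseline", "e_over_e_prime_baseline", "lvef_baseline"]
delta_cols = ["lavi_delta", "e_over_e_prime_delta", "lvef_delta"]

y = df["event"]  # 1 = HCM-related cardiovascular event, 0 = none

for name, cols in [("baseline", baseline_cols), ("delta", delta_cols)]:
    model = LogisticRegression(max_iter=1000)
    auc = cross_val_score(model, df[cols], y, cv=5, scoring="roc_auc").mean()
    print(f"{name} model mean AUC: {auc:.3f}")
```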

Prediction of Decompensation and Death in Advanced Chronic Liver Disease Using Deep Learning Analysis of Gadoxetic Acid-Enhanced MRI

  • Subin Heo;Seung Soo Lee;So Yeon Kim;Young-Suk Lim;Hyo Jung Park;Jee Seok Yoon;Heung-Il Suk;Yu Sub Sung;Bumwoo Park;Ji Sung Lee
    • Korean Journal of Radiology
    • /
    • v.23 no.12
    • /
    • pp.1269-1280
    • /
    • 2022
  • Objective: This study aimed to evaluate the usefulness of quantitative indices obtained from deep learning analysis of gadoxetic acid-enhanced hepatobiliary phase (HBP) MRI and their longitudinal changes in predicting decompensation and death in patients with advanced chronic liver disease (ACLD). Materials and Methods: We included patients who underwent baseline and 1-year follow-up MRI from a prospective cohort that underwent gadoxetic acid-enhanced MRI for hepatocellular carcinoma surveillance between November 2011 and August 2012 at a tertiary medical center. Baseline liver condition was categorized as non-ACLD, compensated ACLD, and decompensated ACLD. The liver-to-spleen signal intensity ratio (LS-SIR) and liver-to-spleen volume ratio (LS-VR) were automatically measured on the HBP images using a deep learning algorithm, and their percentage changes at the 1-year follow-up (ΔLS-SIR and ΔLS-VR) were calculated. The associations of the MRI indices with hepatic decompensation and a composite endpoint of liver-related death or transplantation were evaluated using a competing risk analysis with multivariable Fine and Gray regression models, including baseline parameters alone and both baseline and follow-up parameters. Results: Our study included 280 patients (153 male; mean age ± standard deviation, 57 ± 7.95 years), comprising 32 with non-ACLD, 186 with compensated ACLD, and 62 with decompensated ACLD. Patients were followed for 11-117 months (median, 104 months). In patients with compensated ACLD, baseline LS-SIR (sub-distribution hazard ratio [sHR], 0.81; p = 0.034) and LS-VR (sHR, 0.71; p = 0.01) were independently associated with hepatic decompensation. The ΔLS-VR (sHR, 0.54; p = 0.002) was predictive of hepatic decompensation after adjusting for baseline variables. ΔLS-VR was an independent predictor of liver-related death or transplantation in patients with compensated ACLD (sHR, 0.46; p = 0.026) and decompensated ACLD (sHR, 0.61; p = 0.023). Conclusion: MRI indices automatically derived from deep learning analysis of gadoxetic acid-enhanced HBP MRI can be used as prognostic markers in patients with ACLD.
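As a rough sketch of how a baseline index and its one-year change could be related to a time-to-event outcome, the snippet below fits a cause-specific Cox model with lifelines. Note that the paper used Fine and Gray sub-distribution hazard models for competing risks (available, for example, in R's cmprsk package); the Cox model here is only a simpler stand-in, and the dataset and column names are assumptions.

```python
# Sketch only: a cause-specific Cox model relating baseline LS-VR and its
# 1-year percentage change (delta_ls_vr) to hepatic decompensation.
# The paper used Fine and Gray sub-distribution hazard models; lifelines'
# CoxPHFitter shown here is a simpler approximation. Column names are
# hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("acld_mri.csv")  # hypothetical dataset

# Treat competing events (liver-related death/transplant) as censoring
# for this cause-specific analysis of decompensation (an approximation).
data = df[["time_months", "decompensation", "ls_vr_baseline",
           "delta_ls_vr", "age", "albumin"]]

cph = CoxPHFitter()
cph.fit(data, duration_col="time_months", event_col="decompensation")
cph.print_summary()  # hazard ratios for baseline and delta indices
```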

The Effect of E-SERVQUAL on e-Loyalty for Apparel Online Shopping (在网上服装购物中电子E-SERVQUAL对电子忠诚度的影响)

  • Kim, Eun-Young;Jackson, Vanessa P.
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.4
    • /
    • pp.57-63
    • /
    • 2009
  • With an exponential increase in electronic commerce (e-commerce), marketers are attempting to gain a competitive advantage by emphasizing service quality and post-interaction service aspects, which lead to customer satisfaction or behavioral consequences. Particularly for apparel, service quality is one of the key determinants in encouraging customer e-loyalty, and hence the success of apparel retailing in the context of electronic commerce. Therefore, this study explores e-service quality (E-SERVQUAL) factors and their unique effects on e-loyalty for apparel online shopping based on Parasuraman et al.'s (2005) framework. The specific objectives of this study are to identify the underlying dimensions of E-SERVQUAL and to analyze a structural model examining the effect of E-SERVQUAL on e-loyalty for online apparel shopping. For the theoretical framework of service quality in the context of online shopping, the literature on traditional and electronic service quality factors was comparatively reviewed, and two aspects, core and recovery services, were identified. This study hypothesized that E-SERVQUAL has an effect on e-loyalty, that customer satisfaction has a positive effect on e-service loyalty for apparel online shopping, and that customer satisfaction mediates the effect of E-SERVQUAL on e-loyalty for apparel online shopping. A self-administered questionnaire was developed based on the literature. A total of 252 usable questionnaires were obtained from online consumers who had purchase experience with online shopping for apparel products and resided in standard metropolitan areas in the United States. Exploratory and confirmatory factor analyses were conducted to assess validity and reliability, and the structural equation model, including the measurement and structural models, was estimated with the LISREL 8.8 program. Findings showed that the E-SERVQUAL of shopping websites for apparel consisted of five factors: Compensation, Fulfillment, Efficiency, System Availability, and Responsiveness. This supports Parasuraman et al.'s (2005) E-S-QUAL, which encompasses core service (fulfillment, efficiency, system availability) and recovery-related service (compensation, responsiveness), in the context of apparel shopping online. In the structural equation model, there were five exogenous latent variables for the E-SERVQUAL factors and two endogenous latent variables (customer satisfaction and e-loyalty). For the measurement model, the factor loadings for each construct were statistically significant and greater than .60, and internal consistency reliabilities ranged from .85 to .88. In the estimated structural model of the E-SERVQUAL factors, system availability had a direct and positive effect on e-loyalty, whereas efficiency had a negative effect on e-loyalty for apparel online shopping. However, fulfillment was not a significant predictor of the consequences of E-SERVQUAL for apparel online shopping. This finding implies that perceived system availability was likely to increase customer satisfaction for apparel online shopping. The hypothesis that e-loyalty is determined directly by service quality was not supported, because service quality has only an indirect effect on e-loyalty (i.e., repurchase intention) through the mediating effect of value or satisfaction in the context of online shopping for apparel. In addition, both compensation and responsiveness were found to have a significant impact on customer satisfaction, which in turn influenced e-loyalty for apparel online shopping; thus, there was a significant indirect effect of compensation and responsiveness on e-loyalty. This suggests that recovery-specific service factors play an important role in maximizing customer satisfaction and then maintaining customer loyalty to the online shopping site for apparel. The findings have both managerial and research implications. Fashion marketers can establish long-term relationships with their customers by continuously measuring customer perceptions of recovery-related service quality, such as quick responses to problems and returns and compensation for customers' problems after purchase. To maintain e-loyalty, recovery services play an important role in making a website consumers' first choice for purchasing clothing. Given that online consumers may shop anywhere, a marketing strategy for improving competitive advantage is to provide better service quality, maximize satisfaction, and thereby build customers' e-loyalty for apparel online shopping. From a researcher's perspective, there are some limitations of this research that should be considered when interpreting its findings. The findings provide a basis for further study of this important topic along both theoretical and empirical dimensions. Based on the findings, more comprehensive models for predicting the consequences of E-SERVQUAL can be developed and tested. For global fashion marketing, this study can be expanded into a cross-cultural approach to e-service quality for apparel by including multinational samples.
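The mediation logic the study tests (service-quality factor → customer satisfaction → e-loyalty) can be sketched with plain OLS regressions. The study itself estimated a full structural equation model in LISREL 8.8; the snippet below is only a conceptual stand-in, and the survey file and column names are hypothetical.

```python
# Illustrative sketch of the mediation path (recovery-related factors ->
# satisfaction -> e-loyalty) using simple OLS regressions, not the
# authors' LISREL SEM. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eservqual_survey.csv")  # hypothetical survey data

# Path a: service-quality factors -> customer satisfaction
model_a = smf.ols("satisfaction ~ compensation + responsiveness", data=df).fit()

# Paths b and c': satisfaction (and the factors) -> e-loyalty
model_b = smf.ols("e_loyalty ~ satisfaction + compensation + responsiveness",
                  data=df).fit()

print(model_a.summary())
print(model_b.summary())
```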

The Relationship between the Cognitive Impairment and Mortality in the Rural Elderly (농촌지역 노인들의 인지기능 장애와 사망과의 관련성)

  • Sun, Byeong-Hwan;Park, Kyeong-Soo;Na, Baeg-Ju;Park, Yo-Seop;Nam, Hae-Sung;Shin, Jun-Ho;Sohn, Seok-Joon;Rhee, Jung-Ae
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.3 s.58
    • /
    • pp.630-642
    • /
    • 1997
  • The purpose of this study was to examine the mortality risk associated with cognitive impairment among the rural elderly. The subjects were the 558 participants of 'A Study on the Depression and Cognitive Impairment in the Rural Elderly' by Jung Ae Rhee and Hyang Gyun Jung (1993). Cognitive impairment and other social and health factors were assessed in these 558 elderly rural community residents. A Korean version of the Mini-Mental State Examination (MMSEK) was used as a global indicator of cognitive functioning, and mortality risk factors for each cognitive impairment subgroup were identified by univariate and multivariate Cox regression analysis. At baseline, 22.6% of the sample were mildly impaired and 14.2% were severely impaired. Cognitive function declined with increasing age, the level of cognitive function differed by sex, and variables such as smoking habits and physical disorders were also significantly related to cognitive impairment. Across a 3-year observation period, the mortality rate was 8.5% for the cognitively unimpaired, 11.1% for the mildly impaired, and 16.5% for the severely impaired respondents, and the survival probability was .92, .90, and .86, respectively. Compared with the survival curve for the cognitively unimpaired group, the survival curves for the mildly and severely impaired groups were not significantly different. When no adjustments were made for other health and social covariates, the hazard ratios of death for the mildly and severely impaired were not significantly different from those for the cognitively unimpaired; however, the hazard ratio of death decreased significantly as the MMSEK score increased. In the univariate Cox proportional hazards models, the other statistically significant variables were age, monthly income, smoking habits, and physical disorders. After adjustment for other health and social covariates, there was still no difference in the hazard ratio of death between those with severe or mild impairment and unimpaired persons, and the hazard ratio of death no longer decreased significantly with increasing MMSEK score. In the multivariate Cox proportional hazards model, the other statistically significant variables were age, monthly income, and physical disorders. When the multivariate Cox models were fitted separately by sex, the only statistically significant variable for both men and women was age, and cognitive impairment was not a significant risk factor in either sex. Other investigators have found that cognitive impairment is a significant predictor of mortality, but we did not find it to be one. Although our study did not demonstrate a relationship between cognitive impairment and mortality, early detection of impaired cognition and attention to associated health problems could improve the quality of life of these older adults and perhaps extend their survival.
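The kind of analysis described, survival curves by impairment group plus a multivariate Cox model with MMSEK score and covariates, can be sketched as follows. The data file and column names are hypothetical, not the authors'.

```python
# Sketch: Kaplan-Meier survival by cognitive-impairment group and a
# multivariate Cox model with MMSEK score and covariates, mirroring the
# analysis described in the abstract. Column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("rural_elderly.csv")  # hypothetical dataset

kmf = KaplanMeierFitter()
for group, sub in df.groupby("impairment"):  # unimpaired / mild / severe
    kmf.fit(sub["years"], event_observed=sub["died"], label=str(group))
    print(group, kmf.survival_function_.iloc[-1])  # survival at end of follow-up

cph = CoxPHFitter()
cph.fit(df[["years", "died", "mmsek", "age", "monthly_income",
            "smoking", "physical_disorder"]],
        duration_col="years", event_col="died")
cph.print_summary()  # adjusted hazard ratios
```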

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive increase of electronic commerce provides many advantageous purchase opportunities to customers. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists to users; thus, the product recommender system in an online shopping store is known as one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause disappointment and wasted time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by precisely reflecting users' preferences. The research data were collected from a real-world online shopping store that deals in products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions; 3,167 transactions remained after deletion of records with null values. We transformed the categorical variables into dummy variables and excluded outlier data. The proposed model consists of two steps. The first step predicts customers who have a high likelihood of purchasing products in the online shopping store. In this step, we first use logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group, performing these data mining techniques with SAS E-Miner software. We partition the dataset into modeling and validation sets for the logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model; the validation dataset is the same for all experiments. We then combine the results of each predictor using multi-model ensemble techniques such as bagging and bumping. Bagging, short for "Bootstrap Aggregation," combines outputs from several machine learning techniques to raise the performance and stability of prediction or classification; it is a special form of averaging. Bumping, short for "Bootstrap Umbrella of Model Parameters," keeps only the model with the lowest error value. The results show that bumping outperforms bagging and the other predictors except for the "Poster" product group, for which the artificial neural network model performs better than the other models. In the second step, we use market basket analysis to extract association rules for co-purchased products. Thirty-one association rules were extracted based on lift, support, and confidence measures, with the minimum transaction frequency to support associations set to 5%, the maximum number of items in an association set to 4, and the minimum confidence for rule generation set to 10%. Rules with a lift value below 1 were also excluded, and fifteen association rules remained after removing duplicates. Among the fifteen rules, eleven involve associations between products in the "Office Supplies" product group, one involves an association between the "Office Supplies" and "Fashion" product groups, and the other three involve associations between the "Office Supplies" and "Home Decoration" product groups. Finally, the proposed product recommender system provides a list of recommendations to the appropriate customers. We tested the usability of the proposed system using a prototype and real-world transaction and profile data. To this end, we constructed the prototype system using ASP, JavaScript, and Microsoft Access. In addition, we surveyed user satisfaction with the recommended product list from the proposed system and with randomly selected product lists. The survey participants were 173 persons who use MSN Messenger, Daum Café, and P2P services. We evaluated user satisfaction using a five-point Likert scale and performed a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product list. The results also show that the proposed system may be useful in a real-world online shopping store.
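The "bumping" ensemble described above (fit a model on several bootstrap samples and keep the one with the lowest error on the original data) can be illustrated with a small sketch. This is not the authors' SAS E-Miner implementation; the data are synthetic and the base learner is an arbitrary choice.

```python
# Minimal sketch of "bumping": fit a model on several bootstrap samples
# and keep the one with the lowest error on the original training data.
# Synthetic data and an arbitrary base learner; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)

best_model, best_err = None, np.inf
for _ in range(25):                        # number of bootstrap rounds
    idx = rng.integers(0, len(X), len(X))  # bootstrap sample with replacement
    model = DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx])
    err = 1.0 - model.score(X, y)          # error on the original data
    if err < best_err:
        best_model, best_err = model, err

print(f"selected bumping model error: {best_err:.3f}")
```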

Classification Algorithm-based Prediction Performance of Order Imbalance Information on Short-Term Stock Price (분류 알고리즘 기반 주문 불균형 정보의 단기 주가 예측 성과)

  • Kim, S.W.
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.157-177
    • /
    • 2022
  • Investors trade stocks while keeping a close watch on the order information submitted in real time by domestic and foreign investors through the Limit Order Book, the so-called current-price window provided by securities firms. Is the order information released in the Limit Order Book useful for stock price prediction? This study analyzes whether order imbalance, which appears when investors' buy and sell orders are concentrated on one side during intraday trading, is a significant predictor of whether future stock prices move up or down. Using classification algorithms, this study improved the accuracy with which order imbalance information predicts the short-term price trend, that is, whether the day's closing price moves up or down. Day trading strategies are proposed using the price trends predicted by the classification algorithms, and their trading performances are analyzed through empirical analysis. Five-minute KOSPI200 Index Futures data were analyzed for 4,564 days from January 19, 2004 to June 30, 2022. The results of the empirical analysis are as follows. First, order imbalance information has a significant impact on current stock prices. Second, order imbalance information observed in the early morning has significant forecasting power for the price trend from the early morning to the market close. Third, the Support Vector Machines algorithm showed the highest prediction accuracy for the day's closing price trend using order imbalance information, at 54.1%. Fourth, order imbalance information measured early in the day had higher prediction accuracy than order imbalance information measured later in the day. Fifth, the trading performances of the day trading strategies using the classification algorithms' predicted price trends were higher than that of the benchmark trading strategy. Sixth, except for the K-Nearest Neighbor algorithm, all investment strategies using the classification algorithms showed higher average total profits than the benchmark strategy. Seventh, the trading performances using the predictions of the Logistic Regression, Random Forest, Support Vector Machines, and XGBoost algorithms exceeded the benchmark strategy in the Sharpe Ratio, which evaluates both profitability and risk. This study differs academically from existing studies in that it documents the economic value of the total buy and sell order volume information within the Limit Order Book. The empirical results are also valuable to market participants from a trading perspective. Future studies should improve trading strategy performance with more accurate price predictions by extending the approach to deep learning models, which have recently been actively studied for stock price prediction.
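The basic setup, classifying the day's closing direction from early intraday order-imbalance features with a Support Vector Machine, can be sketched as below. The data file, feature columns, and label definition are assumptions for illustration, not the study's actual variables.

```python
# Sketch: classify the day's closing direction (up/down) from early
# intraday order-imbalance features with an SVM, mirroring the setup
# described in the abstract. Feature and column names are hypothetical.
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("kospi200_5min_imbalance.csv")  # hypothetical dataset

X = df[["order_imbalance_0930", "order_imbalance_1000"]]
y = df["close_up"]  # 1 if the close is above the reference price, else 0

# Chronological split (no shuffling) to respect the time-series ordering
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```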

Tokamak plasma disruption precursor onset time study based on semi-supervised anomaly detection

  • X.K. Ai;W. Zheng;M. Zhang;D.L. Chen;C.S. Shen;B.H. Guo;B.J. Xiao;Y. Zhong;N.C. Wang;Z.J. Yang;Z.P. Chen;Z.Y. Chen;Y.H. Ding;Y. Pan
    • Nuclear Engineering and Technology
    • /
    • v.56 no.4
    • /
    • pp.1501-1512
    • /
    • 2024
  • Plasma disruption in tokamak experiments is a challenging issue that causes damage to the device. Reliable prediction methods are needed, but the lack of a full understanding of plasma disruption limits the effectiveness of physics-driven methods. Data-driven methods based on supervised learning are commonly used, and they rely on labelled training data. However, manual labelling of disruption precursors is a time-consuming and challenging task, as some precursors are difficult to identify accurately. Mainstream labelling methods assume that the precursor onset occurs at a fixed time before disruption, which leads to mislabelled samples and suboptimal prediction performance. In this paper, we present disruption prediction methods based on anomaly detection to address these issues, demonstrating good prediction performance on J-TEXT and EAST. By evaluating precursor onset times using different anomaly detection algorithms, it is found that labelling methods can be improved, since the onset times of different shots are not necessarily the same. The study optimizes precursor labelling using the onset times inferred by the anomaly detection predictor and tests the optimized labels on supervised learning disruption predictors. The results on J-TEXT and EAST show that models trained on the optimized labels outperform those trained on fixed onset time labels.
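The labelling idea, inferring a per-shot precursor onset time with an anomaly detector instead of assuming a fixed time before disruption, can be illustrated with a small sketch. This uses synthetic arrays and a generic IsolationForest detector; it is not the J-TEXT/EAST pipeline, and the 5-window persistence rule is an assumption.

```python
# Sketch of inferring a per-shot precursor onset time with an anomaly
# detector: train on windows from non-disruptive shots, then take the
# first sustained anomalous window of a disruptive shot as its onset.
# Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_windows = rng.normal(size=(2000, 8))   # features from safe shots
shot_windows = rng.normal(size=(300, 8))      # one disruptive shot
shot_windows[250:] += 3.0                     # drifting precursor phase

det = IsolationForest(random_state=0).fit(normal_windows)
flags = det.predict(shot_windows) == -1       # -1 marks anomalies

# first index where 5 consecutive windows are anomalous -> onset label
onset = next(i for i in range(len(flags) - 4) if flags[i:i + 5].all())
print("inferred precursor onset window:", onset)
```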