• Title/Summary/Keyword: Predictive Risk Model

Vision-based Predictive Model on Particulates via Deep Learning

  • Kim, SungHwan; Kim, Songi
    • Journal of Electrical Engineering and Technology / v.13 no.5 / pp.2107-2115 / 2018
  • Over recent years, high concentrations of particulate matter (fine dust) in South Korea have raised considerable public-health concern. Tracking and reporting $PM_{10}$ measurements to the public on a real-time basis is intractable; worse, the available records amount only to particulate concentrations averaged over particular regions. Under these circumstances, people are exposed to the risk of rapidly dispersing air pollution. To address this challenge, we build a deep-learning model that predicts the concentration of particulates ($PM_{10}$). The proposed method learns a binary decision rule from video sequences to predict whether the real-time level of particulates ($PM_{10}$) is harmful (>$80{\mu}g/m^3$) or not. To the best of our knowledge, no vision-based $PM_{10}$ measurement method has previously been proposed in atmospheric research. In experimental studies, the proposed model outperforms existing algorithms by virtue of its convolutional deep learning networks. We therefore expect this vision-based predictive model to be well placed to handle upcoming challenges in particulate measurement.
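The binary decision rule described above labels each observation as harmful when $PM_{10}$ exceeds $80{\mu}g/m^3$. A minimal sketch of that labeling step as it might be applied to training frames; the function name and the sample readings are illustrative, not from the paper:

```python
# Minimal sketch of the paper's binary decision rule: a video frame is
# labeled "harmful" (1) when the PM10 reading measured alongside it
# exceeds 80 ug/m^3, else "not harmful" (0). Sample data are illustrative.

HARMFUL_THRESHOLD = 80.0  # ug/m^3, the cutoff stated in the abstract

def label_frame(pm10_reading: float) -> int:
    """Return 1 (harmful) if PM10 > 80 ug/m^3, else 0."""
    return 1 if pm10_reading > HARMFUL_THRESHOLD else 0

# Hypothetical PM10 readings paired with video frames
readings = [35.2, 81.5, 120.0, 79.9, 80.0]
labels = [label_frame(r) for r in readings]
print(labels)  # [0, 1, 1, 0, 0] -- note 80.0 itself is not harmful
```

These labels would then serve as the targets for the convolutional network, which sees only the frames at prediction time.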

Analysis of SEER Glassy Cell Carcinoma Data: Underuse of Radiotherapy and Predictors of Cause Specific Survival

  • Cheung, Rex
    • Asian Pacific Journal of Cancer Prevention / v.17 no.1 / pp.353-356 / 2016
  • Background: This study used the receiver operating characteristic (ROC) curve to analyze Surveillance, Epidemiology and End Results (SEER) glassy cell carcinoma data to identify predictive models and potential disparities in outcome. Materials and Methods: This study analyzed socio-economic, staging and treatment factors. For risk modeling, each factor was fitted with a generalized linear model to predict cause specific survival. Areas under the ROC curves were computed. Similar strata were combined to construct the most parsimonious models. A random sampling algorithm was used to estimate modeling errors. The risk of glassy cell carcinoma death was computed for each predictor for comparison. Results: There were 79 patients included in this study. The mean follow-up time (S.D.) was 37 (32.8) months. Female patients outnumbered males 4:1. The mean (S.D.) age was 54.4 (19.8) years. SEER stage was the most predictive factor of outcome (ROC area of 0.69). The risks of cause specific death were, respectively, 9.4% for localized, 16.7% for regional, 35% for the un-staged/others category, and 60% for distant disease. After optimization, the separation between the regional and un-staged/others categories was removed, yielding a higher ROC area of 0.72. Several socio-economic factors had small but measurable effects on outcome. Radiotherapy had not been used in 90% of patients with regional disease. Conclusions: The optimized SEER stage was predictive and useful in treatment selection. Underuse of radiotherapy may have contributed to poor outcome.
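The ROC-area statistic used throughout the study has a simple rank interpretation: it is the probability that a randomly chosen death receives a higher predicted risk than a randomly chosen survivor. A minimal pure-Python sketch (the risk scores and outcomes are invented for illustration, not SEER data):

```python
# Area under the ROC curve (AUC) via the Mann-Whitney U statistic:
# the probability that a randomly chosen positive case (cause-specific
# death) scores higher than a randomly chosen negative case, with ties
# counted as half. Scores and outcomes below are illustrative only.

def roc_auc(scores, outcomes):
    """scores: predicted risk; outcomes: 1 = cause-specific death, 0 = alive."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores   = [0.09, 0.35, 0.17, 0.60, 0.17, 0.35]  # e.g., stage-based risks
outcomes = [0,    0,    1,    1,    0,    1]
print(round(roc_auc(scores, outcomes), 3))  # 0.778
```

A stage variable with an AUC of 0.69, as reported above, separates deaths from survivors modestly better than chance (0.5).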

The Development of an Obesity Index Model as a Complement to BMI for Adults: Using the Blood Data of KNHANES

  • Ko, Kwanghee; Oh, Chunyoung
    • Honam Mathematical Journal / v.43 no.4 / pp.717-739 / 2021
  • We used blood data to predict obesity, complementing the BMI risk measure, because some blood factors are significantly associated with obesity. A two-step stratified cluster sampling method was applied to sixteen blood factors collected by the Korea National Health and Nutrition Examination Survey (KNHANES). The final model retains 6-8 effective blood factors, the exact number differing somewhat by age and gender. The coefficient of determination, which represents the predictive power of the regression model for obesity, is highest for both men and women aged 19 and in their 20s and 30s, and predictive power decreases with increasing age.
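The coefficient of determination used above to compare predictive power across age and gender groups can be computed directly from observed and fitted values. A minimal sketch (the obesity-index values below are made up for illustration):

```python
# Coefficient of determination (R^2): the fraction of variance in the
# observed values explained by the regression's predictions.
# Observed/predicted values below are illustrative, not KNHANES data.

def r_squared(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))  # residual
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)              # total
    return 1 - ss_res / ss_tot

observed  = [22.1, 25.4, 30.2, 27.8, 24.0]
predicted = [22.8, 25.0, 29.5, 27.0, 24.6]
print(round(r_squared(observed, predicted), 3))  # 0.947
```

A value near 1 means the blood factors explain most of the variation; the study's finding is that this value falls as age increases.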

Effect of gemigliptin on cardiac ischemia/reperfusion and spontaneous hypertensive rat models

  • Nam, Dae-Hwan; Park, Jinsook; Park, Sun-Hyun; Kim, Ki-Suk; Baek, Eun Bok
    • The Korean Journal of Physiology and Pharmacology / v.23 no.5 / pp.329-334 / 2019
  • Diabetes is associated with an increased risk of cardiovascular complications. Dipeptidyl peptidase-4 (DPP-IV) inhibitors are used clinically as antidiabetic agents to reduce high blood glucose levels. However, the effect of the DPP-IV inhibitor gemigliptin on ischemia/reperfusion (I/R)-induced myocardial injury and hypertension is unknown. In this study, we assessed the effects and mechanisms of gemigliptin in rat models of myocardial I/R injury and spontaneous hypertension. Gemigliptin (20 and 100 mg/kg/d) or vehicle was administered intragastrically to Sprague-Dawley rats for 4 weeks before induction of I/R injury. Gemigliptin exerted a preventive effect on I/R injury, improving hemodynamic function and reducing infarct size compared to the vehicle control group. Moreover, administration of gemigliptin (0.03% and 0.15%) as a powder in food for 4 weeks reversed hypertrophy and improved diastolic function in spontaneously hypertensive rats. We report here a novel effect of gemigliptin on I/R injury and hypertension.

An Approximation Method in Bayesian Prediction of Nuclear Power Plant Accidents (원자력 발전소 사고의 근사적인 베이지안 예측기법)

  • Yang, Hee-Joong
    • Journal of Korean Institute of Industrial Engineers / v.16 no.2 / pp.135-147 / 1990
  • A nuclear power plant can be viewed as a large, complex man-machine system in which high system reliability is obtained by ensuring that sub-systems are designed to operate at a very high level of performance. The chance of a severe accident involving at least a partial core-melt is very low, but once it happens the consequences are catastrophic. The prediction of risk in low-probability, high-consequence incidents must be examined in the context of general engineering knowledge and operational experience. Engineering knowledge forms part of the prior information that must be quantified and then updated by statistical evidence gathered from operational experience. Recently, Bayesian procedures have been used to estimate accident rates and to predict future risks. The Bayesian procedure has the advantage that it efficiently incorporates experts' opinions and, if properly applied, adaptively updates model parameters such as the rate or probability of accidents. At the same time, it has the disadvantage of computational complexity. The predictive distribution for the time to the next incident cannot always be expected to take a nice closed form, even with conjugate priors. Thus we often encounter a high-dimensional numerical integration problem in obtaining the predictive distribution, which is practically unsolvable for a model that involves many parameters. To circumvent this difficulty, we propose an approximation method that breaks a problem involving many integrations into several repetitive steps, so that each step involves only a small number of integrations.
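In the simplest conjugate case the abstract alludes to, the Bayesian update is closed-form: a Gamma prior on a Poisson accident rate stays Gamma after observing data. A minimal sketch (all numbers are hypothetical, chosen only to illustrate the mechanics):

```python
# Conjugate Bayesian updating for an accident rate: with a Gamma(a, b)
# prior on a Poisson rate, observing k incidents over t exposure-years
# yields a Gamma(a + k, b + t) posterior with mean (a + k) / (b + t).
# All numbers below are hypothetical.

def posterior_rate(a, b, incidents, exposure_years):
    """Return posterior Gamma parameters and posterior mean rate."""
    a_post = a + incidents
    b_post = b + exposure_years
    return a_post, b_post, a_post / b_post

# Prior encoding expert opinion: roughly 1 incident per 100 reactor-years
a_post, b_post, mean_rate = posterior_rate(a=1.0, b=100.0,
                                           incidents=2, exposure_years=300.0)
print(a_post, b_post, round(mean_rate, 4))  # 3.0 400.0 0.0075
```

The paper's point is that once the model involves many parameters, no such closed form exists, and obtaining the predictive distribution requires high-dimensional numerical integration, which the proposed approximation breaks into repeated low-dimensional steps.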

A Study on the Statistical Model Validation using Response-adaptive Experimental Design (반응적응 시험설계법을 이용하는 통계적 해석모델 검증 기법 연구)

  • Jung, Byung Chang; Huh, Young-Chul; Moon, Seok-Jun; Kim, Young Joong
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2014.10a / pp.347-349 / 2014
  • Model verification and validation (V&V) is a current research topic addressing the general concepts, processes and statistical techniques needed to build computational models with high predictive capability. The hypothesis test for validity checking is one such model validation technique; it gives a guideline for evaluating the validity of a computational model when only limited experimental data exist due to restricted test resources (e.g., time and budget). The hypothesis test for validity checking mainly employs Type I error, the risk of rejecting a valid computational model, for the validity evaluation, since quantification of Type II error is not feasible for model validation. However, Type II error, the risk of accepting an invalid computational model, must be considered for engineered products whose predicted results carry high risk. This paper proposes a technique, named the response-adaptive experimental design, that reduces Type II error by adaptively designing the experimental conditions for the validation experiment. A tire tread block problem and a numerical example are employed to show the effectiveness of the response-adaptive experimental design for validity evaluation.
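The trade-off between the two error types can be illustrated by simulation. Below is a minimal sketch in which a validity check rejects a computational model when the mean absolute prediction-experiment discrepancy exceeds a critical value; the discrepancy distributions, the critical value, and the sample sizes are invented for illustration, not taken from the paper:

```python
# Simulation sketch of Type I error (rejecting a valid model) and
# Type II error (accepting an invalid model) for a simple validity check:
# reject when the mean absolute discrepancy between prediction and
# experiment exceeds a critical value. All distributions are hypothetical.

import random
import statistics

random.seed(0)

def reject(discrepancies, critical=1.0):
    """Reject model validity if the mean absolute discrepancy is too large."""
    return statistics.mean(abs(d) for d in discrepancies) > critical

def rejection_rate(true_bias, n_experiments, trials=2000):
    """Fraction of simulated validation campaigns that reject the model."""
    rejections = 0
    for _ in range(trials):
        data = [random.gauss(true_bias, 1.0) for _ in range(n_experiments)]
        if reject(data):
            rejections += 1
    return rejections / trials

type1 = rejection_rate(true_bias=0.0, n_experiments=5)      # valid model
type2 = 1 - rejection_rate(true_bias=2.0, n_experiments=5)  # invalid model
print(type1, type2)
```

Adding more (or more informative) validation experiments shrinks the Type II error; choosing *where* to run those experiments is what the response-adaptive design targets.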

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul; Kim, Jaeseong; Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock-price volatility based on the Merton model. This solved the problem of data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock-price information can also be appropriately derived. The model can therefore provide stable default risk assessment services to companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation method.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences, pairs of forecasts between the Stacking Ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the forecasts of the Stacking Ensemble model differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based default risk prediction, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The Stacking Ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will see practical use by overcoming and improving the limitations of existing machine learning-based models.
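The Merton-model target variable described above can be sketched in simplified form: equity is treated as a call option on firm assets, and the default probability over a horizon is N(-DD), where DD is the distance to default. A minimal illustration, assuming the asset value and asset volatility are already known (in practice they are backed out iteratively from market capitalization and equity volatility); all numbers are hypothetical:

```python
# Simplified Merton-model default probability: the firm defaults at
# horizon T if its asset value falls below the face value of debt.
# Under geometric Brownian motion, P(default) = N(-DD), where DD is the
# distance to default. Asset value/volatility are assumed given here.

import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_prob(assets, debt, mu, sigma, horizon=1.0):
    """P(asset value < debt at horizon). All inputs are illustrative."""
    dd = (math.log(assets / debt) + (mu - 0.5 * sigma ** 2) * horizon) \
         / (sigma * math.sqrt(horizon))
    return norm_cdf(-dd)

# Hypothetical firm: assets 150, debt due 100, 7% drift, 30% asset volatility
p = merton_default_prob(assets=150.0, debt=100.0, mu=0.07, sigma=0.30)
print(round(p, 4))
```

Using a continuous default probability as the learning target, rather than rare observed default events, is what lets the study sidestep the class-imbalance problem it describes.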

Under-use of Radiotherapy in Stage III Bronchioloalveolar Lung Cancer and Socio-economic Disparities in Cause Specific Survival: a Population Study

  • Cheung, Min Rex
    • Asian Pacific Journal of Cancer Prevention / v.15 no.9 / pp.4091-4094 / 2014
  • Background: This study used the receiver operating characteristic (ROC) curve to analyze Surveillance, Epidemiology and End Results (SEER) bronchioloalveolar carcinoma data to identify predictive models and potential disparities in outcome. Materials and Methods: Socio-economic, staging and treatment factors were assessed. For risk modeling, each factor was fitted with a generalized linear model to predict cause specific survival. The area under the ROC curve was computed. Similar strata were combined to construct the most parsimonious models. A random sampling algorithm was used to estimate modeling errors. The risk of cause specific death was computed for each predictor for comparison. Results: There were 7,309 patients included in this study. The mean follow-up time (S.D.) was 24.2 (20) months. Female patients outnumbered male ones 3:2. The mean (S.D.) age was 70.1 (10.6) years. Stage was the most predictive factor of outcome (ROC area of 0.76). After optimization, several strata were fused, with a comparable ROC area of 0.75. There was a 4% additional risk of death associated with lower county family income, African American race, rural residency, and a county college-graduation rate below 25%. Radiotherapy had not been used in two-thirds of patients with stage III disease. Conclusions: There are socio-economic disparities in cause specific survival. Under-use of radiotherapy may have contributed to poor outcome. Improving education, access, and rates of radiotherapy use may improve outcome.

Development of a Malignancy Potential Binary Prediction Model Based on Deep Learning for the Mitotic Count of Local Primary Gastrointestinal Stromal Tumors

  • Jiejin Yang; Zeyang Chen; Weipeng Liu; Xiangpeng Wang; Shuai Ma; Feifei Jin; Xiaoying Wang
    • Korean Journal of Radiology / v.22 no.3 / pp.344-353 / 2021
  • Objective: The mitotic count of gastrointestinal stromal tumors (GIST) is closely associated with the risk of seeding and metastasis. The purpose of this study was to develop a predictive model for the mitotic count of local primary GIST based on a deep learning algorithm. Materials and Methods: Abdominal contrast-enhanced CT images of 148 pathologically confirmed GIST cases were retrospectively collected for the development of a deep learning classification algorithm. The areas of GIST masses on the CT images were retrospectively labelled by an experienced radiologist. The postoperative pathological mitotic count was considered the gold standard (high mitotic count, > 5/50 high-power fields [HPFs]; low mitotic count, ≤ 5/50 HPFs). A binary classification model was trained on the basis of the VGG16 convolutional neural network, using the CT images of the training set (n = 108), validation set (n = 20), and test set (n = 20). The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated at both the image level and the patient level. Receiver operating characteristic curves were generated from the model prediction results, and the areas under the curve (AUCs) were calculated. The risk categories of the tumors were predicted according to the Armed Forces Institute of Pathology criteria. Results: At the image level, the classification prediction results for the mitotic counts in the test cohort were as follows: sensitivity 85.7% (95% confidence interval [CI]: 0.834-0.877), specificity 67.5% (95% CI: 0.636-0.712), PPV 82.1% (95% CI: 0.797-0.843), NPV 73.0% (95% CI: 0.691-0.766), and AUC 0.771 (95% CI: 0.750-0.791). At the patient level, the classification prediction results in the test cohort were as follows: sensitivity 90.0% (95% CI: 0.541-0.995), specificity 70.0% (95% CI: 0.354-0.919), PPV 75.0% (95% CI: 0.428-0.933), NPV 87.5% (95% CI: 0.467-0.993), and AUC 0.800 (95% CI: 0.563-0.943).
Conclusion: We developed and preliminarily verified a binary prediction model for the GIST mitotic count based on the VGG convolutional neural network. The model displayed good predictive performance.
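The four patient-level metrics reported above all derive from a single 2x2 confusion matrix. With a 20-patient test cohort split 10/10 between high and low mitotic counts (an assumption consistent with the reported figures), sensitivity 90.0% and specificity 70.0% imply TP = 9, FN = 1, TN = 7, FP = 3, which reproduces the reported PPV and NPV point estimates:

```python
# Deriving sensitivity, specificity, PPV, and NPV from a 2x2 confusion
# matrix. The counts below are implied by the reported patient-level
# results under the assumption of a 10/10 high/low split in the test set.

def confusion_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = confusion_metrics(tp=9, fp=3, tn=7, fn=1)
print(m)  # sensitivity 0.9, specificity 0.7, ppv 0.75, npv 0.875
```

The match with the reported 75.0% PPV and 87.5% NPV confirms the internal consistency of the patient-level results.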

Development of Prediction Model for Diabetes Using Machine Learning

  • Kim, Duck-Jin; Quan, Zhixuan
    • Korean Journal of Artificial Intelligence / v.6 no.1 / pp.16-20 / 2018
  • The development of modern information technology has increased the amount of big data about patients and diseases. In this study, we developed a prediction model for diabetes using the health examination data provided by the public data portal in 2016. In addition, we graphically visualized diabetes incidence by sex, age, residence area, and income level. The incidence of diabetes differed by residence area and income level, and the prediction accuracy for both males and females was about 65%. It can also be confirmed that factor X in males and factor Y in females strongly affects diabetes. This predictive model can be used to identify high-risk and low-risk diabetes patients and to alert seriously affected patients, thereby substantially improving the re-admission rate. Ultimately, it can contribute to improving public health and reducing chronic disease management costs through continuous target selection and management.