• Title/Summary/Keyword: Adjusted Model

Impact of Risk Adjustment with Insurance Claims Data on Cesarean Delivery Rates of Healthcare Organizations in Korea (건강보험 청구명세서 자료를 이용한 제왕절개 분만율 위험도 보정의 효과)

  • Lee, Sang-Il;Seo, Kyung;Do, Young-Mi;Lee, Kwang-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.38 no.2
    • /
    • pp.132-140
    • /
    • 2005
  • Objectives: To propose a risk-adjustment model from insurance claims data, and to analyze the changes in cesarean section rates of healthcare organizations after adjusting for risk distribution. Methods: The study sample included delivery claims data from January to September 2003. A risk-adjustment model was built using the 1st quarter data, and the 2nd and 3rd quarter data were used for a validation test. Patients' risk factors were adjusted using a logistic regression analysis. The c-statistic and Hosmer-Lemeshow test were used to evaluate the performance of the risk-adjustment model. Crude, predicted and risk-adjusted rates were calculated and compared to analyze the effects of the adjustment. Results: Nine risk factors (malpresentation, eclampsia, malignancy, multiple pregnancies, placental problems, previous cesarean section, older mothers, bleeding and diabetes) were included in the final risk-adjustment model, and were found to have statistically significant effects on the mode of delivery. The c-statistic (0.78) and Hosmer-Lemeshow test ($\chi^2$=0.60, p=0.439) indicated good model performance. After applying the 2nd and 3rd quarter data to the model, there were no differences in the c-statistic and Hosmer-Lemeshow $\chi^2$. Risk-factor adjustment also led to changes in the ranking of hospital cesarean section rates, especially among tertiary and general hospitals. Conclusion: This study showed that the model's performance was comparable to the results of previous studies based on data abstracted from medical records. Insurance claims data can therefore be used for identifying areas where risk factors should be adjusted. The changes in the ranking of hospital cesarean section rates imply that crude rates can mislead people, and that risk should therefore be adjusted before the rates are released to the public. The proposed risk-adjustment model can be applied for fair comparisons of the rates between hospitals.
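
A minimal sketch of the indirect-standardization logic described in this abstract, assuming a patient-level table with a binary cesarean indicator, a hospital identifier, and dummy-coded risk factors (all column names are illustrative, not the authors' variables):

```python
# Hedged sketch: logistic risk-adjustment model + indirectly standardized hospital rates.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def risk_adjusted_rates(df: pd.DataFrame, risk_cols: list[str]) -> pd.DataFrame:
    X, y = df[risk_cols].to_numpy(), df["cesarean"].to_numpy()
    model = LogisticRegression(max_iter=1000).fit(X, y)          # patient-level risk model
    df = df.assign(predicted=model.predict_proba(X)[:, 1])
    print("c-statistic:", roc_auc_score(y, df["predicted"]))     # discrimination, cf. 0.78 above
    overall = y.mean()
    by_hosp = df.groupby("hospital_id").agg(observed=("cesarean", "mean"),
                                            expected=("predicted", "mean"))
    # Indirect standardization: adjusted rate = (observed / expected) * overall rate.
    by_hosp["adjusted"] = by_hosp["observed"] / by_hosp["expected"] * overall
    return by_hosp.sort_values("adjusted")                       # ranking may shift vs. crude rates
```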

The Variation Factors of Severity-Adjusted Length of Stay in CABG (관상동맥우회술 시행환자의 중증도 보정 재원일수 변이에 관한 연구)

  • Kim, Sun-Ja;Kang, Sung-Hong;Kim, Won-Joong;Kim, Yoo-Mi
    • Journal of Korean Society for Quality Management
    • /
    • v.39 no.3
    • /
    • pp.391-399
    • /
    • 2011
  • Our study was carried out to analyze the variation factors of severity-adjusted length of stay (LOS) in coronary artery bypass graft (CABG) surgery. The subjects were 932 CABG inpatients from the Korean National Hospital Discharge In-depth Injury Survey from 2004 through 2008. The data were analyzed using the $\chi^2$ test, and the severity-adjusted model was developed using data mining techniques. The results of the study were as follows: male patients (71.1%), patients older than 61 years of age (61.6%), hospitals with more than 500 beds (92.8%) and admission via ambulatory care (70.0%) showed higher rates than their counterparts. In-hospital mortality of CABG inpatients was 2.8%. In addition, 46.4% of the patients received their care outside their region of residence. Angina pectoris (45.6%) was the most frequent principal diagnosis, followed by chronic ischemic heart disease (36.9%) and acute myocardial infarction (12.0%). We developed a severity-adjusted LOS model using variables such as gender, age and comorbidity. Comparison of adjusted values of predicted LOS revealed significant variations in LOS by hospital location, bed size, and whether patients received care within their region of residence. The variations in LOS can be interpreted as an indirect indicator of variation in the quality of the medical process. It is suggested that the severity-adjusted LOS model developed in this study be utilized as a useful method for benchmarking in hospitals, and that a national standard clinical practice guideline be developed.
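
As a rough illustration of the variation analysis in this entry, the sketch below compares observed-minus-predicted LOS across hospital groups with a one-way ANOVA; the column and group names are assumptions, and the severity model that produces the predicted LOS is not reproduced here:

```python
# Hedged sketch: test whether severity-adjusted LOS varies across hospital characteristics.
import pandas as pd
from scipy import stats

def los_variation(df: pd.DataFrame, group_col: str):
    df = df.assign(excess_los=df["observed_los"] - df["predicted_los"])   # severity-adjusted gap
    groups = [g["excess_los"].to_numpy() for _, g in df.groupby(group_col)]
    return stats.f_oneway(*groups)                                        # (F statistic, p-value)

# e.g. los_variation(cabg_df, "bed_size_class") or los_variation(cabg_df, "hospital_region")
```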

Comparative Study on Imputation Procedures in Exponential Regression Model with missing values

  • Park, Young-Sool;Kim, Soon-Kwi
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.2
    • /
    • pp.143-152
    • /
    • 2003
  • A data set having missing observations is often completed by using imputed values. In this paper, the performance and accuracy of five imputation procedures are evaluated when missing values exist only in the response variable of an exponential regression model. Our simulation results show that the adjusted exponential regression imputation procedure compensates well for missing data compared with the other imputation procedures. An illustrative example using real data is provided.
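
A hedged sketch of plain regression imputation for a response that is missing in an exponential regression model (treated here as a Gamma GLM with log link and unit shape); this is a generic baseline for comparison, not the paper's adjusted procedure, and the column names are placeholders:

```python
# Hedged sketch: fill missing responses with fitted means from the exponential regression.
import pandas as pd
import statsmodels.api as sm

def impute_exponential(df: pd.DataFrame, y: str, x_cols: list[str]) -> pd.DataFrame:
    X = sm.add_constant(df[x_cols])
    observed = df[y].notna()
    fit = sm.GLM(df.loc[observed, y], X[observed],
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    out = df.copy()
    out.loc[~observed, y] = fit.predict(X[~observed])   # deterministic regression imputation
    return out
```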

Likelihood-Based Inference on Genetic Variance Component with a Hierarchical Poisson Generalized Linear Mixed Model

  • Lee, C.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.13 no.8
    • /
    • pp.1035-1039
    • /
    • 2000
  • This study developed a Poisson generalized linear mixed model and a procedure to estimate genetic parameters for count traits. The method, derived from a frequentist perspective, was based on hierarchical likelihood, and the maximum adjusted profile hierarchical likelihood was employed to estimate the dispersion parameters of the genetic random effects. The current approach is a generalization of Henderson's method to non-normal data, and was applied to simulated data. Underestimation was observed in the genetic variance component estimates for data simulated with large heritability when using the Poisson generalized linear mixed model and the corresponding maximum adjusted profile hierarchical likelihood. However, the current method fitted the data generated with small heritability better than those generated with large heritability.
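
For reference, a hierarchical Poisson generalized linear mixed model of this kind typically takes the following general form (the notation is illustrative, not the paper's):

$$
y_{ij} \mid u_i \sim \mathrm{Poisson}(\lambda_{ij}), \qquad
\log \lambda_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_i, \qquad
u_i \sim N(0, \sigma_u^2),
$$

where $u_i$ is the genetic random effect and the dispersion component $\sigma_u^2$ is the quantity estimated by maximizing the adjusted profile hierarchical likelihood.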

The Variation of Factors of severity-adjusted length of stay(LOS) in acute stroke patients (급성 뇌졸중 환자의 중증도 보정 재원일수 변이에 관한 연구)

  • Kang, Sung-Hong;Seok, Hyang-Sook;Kim, Won-Joong
    • Journal of Digital Convergence
    • /
    • v.11 no.6
    • /
    • pp.221-233
    • /
    • 2013
  • This study aims to develop a severity-adjusted length of stay (LOS) model for acute stroke patients using data from the hospital discharge survey, and to propose the management of LOS for acute stroke patients and its use in hospital management. The dataset consisted of 23,134 records from the hospital discharge survey from 2004 to 2009. The severity-adjusted LOS model for acute stroke patients was developed by data mining analysis. In the decision tree model, the main factor determining the LOS of acute stroke patients was the type of acute stroke. The difference between the severity-adjusted LOS from the decision tree model and the actual LOS was compared, and it was confirmed that insurance type, hospital bed count and hospital location were statistically associated with LOS. In conclusion, hospitals should manage the LOS of acute stroke patients by applying this model within their medical information systems.
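
A minimal sketch of a decision-tree severity model for LOS in the spirit of the data-mining approach described above; the feature names are assumed placeholders, and categorical inputs are taken to be numerically encoded:

```python
# Hedged sketch: predict LOS from severity variables, then study the observed-minus-predicted gap.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

def predicted_los(df: pd.DataFrame, features: list[str]) -> pd.Series:
    tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=50, random_state=0)
    tree.fit(df[features], df["los"])
    return pd.Series(tree.predict(df[features]), index=df.index, name="predicted_los")

# Excess LOS = df["los"] - predicted_los(df, ["stroke_type", "age", "sex"]) can then be
# compared across insurance type, bed count and hospital location, as in the study.
```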

Detection of superior genotype of fatty acid synthase in Korean native cattle by an environment-adjusted statistical model

  • Lee, Jea-Young;Oh, Dong-Yep;Kim, Hyun-Ji;Jang, Gab-Sue;Lee, Seung-Uk
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.30 no.6
    • /
    • pp.765-772
    • /
    • 2017
  • Objective: This study examines the genetic factors influencing the phenotypes (four economic traits: oleic acid [C18:1], monounsaturated fatty acids, carcass weight, and marbling score) of Hanwoo. Methods: To enhance the accuracy of the genetic analysis, the study proposes a new statistical model that excludes environmental factors. A statistically adjusted analysis-of-covariance model of environmental and genetic factors was developed, and the estimated environmental effects (covariate effects of age and effects of calving farms) were excluded from the model. Results: The accuracy was compared before and after adjustment. The accuracy of the best single nucleotide polymorphism (SNP) in C18:1 increased from 60.16% to 74.26%, and that of the two-factor interaction increased from 58.69% to 87.19%. Superior SNPs and SNP interactions were also identified using the multifactor dimensionality reduction method in Tables 1 to 4. Finally, high- and low-risk genotypes were compared based on their mean scores for each trait. Conclusion: The proposed method significantly improved the analysis accuracy and identified superior gene-gene interactions and genotypes for each of the four economic traits of Hanwoo.
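
A hedged sketch of the environment-adjustment step described here: regress each phenotype on the environmental terms (age as a covariate, calving farm as a factor) and carry the residuals into the SNP/MDR analysis. The formula and column names are assumptions, not the authors' code:

```python
# Hedged sketch: remove estimated environmental effects from a phenotype via ANCOVA residuals.
import pandas as pd
import statsmodels.formula.api as smf

def environment_adjust(df: pd.DataFrame, trait: str) -> pd.Series:
    fit = smf.ols(f"{trait} ~ age + C(farm)", data=df).fit()   # covariate age + farm factor
    return fit.resid.rename(f"{trait}_adjusted")               # environment-removed phenotype

# e.g. hanwoo["c18_1_adj"] = environment_adjust(hanwoo, "c18_1")
```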

A Study on the Performance Analysis between Conglomerate and Non-conglomerate M&A (다각화 합병과 비다각화 합병간의 성과분석)

  • 김동환;김안생;김종천
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.4 no.4
    • /
    • pp.422-427
    • /
    • 2003
  • The purpose of this study is to analyze the effects of M&A between conglomerate and non-conglomerate corporations, using a sample of 57 firms during the period from 1990 to 1997, right before the IMF financial crisis. The models employed to measure the effects of M&A in this paper are the market model and the market-adjusted return model, with t-statistics used for testing. The results of this article show that negative excess returns are observed for non-conglomerate mergers and positive excess gains for conglomerate mergers. This implies that conglomerate mergers are more effective than firm specialization in terms of merger effects.
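
A hedged sketch of the two benchmarks named above: the market model estimates alpha and beta over an estimation window, while the market-adjusted return model simply subtracts the market return; window choices and variable names are illustrative:

```python
# Hedged sketch: abnormal returns and CARs under the market model and the market-adjusted model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def event_study_cars(r: pd.Series, rm: pd.Series, est: slice, evt: slice):
    """r: firm returns, rm: market index returns; est/evt are positional windows."""
    ols = sm.OLS(r.iloc[est].to_numpy(), sm.add_constant(rm.iloc[est])).fit()  # alpha, beta
    expected = ols.predict(sm.add_constant(rm.iloc[evt]))                      # market-model benchmark
    ar_market_model = r.iloc[evt].to_numpy() - expected
    ar_market_adjusted = (r.iloc[evt] - rm.iloc[evt]).to_numpy()               # market-adjusted AR
    return np.cumsum(ar_market_model), np.cumsum(ar_market_adjusted)           # CARs
```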

An Empirical Analysis On The Effects Of M&A Between The Merging Firms And The Merged Firms

  • Kim, Dong-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.4 no.4
    • /
    • pp.428-433
    • /
    • 2003
  • In this study, we empirically compared and investigated the impacts and effects of M&A on the merging firms and the merged firms during the period from 1990 to 1997, when the market principles of developed countries were adopted and a more autonomous and competitive M&A market was activated. For this purpose, this paper sets hypotheses and tests them by analyzing AARs and CARs, employing both the market model and the market-adjusted model. The empirical results of this research show that the CAR is more positive for merged firms than for merging firms, which contrasts with the results of previous studies conducted in the 1980s.
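
A hedged sketch of the hypothesis tests implied here, assuming one CAR per firm for the merged and merging samples (computed, for instance, as in the sketch under the previous entry):

```python
# Hedged sketch: test mean CARs against zero and compare the two samples.
import numpy as np
from scipy import stats

def compare_cars(car_merged: np.ndarray, car_merging: np.ndarray) -> None:
    for name, car in (("merged", car_merged), ("merging", car_merging)):
        t, p = stats.ttest_1samp(car, 0.0)                 # is the mean CAR different from zero?
        print(f"{name}: mean CAR = {car.mean():.4f}, t = {t:.2f}, p = {p:.3f}")
    t, p = stats.ttest_ind(car_merged, car_merging, equal_var=False)   # Welch two-sample test
    print(f"merged vs. merging: t = {t:.2f}, p = {p:.3f}")
```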

Optimum Risk-Adjusted Islamic Stock Portfolio Using the Quadratic Programming Model: An Empirical Study in Indonesia

  • MUSSAFI, Noor Saif Muhammad;ISMAIL, Zuhaimy
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.8 no.5
    • /
    • pp.839-850
    • /
    • 2021
  • Risk-adjusted return is believed to be one of the optimal parameters for determining an optimum portfolio. A risk-adjusted return is a calculation of the profit or potential profit from an investment that takes into account the degree of risk that must be accepted to achieve it. This paper presents a new procedure for portfolio selection and utilizes these results to optimize the risk level of risk-adjusted Islamic stock portfolios. It deals with the weekly close prices of active issuers listed on the Jakarta Islamic Index Indonesia over a certain time interval. Overall, this paper highlights portfolio selection, which includes determining the number of stocks, grouping the issuers via technical analysis, and selecting the best risk-adjusted return of portfolios. The nominated portfolio is modeled using Quadratic Programming (QP). The results of this study show that the portfolio built using the lowest Value at Risk (VaR) outperforms the market proxy on a risk-adjusted (M-squared) basis and was chosen as the best portfolio, which can be optimized using QP with a minimum risk of 2.86%. The portfolio with the lowest beta, on the other hand, produces a minimum risk that is nearly 60% lower than that of the optimal risk-adjusted return portfolio. The QP results are well verified by the heuristic optimizer fmincon.
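
A minimal quadratic-programming sketch in the spirit of the optimization step described above, using scipy's SLSQP solver as an open-source analogue of MATLAB's fmincon; the long-only and target-return constraints are assumptions about the setup:

```python
# Hedged sketch: minimize portfolio variance w' C w subject to budget, target return, long-only.
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(cov: np.ndarray, mu: np.ndarray, target_return: float) -> np.ndarray:
    n = len(mu)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},             # fully invested
                   {"type": "ineq", "fun": lambda w: w @ mu - target_return}]  # reach target return
    res = minimize(lambda w: w @ cov @ w, x0=np.full(n, 1.0 / n),
                   method="SLSQP", bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return res.x   # optimal weights; res.fun is the minimized portfolio variance
```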

The Study on Flood Runoff Simulation using Runoff Model with Gauge-adjusted Radar data (보정 레이더 자료와 유출 모형을 이용한 홍수유출모의에 관한 연구)

  • Bae, Young-Hye;Kim, Byung-Sik;Kim, Hung-Soo
    • Journal of Wetlands Research
    • /
    • v.12 no.1
    • /
    • pp.51-61
    • /
    • 2010
  • Climate change has greatly increased concentrated heavy rainfall, which in turn is causing enormous damage to people and property. It is therefore important to understand the spatial-temporal features of rainfall. In this study, RADAR rainfall was used to calculate gridded areal rainfall that reflects the spatial-temporal variability. In addition, the Kalman filter method, a stochastic technique, was used to combine the ground rain gauge network with the RADAR rainfall network to calculate areal rainfall. The Thiessen polygon method, the inverse distance weighting method, and the Kriging method were used to calculate areal rainfall, and the calculated data were compared with the areal RADAR rainfall adjusted using the Kalman filter method. The results showed that the RADAR rainfall adjusted with the Kalman filter method reproduced well the distribution of the raw RADAR rainfall, which has a spatial distribution similar to the actual rainfall distribution. The adjusted RADAR rainfall also showed a rainfall volume similar to that of the rain gauge data. The Anseong-Cheon basin was used as the study area, and the RADAR rainfall adjusted with the Kalman filter method was applied to the Vflo™ model, a physically based distributed model, and the ModClark model, a semi-distributed model. As a result, the Vflo™ model simulated the peak time and peak value close to those of the observed hydrograph, while the ModClark model showed good results for total runoff volume. In parameter verification, however, the Vflo™ model reproduced the observed hydrograph better than the ModClark model. These results confirm that flood runoff simulation is applicable in domestic settings (in South Korea) if highly accurate areal rainfall is calculated by combining rain gauge and RADAR rainfall data and the simulation is performed in link with a distributed hydrological model.
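
A simplified scalar Kalman-filter sketch of the gauge-adjustment idea: track a multiplicative gauge/radar bias over time and rescale the radar field with it. This illustrates the general technique only, not the authors' formulation; q and r are assumed tuning parameters:

```python
# Hedged sketch: recursive bias estimation between gauge and radar areal-mean rainfall.
import numpy as np

def kalman_adjust(radar: np.ndarray, gauge: np.ndarray, q: float = 0.01, r: float = 0.1) -> np.ndarray:
    bias, p = 1.0, 1.0                       # state: multiplicative bias and its variance
    adjusted = np.empty_like(radar, dtype=float)
    for t in range(len(radar)):
        p += q                               # predict: bias persists, uncertainty grows
        if radar[t] > 0:                     # update only when radar observes rain
            k = p / (p + r)                  # Kalman gain
            bias += k * (gauge[t] / radar[t] - bias)
            p *= 1.0 - k
        adjusted[t] = bias * radar[t]        # rescale radar with the current bias estimate
    return adjusted
```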