• Title/Summary/Keyword: sliced inverse regression

A comparison study of inverse censoring probability weighting in censored regression (중도절단 회귀모형에서 역절단확률가중 방법 간의 비교연구)

  • Shin, Jungmin; Kim, Hyungwoo; Shin, Seung Jun
    • The Korean Journal of Applied Statistics, v.34 no.6, pp.957-968, 2021
  • Inverse censoring probability weighting (ICPW) is a popular technique in survival data analysis. In applications of ICPW such as censored regression, it is crucial to estimate the censoring probability accurately. A simulation study is undertaken in this article to see how the censoring probability estimate influences model performance in censored regression under the ICPW scheme. We compare three censoring probability estimators: the Kaplan-Meier (KM) estimator, the Cox proportional hazards model estimator, and the local KM estimator. For the local KM estimator, we propose reducing the predictor dimension to avoid the curse of dimensionality, and consider two popular dimension reduction tools: principal component analysis and sliced inverse regression. We find that the Cox proportional hazards model estimator performs best as a censoring probability estimator in both mean and median censored regression.
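
For readers who want the mechanics, the following is a minimal numpy-only sketch of ICPW mean regression using the plain Kaplan-Meier censoring estimator (one of the three estimators compared above). The data-generating model, variable names, and the weighted least-squares fit are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

def km_censoring_survivor(time, delta):
    # Kaplan-Meier estimate of the censoring survivor G(t): censoring
    # events (delta == 0) are treated as the event of interest.
    order = np.argsort(time)
    t, d = time[order], delta[order]
    n = len(t)
    at_risk = n - np.arange(n)                  # risk-set size at each t
    G = np.cumprod(1.0 - (1.0 - d) / at_risk)   # censoring "events" are 1 - d
    return t, G

def icpw_weights(time, delta):
    # weight = delta / G(observed time); censored observations get weight 0
    t_sorted, G = km_censoring_survivor(time, delta)
    idx = np.searchsorted(t_sorted, time, side="right") - 1
    return delta / np.clip(G[idx], 1e-8, None)

rng = np.random.default_rng(0)
n, p = 300, 3
X = rng.normal(size=(n, p))
T = np.exp(X @ np.array([1.0, 0.5, 0.0]) + 0.3 * rng.normal(size=n))
C = rng.exponential(scale=2 * np.median(T), size=n)   # independent censoring
Y, delta = np.minimum(T, C), (T <= C).astype(float)

w = icpw_weights(Y, delta)
Xd = np.column_stack([np.ones(n), X])
beta = np.linalg.solve(Xd.T @ (w[:, None] * Xd), Xd.T @ (w * np.log(Y)))
print(beta)   # roughly (0, 1, 0.5, 0) when censoring is estimated well
```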

A Short Note on Empirical Penalty Term Study of BIC in K-means Clustering Inverse Regression

  • Ahn, Ji-Hyun; Yoo, Jae-Keun
    • Communications for Statistical Applications and Methods, v.18 no.3, pp.267-275, 2011
  • Recent studies propose a Bayesian information criterion (BIC) to determine the structural dimension of the central subspace through sliced inverse regression (SIR) with high-dimensional predictors. The BIC may also be useful in K-means clustering inverse regression (KIR) with high-dimensional predictors. However, directly applying the BIC to KIR may be problematic, because the slicing scheme in SIR is not the same as that of KIR. In this paper, we present empirical studies of the BIC penalty term in KIR to identify the most appropriate one. Numerical studies and a real data analysis are presented.
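
As a concrete reference point for the slicing-plus-eigenvalue machinery discussed above, here is a minimal SIR sketch with a BIC-type dimension criterion. The penalty C_n below is a deliberate placeholder, since the paper's point is precisely that the appropriate penalty term for KIR must be identified empirically.

```python
import numpy as np

def sir_kernel(X, y, n_slices=5):
    # slice y, average the standardized predictors within each slice,
    # and form M = sum_h p_h m_h m_h'
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
    M = np.zeros((p, p))
    for h in np.array_split(np.argsort(y), n_slices):
        m = Z[h].mean(0)
        M += (len(h) / n) * np.outer(m, m)
    return M

def bic_dimension(X, y, n_slices=5, C_n=None):
    # choose d maximizing n * (sum of the d largest eigenvalues) - C_n * d
    n, p = X.shape
    C_n = np.log(n) if C_n is None else C_n   # placeholder penalty term
    lam = np.sort(np.linalg.eigvalsh(sir_kernel(X, y, n_slices)))[::-1]
    crit = [n * lam[:k].sum() - C_n * k for k in range(p + 1)]
    return int(np.argmax(crit))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
b = np.array([1.0, 1.0, 0, 0, 0, 0]) / np.sqrt(2)
y = (X @ b) ** 3 + 0.5 * rng.normal(size=500)   # true structural dimension 1
print(bic_dimension(X, y))                       # expected: 1
```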

Generalization of Fisher's linear discriminant analysis via the approach of sliced inverse regression

  • Chen, Chun-Houh; Li, Ker-Chau
    • Journal of the Korean Statistical Society, v.30 no.2, pp.193-217, 2001
  • Despite the rich literature on discriminant analysis, much about this complicated subject remains to be explored. In this article, we study the theoretical foundation that supports Fisher's linear discriminant analysis (LDA) by setting up the classification problem under the dimension reduction framework of Li (1991), introduced for sliced inverse regression (SIR). Through the connection between SIR and LDA, our theory helps identify sources of strength and weakness in using CRIMCOORDS (Gnanadesikan, 1977) as a graphical tool for displaying group separation patterns. This connection also leads to several ways of generalizing LDA for better exploration and exploitation of nonlinear data patterns.
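
The SIR-LDA connection can be checked numerically: with a categorical response, slicing by class makes the SIR directions span the same subspace as Fisher's discriminant coordinates. The sketch below is a minimal illustration of this equivalence on simulated three-class Gaussian data; it is not the paper's general construction.

```python
import numpy as np

rng = np.random.default_rng(2)
means = np.array([[0.0, 0, 0], [3, 1, 0], [0, 4, 0]])
X = np.vstack([rng.normal(size=(100, 3)) + m for m in means])
g = np.repeat([0, 1, 2], 100)

# SIR with the class label as the response: the slices are the classes
Sigma = np.cov(X.T)
L = np.linalg.cholesky(Sigma)
Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
M = sum(np.mean(g == k) * np.outer(Z[g == k].mean(0), Z[g == k].mean(0))
        for k in range(3))
V = np.linalg.eigh(M)[1][:, ::-1]
sir_dirs = np.linalg.solve(L.T, V[:, :2])     # back to the original scale

# Fisher LDA: leading eigenvectors of within^{-1} @ between
xbar = X.mean(0)
B = sum(np.mean(g == k) * np.outer(X[g == k].mean(0) - xbar,
                                   X[g == k].mean(0) - xbar)
        for k in range(3))
W = Sigma - B
evals, evecs = np.linalg.eig(np.linalg.solve(W, B))
lda_dirs = np.real(evecs[:, np.argsort(-np.real(evals))[:2]])

# the two 2-dimensional subspaces coincide: principal-angle cosines near 1
Qs = np.linalg.qr(sir_dirs)[0]
Ql = np.linalg.qr(lda_dirs)[0]
print(np.linalg.svd(Qs.T @ Ql, compute_uv=False))
```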

Nonparametric test on dimensionality of explanatory variables (설명변수 차원 축소에 관한 비모수적 검정)

  • Seo, Han Son
    • The Korean Journal of Applied Statistics, v.8 no.2, pp.65-75, 1995
  • To determine the dimension of the e.d.r. space, both sliced inverse regression (SIR) and principal Hessian directions (PHD) offer asymptotic tests. However, the asymptotic tests require normality of the explanatory variables and large samples. Cook and Weisberg (1991) suggested permutation tests instead. In this study, permutation tests are actually carried out, and their power is compared with that of the asymptotic tests for SIR and PHD.
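
A minimal sketch of the permutation idea follows: the statistic is n times the sum of the trailing SIR eigenvalues, and the null distribution for H0: d <= k is built by permuting the coordinates of the standardized predictors orthogonal to the k leading SIR directions. This is one illustrative reading of the permutation test, not necessarily the exact procedure compared in the paper.

```python
import numpy as np

def sir_eigen(Z, y, n_slices=5):
    n, p = Z.shape
    M = np.zeros((p, p))
    for h in np.array_split(np.argsort(y), n_slices):
        m = Z[h].mean(0)
        M += (len(h) / n) * np.outer(m, m)
    lam, V = np.linalg.eigh(M)
    return lam[::-1], V[:, ::-1]

def perm_test(X, y, k, n_slices=5, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
    lam, V = sir_eigen(Z, y, n_slices)
    stat = n * lam[k:].sum()                 # large if d > k
    ZV = Z @ V                               # rotate to SIR coordinates
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(n)             # permute trailing coordinates
        Zp = np.column_stack([ZV[:, :k], ZV[idx, k:]])
        null.append(n * sir_eigen(Zp, y, n_slices)[0][k:].sum())
    return np.mean(np.array(null) >= stat)   # permutation p-value

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))
y = X[:, 0] + 0.5 * rng.normal(size=400)     # true dimension d = 1
print(perm_test(X, y, k=0))   # small p-value: reject d <= 0
print(perm_test(X, y, k=1))   # large p-value: do not reject d <= 1
```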

An Empirical Study on Dimension Reduction

  • Suh, Changhee; Lee, Hakbae
    • Journal of the Korean Data Analysis Society, v.20 no.6, pp.2733-2746, 2018
  • The two inverse regression estimation methods for the central subspace, SIR and SAVE, are computationally easy and widely used. However, SIR and SAVE may perform poorly in finite samples and need strong assumptions (linearity and/or constant covariance conditions) on the predictors. The two nonparametric estimation methods, MAVE and dMAVE, perform much better in finite samples than SIR and SAVE, and require no strong conditions on the predictors or the response variable. MAVE focuses on estimating the central mean subspace, while dMAVE estimates the central subspace. This paper explores and compares these four dimension reduction methods, reviewing the algorithm of each. An empirical study on simulated data shows that MAVE and dMAVE perform relatively better than SIR and SAVE, across both different models and different distributional assumptions on the predictors. However, a real data example with a binary response demonstrates that SAVE is better than the other methods.
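
The contrast between the two slice-based kernels is easy to see in code. The sketch below builds both the SIR and SAVE kernels on a model with a symmetric link, where the slice means carry no signal (SIR fails) but the slice covariances do (SAVE succeeds); MAVE and dMAVE are iterative local-linear fits and are omitted for brevity.

```python
import numpy as np

def sir_and_save(X, y, n_slices=8):
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
    M_sir, M_save = np.zeros((p, p)), np.zeros((p, p))
    for h in np.array_split(np.argsort(y), n_slices):
        f = len(h) / n
        m = Z[h].mean(0)
        M_sir += f * np.outer(m, m)             # SIR: slice means
        D = np.eye(p) - np.cov(Z[h].T)          # SAVE: I - Var(Z | slice)
        M_save += f * D @ D
    return M_sir, M_save, L

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 5))
y = X[:, 0] ** 2 + 0.3 * rng.normal(size=1000)  # symmetric in x1
M_sir, M_save, L = sir_and_save(X, y)
print(np.linalg.eigvalsh(M_sir)[-1].round(3))   # tiny: SIR misses x1
eta = np.linalg.solve(L.T, np.linalg.eigh(M_save)[1][:, -1])
print((eta / np.linalg.norm(eta)).round(2))     # SAVE recovers e1
```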

Fused inverse regression with multi-dimensional responses

  • Cho, Youyoung; Han, Hyoseon; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.28 no.3, pp.267-279, 2021
  • Regression with multi-dimensional responses is quite common nowadays in the so-called big data era. In such regression, to relieve the curse of dimensionality due to the high dimension of the responses, dimension reduction of the predictors is essential in analysis. Sufficient dimension reduction provides effective tools for this reduction, but few sufficient dimension reduction methodologies exist for multivariate regression. To fill this gap, we propose two new fused slice-based inverse regression methods. The proposed approaches are robust to the numbers of clusters or slices and improve the estimation results over existing methods by fusing many kernel matrices. Numerical studies are presented and compared with existing methods. Real data analysis confirms the practical usefulness of the proposed methods.
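
The fusing idea can be sketched directly: compute the SIR kernel under several slicing schemes and sum the kernels, so that the final estimate is not tied to any single number of slices. In the illustration below, the multivariate response is handled by also fusing over its coordinates; this is a simplified stand-in for the paper's clustering-based slicing, and all model details are assumptions.

```python
import numpy as np

def sir_kernel(Z, y, n_slices):
    n, p = Z.shape
    M = np.zeros((p, p))
    for h in np.array_split(np.argsort(y), n_slices):
        m = Z[h].mean(0)
        M += (len(h) / n) * np.outer(m, m)
    return M

def fused_sir(X, Y, slice_grid=(3, 5, 8)):
    # fuse SIR kernels across slicing schemes and response coordinates
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
    M = sum(sir_kernel(Z, Y[:, j], H)
            for j in range(Y.shape[1]) for H in slice_grid)
    lam, V = np.linalg.eigh(M)
    return np.linalg.solve(L.T, V[:, ::-1]), lam[::-1]

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 6))
Y = np.column_stack([X[:, 0] + 0.2 * rng.normal(size=600),
                     np.exp(X[:, 1]) + 0.2 * rng.normal(size=600)])
dirs, lam = fused_sir(X, Y)
print(dirs[:, :2].round(2))   # leading directions approximately e1 and e2
```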

Dimension reduction for right-censored survival regression: transformation approach

  • Yoo, Jae Keun; Kim, Sung-Jin; Seo, Bi-Seul; Shin, Hyejung; Sim, Su-Ah
    • Communications for Statistical Applications and Methods, v.23 no.3, pp.259-268, 2016
  • High-dimensional survival data with large numbers of predictors have become more common. The analysis of such data can be facilitated if the dimension of the predictors is adequately reduced. Recent studies show that sliced inverse regression (SIR) is an effective dimension reduction tool in high-dimensional survival regression, but its implementation is hampered by a double categorization procedure. For right-censored data, this problem can be overcome by transforming the observed survival time and censoring status into a single variable. This allows more flexibility in the categorization, so the applicability of SIR is enhanced. Numerical studies show that the proposed transformation approach is as good as (or even better than) the usual SIR application under both balanced and highly unbalanced censoring. A real data example also confirms its practical usefulness, so the proposed approach should be an effective and valuable addition for statistical practitioners.
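
One way to picture the single-variable idea is below: map the pair (observed time, censoring status) to one continuous variable and slice on it, which avoids the double categorization over time and status. Here censored times are replaced by a Kaplan-Meier conditional mean; that particular transformation is an illustrative choice, not necessarily the one proposed in the paper.

```python
import numpy as np

def km_to_single_variable(time, delta):
    # W = observed time if uncensored, KM conditional mean E[T | T > c]
    # if censored at c (a simple illustrative transformation)
    order = np.argsort(time)
    t, d = time[order], delta[order]
    n = len(t)
    S = np.cumprod(1.0 - d / (n - np.arange(n)))   # KM survivor of T
    S_prev = np.concatenate([[1.0], S[:-1]])
    mass = np.where(d == 1, S_prev - S, 0.0)       # KM mass at event times
    mass[-1] += S[-1]                              # park leftover tail mass
    W = time.astype(float).copy()
    for i in np.where(delta == 0)[0]:
        beyond = t > time[i]
        if mass[beyond].sum() > 0:
            W[i] = (t[beyond] * mass[beyond]).sum() / mass[beyond].sum()
    return W

rng = np.random.default_rng(6)
n, p = 500, 4
X = rng.normal(size=(n, p))
T = np.exp(X[:, 0] + 0.3 * rng.normal(size=n))
C = rng.exponential(scale=4.0, size=n)
Y, delta = np.minimum(T, C), (T <= C).astype(int)

W = km_to_single_variable(Y, delta)                # single variable
L = np.linalg.cholesky(np.cov(X.T))
Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
M = np.zeros((p, p))
for h in np.array_split(np.argsort(W), 5):         # slice on W only
    m = Z[h].mean(0)
    M += (len(h) / n) * np.outer(m, m)
eta = np.linalg.solve(L.T, np.linalg.eigh(M)[1][:, -1])
print((eta / np.linalg.norm(eta)).round(2))        # approximately e1
```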

More on directional regression

  • Kim, Kyongwon; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.28 no.5, pp.553-562, 2021
  • Directional regression (DR; Li and Wang, 2007) is well known as an exhaustive sufficient dimension reduction method and performs well in complex regression models having both linear and nonlinear trends. However, DR has not been widely extended to date, so we extend it to accommodate multivariate regression and large p-small n regression. We propose three versions of DR for multivariate regression and discuss how DR is applicable in the large p-small n case. Numerical studies confirm that DR is robust to the number of clusters and to the choice between hierarchical-clustering and pooled DR.
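
For reference, a minimal slice-based sketch of the DR kernel follows, using the moment decomposition of Li and Wang (2007); the simulated model and slice settings are illustrative, and the multivariate and large p-small n extensions proposed in the paper are not reproduced here.

```python
import numpy as np

def dr_directions(X, y, n_slices=8):
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
    A = np.zeros((p, p))    # E[(E(ZZ'|Y) - I)^2]
    B = np.zeros((p, p))    # E[E(Z|Y) E(Z'|Y)]
    c = 0.0                 # E[E(Z'|Y) E(Z|Y)], a scalar
    for h in np.array_split(np.argsort(y), n_slices):
        f = len(h) / n
        m = Z[h].mean(0)
        V = Z[h].T @ Z[h] / len(h)                 # slice second moment
        A += f * (V - np.eye(p)) @ (V - np.eye(p))
        B += f * np.outer(m, m)
        c += f * (m @ m)
    M = 2 * A + 2 * B @ B + 2 * c * B              # DR candidate matrix
    lam, Vec = np.linalg.eigh(M)
    return np.linalg.solve(L.T, Vec[:, ::-1]), lam[::-1]

rng = np.random.default_rng(7)
X = rng.normal(size=(800, 5))
y = X[:, 0] ** 2 + X[:, 1] + 0.3 * rng.normal(size=800)   # d = 2
dirs, lam = dr_directions(X, y)
print(lam.round(3))            # two leading eigenvalues dominate
print(dirs[:, :2].round(2))    # approximately the span of e1 and e2
```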

Integrated Partial Sufficient Dimension Reduction with Heavily Unbalanced Categorical Predictors

  • Yoo, Jae-Keun
    • The Korean Journal of Applied Statistics, v.23 no.5, pp.977-985, 2010
  • In this paper, we propose an approach to partial sufficient dimension reduction with heavily unbalanced categorical predictors. For this, we consider an integrated categorical predictor and investigate conditions under which the integrated categorical predictor is fully informative for partial sufficient dimension reduction. For illustration, the proposed approach is implemented on optimal partial sliced inverse regression in simulations and data analysis.
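
A minimal sketch of partial SIR with a categorical predictor follows: the SIR kernel is computed within each category (after within-category centering) and pooled with category weights. The "integration" step here simply merges rare categories into a single level before pooling, which is an illustrative stand-in for the paper's construction, not its actual definition.

```python
import numpy as np

def sir_kernel(Z, y, n_slices):
    n, p = Z.shape
    M = np.zeros((p, p))
    for h in np.array_split(np.argsort(y), n_slices):
        m = Z[h].mean(0)
        M += (len(h) / n) * np.outer(m, m)
    return M

def partial_sir(X, y, w, n_slices=4, min_count=50):
    n, p = X.shape
    levels, counts = np.unique(w, return_counts=True)
    rare = levels[counts < min_count]
    w = np.where(np.isin(w, rare), -1, w)       # merge rare categories
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
    M = np.zeros((p, p))
    for lev in np.unique(w):
        i = w == lev
        Zc = Z[i] - Z[i].mean(0)                # center within category
        M += i.mean() * sir_kernel(Zc, y[i], n_slices)
    lam, V = np.linalg.eigh(M)
    return np.linalg.solve(L.T, V[:, ::-1]), lam[::-1]

rng = np.random.default_rng(8)
n = 600
X = rng.normal(size=(n, 4))
w = rng.choice(4, size=n, p=[0.8, 0.1, 0.06, 0.04])   # heavily unbalanced
y = X[:, 0] + 0.5 * w + 0.3 * rng.normal(size=n)
dirs, lam = partial_sir(X, y, w)
print(dirs[:, 0].round(2))    # leading direction approximately e1
```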

Case study: application of fused sliced average variance estimation to near-infrared spectroscopy of biscuit dough data (Fused sliced average variance estimation의 실증분석: 비스킷 반죽의 근적외분광분석법 분석 자료로의 적용)

  • Um, Hye Yeon; Won, Sungmin; An, Hyoin; Yoo, Jae Keun
    • The Korean Journal of Applied Statistics, v.31 no.6, pp.835-842, 2018
  • The so-called sliced average variance estimation (SAVE) is a popular methodology in the sufficient dimension reduction literature. SAVE is sensitive to the number of slices in practice. To overcome this, a fused SAVE (FSAVE) was recently proposed, combining the kernel matrices obtained from various numbers of slices. In this paper, we consider practical applications of FSAVE to large p-small n data. For this, near-infrared spectroscopy data on biscuit dough are analyzed. In this case study, the usefulness of FSAVE in high-dimensional data analysis is confirmed by showing that the FSAVE results are superior to those of existing analyses.
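
The FSAVE recipe itself is short: compute the SAVE kernel for several numbers of slices and sum the kernels, removing the sensitivity to any single choice. The sketch below uses n > p for simplicity; the large p-small n setting of the case study would additionally require a regularized covariance estimate.

```python
import numpy as np

def save_kernel(Z, y, n_slices):
    n, p = Z.shape
    M = np.zeros((p, p))
    for h in np.array_split(np.argsort(y), n_slices):
        D = np.eye(p) - np.cov(Z[h].T)          # I - Var(Z | slice)
        M += (len(h) / n) * D @ D
    return M

def fused_save(X, y, slice_grid=(2, 4, 6, 8)):
    # fuse SAVE kernels across several numbers of slices
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)
    M = sum(save_kernel(Z, y, H) for H in slice_grid)
    lam, V = np.linalg.eigh(M)
    return np.linalg.solve(L.T, V[:, ::-1]), lam[::-1]

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 6))
y = X[:, 0] ** 2 + 0.3 * rng.normal(size=500)
dirs, lam = fused_save(X, y)
print(dirs[:, 0].round(2))    # leading direction approximately e1
```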