Keyword: Sufficient dimension reduction

Note on response dimension reduction for multivariate regression

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.26 no.5, pp.519-526, 2019
  • Response dimension reduction in the sufficient dimension reduction (SDR) context was widely ignored until Yoo and Cook (Computational Statistics and Data Analysis, 53, 334-343, 2008) established a theory for it and developed an estimation approach. Recent research in SDR shows that semi-parametric approaches can outperform conventional non-parametric SDR methods. Yoo (Statistics: A Journal of Theoretical and Applied Statistics, 52, 409-425, 2018) developed a semi-parametric approach for response reduction in the Yoo and Cook (2008) context, and Yoo (Journal of the Korean Statistical Society, 2019) completed it by proposing an unstructured method. This paper theoretically discusses the three versions of the semi-parametric approach and provides remarks that should be useful for statistical practitioners. It also shows that numerical instability can be avoided by presenting the results for an orthogonally transformed response (see the formalization below).
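
    A common formalization of response dimension reduction, restated here in hedged form (in the spirit of, but not quoted from, Yoo and Cook, 2008): find the smallest subspace of the response space, with orthogonal projection $P_K$ onto $\operatorname{span}(K)$, that carries the whole conditional mean,

    $$ \mathrm{E}(Y \mid X) - \mathrm{E}(Y) \;=\; P_K\,\{\mathrm{E}(Y \mid X) - \mathrm{E}(Y)\} \quad \text{almost surely}, $$

    so that the lower-dimensional $K^{\top}Y$ replaces $Y$ with no loss of mean information. For any orthogonal matrix $A$, $\mathrm{E}(AY \mid X) - \mathrm{E}(AY) = A\{\mathrm{E}(Y \mid X) - \mathrm{E}(Y)\}$, so the reduction subspace simply rotates to $A\,\operatorname{span}(K)$; this is why working with orthogonally transformed responses costs nothing theoretically while improving numerical stability.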

On robustness in dimension determination in fused sliced inverse regression

  • Yoo, Jae Keun; Cho, Yoo Na
    • Communications for Statistical Applications and Methods, v.25 no.5, pp.513-521, 2018
  • The goal of sufficient dimension reduction (SDR) is to replace the original p-dimensional predictor with a lower-dimensional, linearly transformed one. Sliced inverse regression (SIR) (Li, Journal of the American Statistical Association, 86, 316-342, 1991) is one of the most popular SDR methods because of its wide applicability and simple implementation. However, SIR may yield different dimension reduction results for different numbers of slices, which, despite its popularity, is a clear deficit of SIR. To overcome this, fused sliced inverse regression was recently proposed. Existing work shows that its dimension-reduced predictors are robust to the number of slices, but does not investigate how robust its dimension determination is. This paper suggests a permutation-based dimension determination for fused sliced inverse regression, which is compared with SIR to investigate robustness to the number of slices in the dimension determination (see the sketch below). Numerical studies confirm the robustness, and a real data example is presented.
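
    A minimal sketch of the two ingredients named above, assuming standard formulations of SIR and of a permutation dimension test; this is an illustration, not the authors' code. `fused_sir` simply sums SIR kernels over a slice grid, and the permutation scheme follows the general recipe of Cook and Yin (2001):

    ```python
    import numpy as np

    def standardize(X):
        """Whiten X so its sample covariance is the identity matrix."""
        L = np.linalg.cholesky(np.cov(X, rowvar=False))
        return (X - X.mean(axis=0)) @ np.linalg.inv(L).T

    def sir_kernel(Z, y, n_slices):
        """SIR candidate matrix (Li, 1991): the weighted covariance of
        the within-slice means of the standardized predictor Z."""
        n, p = Z.shape
        M = np.zeros((p, p))
        for idx in np.array_split(np.argsort(y), n_slices):
            zbar = Z[idx].mean(axis=0)
            M += (len(idx) / n) * np.outer(zbar, zbar)
        return M

    def fused_sir(Z, y, slice_grid=range(3, 11)):
        """Fused SIR: sum the SIR kernels over a grid of slice numbers,
        which dampens the sensitivity to any single choice of slices."""
        return sum(sir_kernel(Z, y, h) for h in slice_grid)

    def permutation_dimension(X, y, n_perm=200, alpha=0.05, seed=0):
        """Sequential permutation test for the structural dimension, in
        the spirit of Cook and Yin (2001): under H0 d = m, the
        eigen-coordinates of Z beyond the leading m directions carry no
        regression information, so permuting their rows should not
        inflate the trailing-eigenvalue statistic.  A simplified sketch,
        not the paper's exact recipe."""
        rng = np.random.default_rng(seed)
        Z = standardize(X)
        n, p = Z.shape
        vals, vecs = np.linalg.eigh(fused_sir(Z, y))
        order = np.argsort(vals)[::-1]
        vals, vecs = vals[order], vecs[:, order]
        for m in range(p):
            stat = vals[m:].sum()
            exceed = 0
            for _ in range(n_perm):
                Zp = Z @ vecs                           # eigen-coordinates
                Zp[:, m:] = Zp[rng.permutation(n), m:]  # permute the null part
                # kernel eigenvalues are rotation-invariant, so use Zp directly
                vp = np.sort(np.linalg.eigvalsh(fused_sir(Zp, y)))[::-1]
                exceed += vp[m:].sum() >= stat
            if exceed / n_perm > alpha:                 # cannot reject d = m
                return m
        return p
    ```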

Intensive comparison of semi-parametric and non-parametric dimension reduction methods in forward regression

  • Shin, Minju; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.29 no.5, pp.615-627, 2022
  • Principal fitted components (PFC) is a semi-parametric sufficient dimension reduction (SDR) method originally proposed in Cook (2007). According to Cook (2007), PFC is connected to the usual non-parametric SDR methods, but the connection is limited to sliced inverse regression (Li, 1991) and ordinary least squares. Since no direct comparison between the two approaches across various forward regressions has been made to date, practical guidance is needed for statistical practitioners. To fill this need, this paper newly derives a connection between PFC and covariance methods (Yin and Cook, 2002), one of the most popular SDR methods. Intensive numerical studies then closely examine and compare the estimation performance of the semi- and non-parametric SDR methods for various forward regressions (a PFC sketch follows below). The findings from the numerical studies are confirmed in a real data example.
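
    A minimal sketch of PFC under one common formulation; the cubic-polynomial basis f(y) and the isotropic-error scaling are illustrative assumptions, not the paper's exact setup:

    ```python
    import numpy as np

    def pfc_directions(X, y, d, degree=3):
        """Principal fitted components in the spirit of Cook (2007):
        regress the centered predictors on a basis f(y), then take the
        leading eigenvectors of the covariance of the fitted values."""
        n = len(y)
        Xc = X - X.mean(axis=0)
        F = np.column_stack([y ** k for k in range(1, degree + 1)])
        F = F - F.mean(axis=0)
        # OLS fit of each predictor on f(y):  Xhat = F (F'F)^{-1} F' Xc
        B, *_ = np.linalg.lstsq(F, Xc, rcond=None)
        Xhat = F @ B
        Sigma_fit = Xhat.T @ Xhat / n
        vals, vecs = np.linalg.eigh(Sigma_fit)
        return vecs[:, np.argsort(vals)[::-1][:d]]
    ```

    Swapping the polynomial basis for slice-indicator functions of y makes the fit SIR-like, which is the kind of connection between PFC and non-parametric methods that the paper extends to covariance methods.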

Generalized Partially Double-Index Model: Bootstrapping and Distinguishing Values

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.22 no.3, pp.305-312, 2015
  • We extend the generalized partially linear single-index model and define a new generalized partially double-index model (GPDIM). The philosophy of sufficient dimension reduction is adopted in the GPDIM to estimate the unknown coefficient vectors in the model. Various combinations of popular sufficient dimension reduction methods are then constructed, and the best combination among the many candidates is determined through a bootstrapping procedure that measures distances between subspaces (see the sketch below). Distinguishing values are newly defined to match the estimates to the corresponding population coefficient vectors. One strength of the proposed model is that it can assess the appropriateness of the GPDIM over a single-index model. Various numerical studies confirm the proposed approach, and a real data application is presented for illustration.
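
    A sketch of one standard way to bootstrap distances between subspaces, as the selection procedure above requires; the `estimator(X, y)` interface returning a basis matrix is a hypothetical stand-in for any candidate SDR combination:

    ```python
    import numpy as np

    def proj(B):
        """Orthogonal projection onto the column space of B."""
        Q, _ = np.linalg.qr(B)
        return Q @ Q.T

    def subspace_distance(B1, B2):
        """Frobenius distance between projection matrices, a common way
        to compare estimated dimension-reduction subspaces."""
        return np.linalg.norm(proj(B1) - proj(B2), ord="fro")

    def bootstrap_variability(estimator, X, y, n_boot=200, seed=0):
        """Average distance between the full-sample estimate and the
        bootstrap estimates; the candidate with the smallest average is
        the most stable.  `estimator(X, y)` returns a basis matrix
        (hypothetical interface, for illustration only)."""
        rng = np.random.default_rng(seed)
        B_full = estimator(X, y)
        dists = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), len(y))
            dists.append(subspace_distance(B_full, estimator(X[idx], y[idx])))
        return float(np.mean(dists))
    ```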

A selective review of nonlinear sufficient dimension reduction

  • Sehun Jang; Jun Song
    • Communications for Statistical Applications and Methods, v.31 no.2, pp.247-262, 2024
  • In this paper, we explore nonlinear sufficient dimension reduction (SDR) methods, with a primary focus on establishing a foundational framework that integrates various nonlinear SDR methods. We illustrate generalized sliced inverse regression (GSIR) and generalized sliced average variance estimation (GSAVE), and show how they fit within this framework (see the operator form below). Further, we delve into nonlinear extensions of inverse moments through the kernel trick, specifically examining kernel sliced inverse regression (KSIR) and kernel canonical correlation analysis (KCCA), and explore their relationships within the established framework. We also briefly explain nonlinear SDR for functional data, and present practical aspects such as algorithmic implementations. The paper concludes with remarks on the dimensionality problem of the target function class.
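
    At the operator level, GSIR-type methods solve an eigenproblem built from covariance operators on reproducing-kernel Hilbert spaces; the following is a hedged summary of the shape of that problem (regularization details differ across papers):

    $$ \Sigma_{XX}^{-1}\,\Sigma_{XY}\,\Sigma_{YY}^{-1}\,\Sigma_{YX}\, f \;=\; \lambda\, f, $$

    where, in samples, the inverses are replaced by ridge-regularized versions such as $(\widehat{\Sigma}_{XX} + \epsilon I)^{-1}$ computed from centered Gram matrices, because the covariance operators are not boundedly invertible.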

Naive Bayes classifiers boosted by sufficient dimension reduction: applications to top-k classification

  • Yang, Su Hyeong; Shin, Seung Jun; Sung, Wooseok; Lee, Choon Won
    • Communications for Statistical Applications and Methods, v.29 no.5, pp.603-614, 2022
  • The naive Bayes classifier is one of the most straightforward classification tools and directly estimates class probabilities. However, because it relies on an independence assumption among the predictors that is rarely satisfied in real-world problems, its application is limited in practice. In this article, we propose employing sufficient dimension reduction (SDR) to substantially improve the performance of the naive Bayes classifier, which often deteriorates when the number of predictors is not small. This is not surprising: SDR reduces the predictor dimension without sacrificing classification information, and the predictors in the reduced space are constructed to be uncorrelated, so SDR leads the naive Bayes classifier to no longer be naive (see the sketch below). We apply the proposed classifier after SDR to build a recommendation system for eyewear frames based on customers' face shapes, demonstrating its utility in the top-k classification problem.
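
    A minimal sketch of the SDR-then-naive-Bayes idea. SIR with classes as slices is a natural reduction choice here, but the specific method, the dimension d, and the eyewear application details are the paper's; this illustration only shows the mechanics of top-k prediction after reduction:

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def sir_directions(X, y, d):
        """SIR for a categorical response: the classes act as slices.
        Returns a basis of the reduction subspace on the original X scale."""
        n, p = X.shape
        L = np.linalg.cholesky(np.cov(X, rowvar=False))
        Linv = np.linalg.inv(L)
        Z = (X - X.mean(axis=0)) @ Linv.T
        M = np.zeros((p, p))
        for c in np.unique(y):
            zbar = Z[y == c].mean(axis=0)
            M += np.mean(y == c) * np.outer(zbar, zbar)
        vals, vecs = np.linalg.eigh(M)
        V = vecs[:, np.argsort(vals)[::-1][:d]]
        return Linv.T @ V   # map directions back from the Z scale

    def top_k_predict(X_tr, y_tr, X_te, d=2, k=3):
        """Naive Bayes on the SIR-reduced predictors; return the k classes
        with the highest posterior probability for each test point."""
        B = sir_directions(X_tr, y_tr, d)
        nb = GaussianNB().fit(X_tr @ B, y_tr)
        prob = nb.predict_proba(X_te @ B)
        return nb.classes_[np.argsort(prob, axis=1)[:, ::-1][:, :k]]
    ```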

More on directional regression

  • Kim, Kyongwon; Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, v.28 no.5, pp.553-562, 2021
  • Directional regression (DR; Li and Wang, 2007) is well known as an exhaustive sufficient dimension reduction method and performs well in complex regression models with both linear and nonlinear trends. However, DR has not been extended much to date, so we extend it to accommodate multivariate regression and large p-small n regression. We propose three versions of DR for multivariate regression and discuss how DR can be applied in the large p-small n case (the population kernel is recalled below). Numerical studies confirm that DR is robust to the number of clusters and to the choice between hierarchical-clustering and pooled DR.
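
    For reference, the population kernel of directional regression, restated here from memory of Li and Wang (2007) and worth checking against the paper: with $Z$ the standardized predictor and $(\widetilde{Y}, \widetilde{Z})$ an independent copy of $(Y, Z)$,

    $$ A(Y,\widetilde{Y}) = \mathrm{E}\!\left[(Z-\widetilde{Z})(Z-\widetilde{Z})^{\top} \,\middle|\, Y,\widetilde{Y}\right], \qquad G = \mathrm{E}\!\left[\bigl(2I_p - A(Y,\widetilde{Y})\bigr)^{2}\right], $$

    and under the linearity and constant-covariance conditions $\operatorname{span}(G)$ recovers the central subspace. Sample versions replace the conditional expectations with slice-wise averages, which is where the number of slices (or clusters, for the multivariate extensions above) enters.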

DR-LSTM: Dimension reduction based deep learning approach to predict stock price

  • Ah-ram Lee; Jae Youn Ahn; Ji Eun Choi; Kyongwon Kim
    • Communications for Statistical Applications and Methods, v.31 no.2, pp.213-234, 2024
  • In recent decades, increasing research attention has been directed toward predicting stock prices in financial markets using deep learning methods. For instance, the recurrent neural network (RNN) is known to be competitive for time-series data, and long short-term memory (LSTM) further improves the RNN by providing an alternative approach to the vanishing-gradient problem; LSTM gains predictive accuracy by retaining memory over longer horizons. In this paper, we combine both supervised and unsupervised dimension reduction methods with LSTM to enhance forecasting performance, and refer to this as the dimension reduction based LSTM (DR-LSTM) approach (see the sketch below). For supervised dimension reduction, we use sliced inverse regression (SIR), sparse SIR, and kernel SIR; principal component analysis (PCA), sparse PCA, and kernel PCA are used as unsupervised dimension reduction methods. Using real stock market indices (S&P 500, STOXX Europe 600, and KOSPI), we present a comparative study of predictive accuracy between the six DR-LSTM methods and time-series modeling.
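
    A minimal PyTorch sketch of the reduce-then-forecast pipeline. PCA stands in for the unsupervised reduction (one of the choices named above); the layer sizes, window length, and number of components are illustrative assumptions, not the paper's configuration:

    ```python
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.decomposition import PCA

    class DRLSTM(nn.Module):
        """Dimension-reduction-then-LSTM: run an LSTM over the reduced
        feature sequence and regress the next value on the last state."""
        def __init__(self, d_in, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(d_in, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                # x: (batch, time, d_in)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])     # predict from the last state

    def reduce_then_window(features, d=3, window=20):
        """Reduce each day's feature vector via PCA, then build rolling
        windows of length `window` for sequence learning."""
        Z = PCA(n_components=d).fit_transform(features)
        X = np.stack([Z[i:i + window] for i in range(len(Z) - window)])
        return torch.tensor(X, dtype=torch.float32)
    ```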

The dimension reduction algorithm for the positive realization of discrete phase-type distributions

  • Kim, Kyung-Sup
    • Journal of the Korean Society for Industrial and Applied Mathematics, v.16 no.1, pp.51-64, 2012
  • This paper provides an efficient dimension reduction algorithm for the positive realization of discrete phase-type (DPH) distributions. The relationship between the representation of DPH distributions and the positive realization of a positive system is explained. The dimension of a positive realization of a DPH distribution may be larger than the McMillan degree of its probability generating function (the standard representation is recalled below). A positive realization with a sufficiently large dimension bound is easy to obtain, but the minimal positive realization problem remains unsolved in general. We propose an efficient dimension reduction algorithm that produces a positive realization with a tighter upper bound from a given probability generating function, formulated in terms of a convex-cone problem and linear programming.
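
    For orientation, the standard DPH representation $(\boldsymbol{\alpha}, T)$ of order $n$ (a well-known formulation, with no mass at zero for simplicity): with $T$ substochastic and $\mathbf{t} = (I - T)\mathbf{1}$,

    $$ \Pr(N = k) = \boldsymbol{\alpha}\, T^{k-1} \mathbf{t}, \quad k \ge 1, \qquad G(z) = \sum_{k \ge 1} z^{k}\, \boldsymbol{\alpha}\, T^{k-1} \mathbf{t} = z\, \boldsymbol{\alpha}\, (I - zT)^{-1} \mathbf{t}. $$

    The order $n$ of $(\boldsymbol{\alpha}, T)$ can strictly exceed the McMillan degree of $G(z)$, which is exactly the gap the dimension reduction algorithm tries to shrink.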

Effect of Dimension in Optimal Dimension Reduction Estimation for Conditional Mean Multivariate Regression

  • Seo, Eun-Kyoung; Park, Chong-Sun
    • Communications for Statistical Applications and Methods, v.19 no.1, pp.107-115, 2012
  • Yoo and Cook (2007) developed an optimal sufficient dimension reduction methodology for the conditional mean in multivariate regression; their method is known to be asymptotically optimal, and its test statistic has an asymptotic chi-squared distribution under the null hypothesis (the sequential test is sketched below). To check the effect of the dimension used in estimation on the regression coefficients and on the explanatory power of the conditional mean model, we applied their method to several simulated data sets with various dimensions. A small simulation study showed that it is quite helpful to search for an appropriate dimension for a given data set by using the asymptotic dimension test together with estimation results obtained at several dimensions simultaneously.
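
    The dimension search described above is typically run as a sequence of tests; a hedged sketch of the usual recipe, with degrees of freedom of the form commonly reported for such statistics:

    $$ H_0: d = m \quad \text{vs.} \quad H_1: d > m, \qquad \hat{\Lambda}_m = n \sum_{j > m} \hat{\lambda}_j \;\xrightarrow{d}\; \chi^2_{(p-m)(r-m)} \ \text{under } H_0, $$

    starting from $m = 0$ and taking $\hat{d}$ as the smallest $m$ at which $H_0$ is not rejected. Examining the coefficient estimates and fitted means at several dimensions around $\hat{d}$, as the study above suggests, guards against borderline test decisions.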