• Title/Summary/Keyword: dimension-reduction


Generalized Partially Double-Index Model: Bootstrapping and Distinguishing Values

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.3
    • /
    • pp.305-312
    • /
    • 2015
  • We extend the generalized partially linear single-index model and newly define a generalized partially double-index model (GPDIM). The philosophy of sufficient dimension reduction is adopted in the GPDIM to estimate the unknown coefficient vectors in the model. Subsequently, various combinations of popular sufficient dimension reduction methods are constructed, and the best combination among the candidates is determined through a bootstrapping procedure that measures distances between subspaces. Distinguishing values are newly defined to match the estimates to the corresponding population coefficient vectors. One strength of the proposed model is that it can assess the appropriateness of the GPDIM over a single-index model. Various numerical studies confirm the proposed approach, and a real data application is presented for illustration.
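The bootstrap described above selects the method combination by measuring distances between estimated subspaces. A common such distance (one plausible choice, not necessarily the paper's exact metric) is the Frobenius norm of the difference of the orthogonal projection matrices onto the two subspaces; a minimal numpy sketch:

```python
import numpy as np

def subspace_distance(B1, B2):
    """Frobenius distance between projections onto span(B1) and span(B2)."""
    # Orthonormalize the basis matrices (columns span each subspace)
    Q1, _ = np.linalg.qr(B1)
    Q2, _ = np.linalg.qr(B2)
    P1 = Q1 @ Q1.T
    P2 = Q2 @ Q2.T
    return np.linalg.norm(P1 - P2, ord="fro")

# Identical subspaces have distance 0; orthogonal ones are maximally far.
b1 = np.array([[1.0], [0.0], [0.0]])
b2 = np.array([[2.0], [0.0], [0.0]])   # same span as b1
b3 = np.array([[0.0], [1.0], [0.0]])   # orthogonal to b1
print(subspace_distance(b1, b2))  # ~0.0
print(subspace_distance(b1, b3))  # ~sqrt(2)
```

A bootstrap procedure would re-estimate the basis on each resample and prefer the method combination whose subspace estimates are, on average, closest to each other.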

Reliability Analysis Using Dimension Reduction Method with Variable Sampling Points (가변적인 샘플링을 이용한 차원 감소법에 의한 신뢰도 해석 기법)

  • Yook, Sun-Min;Min, Jun-Hong;Kim, Dong-Ho;Choi, Dong-Hoon
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.33 no.9
    • /
    • pp.870-877
    • /
    • 2009
  • This study shows how the dimension reduction (DR) method, an efficient technique for reliability analysis, can be made more efficient when applied to highly nonlinear problems. For highly nonlinear engineering systems, 4N+1 sampling (N: number of random variables) is generally recognized as appropriate. However, the criterion for judging the nonlinearity of a system is uncertain, and the degree of nonlinearity may differ across the random variables. This study therefore judges linearity individually for each random variable after 2N+1 sampling. If high nonlinearity appears, two additional samples are taken for that random variable before applying the DR method. Applying the proposed sampling to example problems produced consistent results with increased efficiency.
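The 2N+1 sampling count comes from the univariate dimension-reduction idea: the response is approximated by a sum of one-dimensional functions, so statistical moments need only per-variable quadrature points plus one shared mean point. A minimal sketch for independent normal inputs (3-point Gauss-Hermite per variable; the example function and parameters are illustrative, not from the paper):

```python
import numpy as np

def dr_mean(g, mu, sigma):
    """Estimate E[g(X)] for independent normal X via univariate DR.

    g(x) ~ sum_i g(mu_1,...,x_i,...,mu_N) - (N-1) g(mu), so only 2N+1
    distinct points are needed (the center node of each variable's
    quadrature coincides with the mean point).
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n = len(mu)
    # 3-point Gauss-Hermite nodes/weights for a standard normal variable
    nodes = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
    weights = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
    total = -(n - 1) * g(mu)
    for i in range(n):
        for t, w in zip(nodes, weights):
            x = mu.copy()
            x[i] = mu[i] + sigma[i] * t
            total += w * g(x)
    return total

# Exact answer for g(x) = x1^2 + x2^2 with standard normals is 2.
est = dr_mean(lambda x: x[0] ** 2 + x[1] ** 2, [0.0, 0.0], [1.0, 1.0])
print(est)  # 2.0
```

The variable-sampling idea in the abstract amounts to augmenting this per-variable quadrature (toward 4N+1-style sampling) only for variables judged highly nonlinear.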

Face recognition by PLS

  • Baek, Jang-Sun
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2003.10a
    • /
    • pp.69-72
    • /
    • 2003
  • This paper considers partial least squares (PLS) as a new dimension reduction technique for the feature vector, to overcome the small-sample-size problem in face recognition. Principal component analysis (PCA), a conventional dimension reduction method, selects the components with maximum variability, irrespective of class information, so PCA does not necessarily extract features that are important for discriminating between classes. PLS, on the other hand, constructs its components so that their correlation with the class variable is maximized; PLS components are therefore more predictive than PCA components in classification. Experimental results on the Manchester and ORL databases show that PLS is to be preferred over PCA when classification is the goal and dimension reduction is needed.
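The PCA-vs-PLS contrast in the abstract can be shown on synthetic data (the data and dimensions here are illustrative): the first PLS1 weight vector is the normalized covariance between the centered features and the class variable, so it ignores high-variance noise that dominates the leading PCA direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
y = np.repeat([0.0, 1.0], n)            # two classes
X = np.column_stack([
    y + 0.1 * rng.standard_normal(2 * n),  # discriminative, low variance
    5.0 * rng.standard_normal(2 * n),      # high variance, no class info
])
Xc = X - X.mean(axis=0)
yc = y - y.mean()

# PCA direction: leading right singular vector (maximum variance)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_dir = Vt[0]

# First PLS1 weight vector: maximizes covariance with the class variable
w = Xc.T @ yc
pls_dir = w / np.linalg.norm(w)

print(np.abs(pca_dir))  # dominated by the noisy feature 1
print(np.abs(pls_dir))  # dominated by the discriminative feature 0
```

Projecting onto the PLS direction separates the classes; projecting onto the PCA direction mostly captures noise, which is the abstract's point.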

Improving Dimension Reduction Method Using Kriging Interpolation (Kriging 보간법을 사용한 개선된 차원감소법)

  • Choi, Joo-Ho;Choi, Chang-Hyun
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2007.04a
    • /
    • pp.135-140
    • /
    • 2007
  • In this paper, an Improved Dimension Reduction (IDR) method that employs the Kriging interpolation technique is proposed for uncertainty quantification (UQ). The DR method is acknowledged to be accurate and efficient for assessing statistical moments and reliability owing to its sensitivity-free feature; however, it has drawbacks such as instability and inaccuracy for problems with increased nonlinearity. The improved DR method is implemented in three steps. First, the Kriging interpolation method is used to accurately approximate the responses. Second, 2N+1 and 4N+1 ADOEs are proposed to maintain high accuracy of the method for UQ analysis. Third, a numerical integration scheme is used with response values that are accurate yet freely available at any set of integration points of the surrogate model.
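The key property exploited above is that a Kriging surrogate interpolates the sampled responses exactly, so integration points can be evaluated at no extra simulation cost. A minimal one-dimensional sketch with a Gaussian kernel (the kernel, length scale, and test function are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def kriging_interpolate(x_train, y_train, x_query, length=0.5, nugget=1e-10):
    """One-dimensional Kriging (Gaussian-kernel) interpolator sketch."""
    x_train = np.asarray(x_train, float)
    x_query = np.asarray(x_query, float)
    # Kernel matrix between training points, with a tiny nugget for stability
    K = np.exp(-((x_train[:, None] - x_train[None, :]) / length) ** 2)
    K += nugget * np.eye(len(x_train))
    alpha = np.linalg.solve(K, np.asarray(y_train, float))
    # Cross-kernel between query and training points
    k_star = np.exp(-((x_query[:, None] - x_train[None, :]) / length) ** 2)
    return k_star @ alpha

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.sin(x)
pred_train = kriging_interpolate(x, y, x)               # reproduces the samples
pred_mid = kriging_interpolate(x, y, np.array([0.25]))  # between samples
```

Once fitted, the surrogate can be queried at any quadrature node for the moment integrals, which is what makes the Kriging-based DR variant attractive.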

Action Recognition with deep network features and dimension reduction

  • Li, Lijun;Dai, Shuling
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.832-854
    • /
    • 2019
  • Action recognition has been studied in the computer vision field for years. We present an effective approach to recognizing actions that uses dimension reduction as a crucial step to reduce the dimensionality of feature descriptors after feature extraction. We propose to modify Local Fisher Discriminant Analysis using sparse matrices and a randomized kd-tree, yielding the modified Local Fisher Discriminant Analysis (mLFDA) method, which greatly reduces the required memory and accelerates the standard algorithm. For feature encoding, we propose mix encoding, which combines Fisher vector encoding and locality-constrained linear coding to obtain the final video representations. To add more meaningful features to the action recognition process, a convolutional neural network is combined with mix encoding to produce deep network features. Experimental results show that our algorithm is competitive on the KTH, HMDB51 and UCF101 datasets when all these methods are combined.
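At the core of LFDA (and of the mLFDA variant above) is the Fisher discriminant step: find directions maximizing between-class scatter relative to within-class scatter. The sketch below implements only the standard, non-local Fisher step that LFDA localizes with affinity weights; it is a simplification, not the paper's algorithm:

```python
import numpy as np

def fisher_direction(X, y):
    """Leading Fisher discriminant direction: maximize between-class
    scatter relative to within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Generalized eigenproblem Sb v = lambda Sw v (small ridge for stability)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-8 * np.eye(d), Sb))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X0 = rng.standard_normal((100, 3)) + np.array([2.0, 0.0, 0.0])
X1 = rng.standard_normal((100, 3)) - np.array([2.0, 0.0, 0.0])
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 100)
w = fisher_direction(X, y)
print(np.abs(w))  # dominated by the first coordinate, the separating axis
```

mLFDA replaces the dense scatter matrices with sparse, locally weighted counterparts (neighbors found via a randomized kd-tree), which is where the memory and speed gains come from.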

A selective review of nonlinear sufficient dimension reduction

  • Sehun Jang;Jun Song
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.2
    • /
    • pp.247-262
    • /
    • 2024
  • In this paper, we explore nonlinear sufficient dimension reduction (SDR) methods, with a primary focus on establishing a foundational framework that integrates various nonlinear SDR methods. We illustrate generalized sliced inverse regression (GSIR) and generalized sliced average variance estimation (GSAVE), both of which fit within the framework. Further, we delve into nonlinear extensions of inverse moments through the kernel trick, specifically examining kernel sliced inverse regression (KSIR) and kernel canonical correlation analysis (KCCA), and explore their relationships within the established framework. We also briefly explain nonlinear SDR for functional data. In addition, we present practical aspects such as algorithmic implementations. The paper concludes with remarks on the dimensionality problem of the target function class.
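GSIR and KSIR generalize the classical linear SIR estimator, which is worth seeing concretely: slice the response, average the standardized predictors within each slice, and eigen-decompose the covariance of those slice means. A minimal sketch (the toy regression and slice count are illustrative):

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=1):
    """Classical (linear) sliced inverse regression."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Standardize: Z = (X - mu) Sigma^{-1/2}
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice on the response and average Z within each slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    _, evecs = np.linalg.eigh(M)           # ascending eigenvalue order
    # Map back to the original scale and take the top-d directions
    B = inv_sqrt @ evecs[:, ::-1][:, :d]
    return B / np.linalg.norm(B, axis=0)

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 4))
y = np.exp(X[:, 0]) + 0.1 * rng.standard_normal(500)  # depends on x1 only
B = sir_directions(X, y)
print(np.abs(B.ravel()))  # first coordinate dominates
```

GSIR replaces the linear directions with functions in a reproducing kernel Hilbert space, but the slice-and-eigendecompose structure carries over.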

Comparison of Methods for Reducing the Dimension of Compositional Data with Zero Values

  • Song, Taeg-Youn;Choi, Byung-Jin
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.4
    • /
    • pp.559-569
    • /
    • 2012
  • Compositional data consist of compositions, non-negative vectors of proportions subject to the unit-sum constraint. In disciplines such as petrology and archaeometry, statistically analyzing this type of data is fundamental. Aitchison (1983) introduced log-contrast principal component analysis, which involves log-ratio transformed data, as a dimension-reduction technique for understanding and interpreting the structure of compositional data. However, the analysis is not usable when zero values are present in the data. In this paper, we introduce four possible methods for reducing the dimension of compositional data with zero values. Two real data sets are analyzed using these methods and the results are compared.
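One standard way to handle the zero problem (a common choice in the compositional-data literature, not necessarily one of the paper's four methods) is multiplicative zero replacement followed by the centered log-ratio (clr) transform, after which log-contrast PCA is ordinary PCA on the clr data:

```python
import numpy as np

def clr_with_zero_replacement(comp, delta=1e-3):
    """Multiplicative zero replacement, then the centered log-ratio transform."""
    comp = np.asarray(comp, float)
    zeros = comp == 0
    # Replace zeros by delta; shrink nonzero parts to preserve the unit sum
    adjusted = np.where(zeros, delta,
                        comp * (1 - delta * zeros.sum(axis=1, keepdims=True)))
    log_a = np.log(adjusted)
    return log_a - log_a.mean(axis=1, keepdims=True)

comp = np.array([
    [0.50, 0.30, 0.20, 0.00],   # composition containing a zero
    [0.25, 0.25, 0.25, 0.25],
])
Z = clr_with_zero_replacement(comp)
print(Z.sum(axis=1))  # clr rows sum to 0 by construction

# Log-contrast PCA = ordinary PCA (SVD) on the centered clr data
Zc = Z - Z.mean(axis=0)
_, s, Vt = np.linalg.svd(Zc, full_matrices=False)
```

The replacement value delta is a tuning choice; results can be sensitive to it, which motivates comparing several zero-handling methods as the paper does.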

Dimension Reduction Method of Speech Feature Vector for Real-Time Adaptation of Voice Activity Detection (음성구간 검출기의 실시간 적응화를 위한 음성 특징벡터의 차원 축소 방법)

  • Park Jin-Young;Lee Kwang-Seok;Hur Kang-In
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.7 no.3
    • /
    • pp.116-121
    • /
    • 2006
  • In this paper, we propose a dimension reduction method for the multi-dimensional speech feature vector, for real-time adaptation in various noisy environments. The method reduces dimensions non-linearly by mapping speech and noise feature vectors to their likelihoods, and the likelihood ratio test (LRT) is used to classify speech and non-speech. The detection results are similar to those obtained with the full multi-dimensional speech feature vector, and speech recognition results on the detected speech segments are likewise similar to those of the multi-dimensional (10th-order MFCC (Mel-Frequency Cepstral Coefficient)) feature vector.
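The LRT decision reduces each frame to a single scalar, the log-likelihood ratio under speech and noise models, which is the dimension-reducing step. A generic diagonal-Gaussian sketch (the models and 1-D log-energy feature here are illustrative, not the paper's exact models):

```python
import numpy as np

def log_likelihood_ratio(x, mu_s, var_s, mu_n, var_n):
    """Log-likelihood ratio of a feature vector under diagonal-Gaussian
    speech and noise models; compare against a threshold to decide."""
    def log_gauss(x, mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return log_gauss(x, mu_s, var_s) - log_gauss(x, mu_n, var_n)

# Toy 1-D feature (e.g., frame log-energy): speech is louder than noise
mu_s, var_s = np.array([5.0]), np.array([1.0])
mu_n, var_n = np.array([0.0]), np.array([1.0])
llr_speech = log_likelihood_ratio(np.array([4.5]), mu_s, var_s, mu_n, var_n)
llr_noise = log_likelihood_ratio(np.array([0.5]), mu_s, var_s, mu_n, var_n)
print(llr_speech > 0)  # True: classified as speech
print(llr_noise > 0)   # False: classified as non-speech
```

Real-time adaptation then amounts to updating the noise (and possibly speech) model parameters online as the environment changes.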

An investigation of subband decomposition and feature-dimension reduction for musical genre classification (음악 장르 분류를 위한 부밴드 분해와 특징 차수 축소에 관한 연구)

  • Seo, Jin Soo;Kim, Junghyun;Park, Jihyun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.2
    • /
    • pp.144-150
    • /
    • 2017
  • Musical genre is indispensable in constructing music information retrieval systems for tasks such as music search and classification. In general, the spectral characteristics of a music signal are obtained from a subband decomposition that represents the relative distribution of the harmonic and non-harmonic components. In this paper, we investigate the subband decomposition parameters used in extracting features, with the aim of improving musical genre classification accuracy. In addition, linear projection methods are studied to reduce the resulting feature dimension. Experiments on widely used music datasets confirm that a subband decomposition finer than the widely adopted octave scale is conducive to improving genre-classification accuracy, and show that the feature-dimension reduction is effective in reducing a classifier's computational complexity.
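A subband feature of the kind discussed above can be sketched as log energies over frequency bands of a frame's power spectrum; using more, narrower bands than an octave scale gives a finer spectral picture. The equal-width banding, frame length, and sample rate below are illustrative assumptions:

```python
import numpy as np

def subband_log_energies(frame, n_bands):
    """Log energies over n_bands equal-width frequency bands of one frame."""
    spec = np.abs(np.fft.rfft(frame)) ** 2       # power spectrum, 0..Nyquist
    bands = np.array_split(spec, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-12)

sr = 8000
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 1000 * t)             # pure 1 kHz tone
feats = subband_log_energies(frame, n_bands=8)   # 8 bands of ~500 Hz each
print(np.argmax(feats))  # the band whose range ends at 1 kHz
```

A linear projection such as PCA or LDA would then map these per-band features to a lower dimension before classification, as the paper's dimension-reduction step does.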

Volatility Analysis for Multivariate Time Series via Dimension Reduction (차원축소를 통한 다변량 시계열의 변동성 분석 및 응용)

  • Song, Eu-Gine;Choi, Moon-Sun;Hwang, S.Y.
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.6
    • /
    • pp.825-835
    • /
    • 2008
  • Multivariate GARCH (MGARCH) has been useful in financial studies and econometrics for modeling volatilities and correlations between components of a multivariate time series. An obvious drawback is that the number of parameters increases rapidly with the number of variables involved. This paper tries to resolve the problem by using dimension reduction techniques. We briefly review both factor models for dimension reduction and MGARCH models, including EWMA (exponentially weighted moving-average), DVEC (diagonal VEC), BEKK and CCC (constant conditional correlation) models. We create meaningful portfolios after reducing dimension through statistical factor models and fundamental factor models, and in turn these portfolios are applied to MGARCH. In addition, we compare the portfolios by assessing MSE, MAD (mean absolute deviation) and VaR (Value at Risk). Various financial time series are analyzed for illustration.
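Of the models listed, EWMA is the simplest to state: the conditional covariance is an exponentially weighted recursion over outer products of returns. A minimal sketch (lam=0.94 is the classic RiskMetrics choice; the warm-up window and simulated returns are illustrative assumptions):

```python
import numpy as np

def ewma_covariance(returns, lam=0.94, warmup=20):
    """EWMA recursion: Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r_t r_t'."""
    sigma = np.cov(returns[:warmup], rowvar=False)  # initialize on a warm-up window
    for r in returns[warmup:]:
        r = r[:, None]
        sigma = lam * sigma + (1 - lam) * (r @ r.T)
    return sigma

# Simulate returns from a known covariance and track it with EWMA
rng = np.random.default_rng(3)
true_cov = np.array([[1.0, 0.3], [0.3, 0.5]])
L = np.linalg.cholesky(true_cov)
returns = rng.standard_normal((2000, 2)) @ L.T
sigma = ewma_covariance(returns)
print(sigma)  # roughly tracks true_cov
```

The dimension-reduction step in the paper applies such volatility models to a few factor portfolios rather than to the full cross-section, keeping the parameter count manageable.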