• Title/Summary/Keyword: dimension reduction based methods


DR-LSTM: Dimension reduction based deep learning approach to predict stock price

  • Ah-ram Lee;Jae Youn Ahn;Ji Eun Choi;Kyongwon Kim
    • Communications for Statistical Applications and Methods / Vol.31 No.2 / pp.213-234 / 2024
  • In recent decades, increasing research attention has been directed toward predicting stock prices in financial markets using deep learning methods. For instance, the recurrent neural network (RNN) is known to be competitive for time-series datasets. Long short-term memory (LSTM) further improves the RNN by providing an alternative approach to the vanishing gradient problem, and it gains predictive accuracy by retaining memory over longer horizons. In this paper, we combine both supervised and unsupervised dimension reduction methods with LSTM to enhance forecasting performance and refer to this as the dimension reduction based LSTM (DR-LSTM) approach. As supervised dimension reduction methods, we use sliced inverse regression (SIR), sparse SIR, and kernel SIR. Principal component analysis (PCA), sparse PCA, and kernel PCA are used as unsupervised dimension reduction methods. Using real stock market index datasets (S&P 500, STOXX Europe 600, and KOSPI), we present a comparative study of predictive accuracy between the six DR-LSTM methods and time series modeling.
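As an illustration of the unsupervised branch of this pipeline, the sketch below reduces a panel of predictors with PCA before forecasting; the data are synthetic stand-ins (the paper's market datasets are not reproduced here), and the reduced series `Z` would then be fed to an LSTM in place of the raw predictors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a panel of p correlated market predictors over T days,
# driven by d latent factors plus noise.
T, p, d = 200, 10, 3
latent = rng.standard_normal((T, d))
X = latent @ rng.standard_normal((d, p)) + 0.1 * rng.standard_normal((T, p))

# Unsupervised reduction step of DR-LSTM: project onto the top-d principal axes.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:d].T   # T x d reduced series, fed to the LSTM instead of X

# Fraction of variance kept by the reduction.
explained = (s[:d] ** 2).sum() / (s ** 2).sum()
```

The supervised variants (SIR and its relatives) would replace the SVD step with a response-guided decomposition while leaving the downstream forecaster unchanged.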

Fused inverse regression with multi-dimensional responses

  • Cho, Youyoung;Han, Hyoseon;Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / Vol.28 No.3 / pp.267-279 / 2021
  • Regression with multi-dimensional responses is quite common nowadays in the so-called big data era. In such regression, to relieve the curse of dimensionality caused by the high dimension of the responses, dimension reduction of the predictors is essential in the analysis. Sufficient dimension reduction provides effective tools for this reduction, but few sufficient dimension reduction methodologies exist for multivariate regression. To fill this gap, we propose two new fused slice-based inverse regression methods. The proposed approaches are robust to the number of clusters or slices and improve estimation over existing methods by fusing many kernel matrices. Numerical studies are presented and compared with existing methods. Real data analysis confirms the practical usefulness of the proposed methods.
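A minimal sketch of the fusing idea, under simplifying assumptions: synthetic data, and a simple fusing scheme that slices each response coordinate separately and sums the resulting SIR kernel matrices over several slice numbers (the paper's exact construction may differ).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multivariate regression: both responses depend on X only through b'X.
n, p = 500, 6
X = rng.standard_normal((n, p))
b = np.array([1.0, -1.0, 0, 0, 0, 0])
Y = np.column_stack([X @ b + 0.2 * rng.standard_normal(n),
                     (X @ b) ** 2 + 0.2 * rng.standard_normal(n)])

Z = (X - X.mean(0)) / X.std(0)  # standardized predictors

def sir_kernel(Z, y, H):
    """SIR kernel for one response coordinate with H slices."""
    order = np.argsort(y)
    M = np.zeros((Z.shape[1], Z.shape[1]))
    for chunk in np.array_split(order, H):
        m = Z[chunk].mean(0)
        M += (len(chunk) / len(y)) * np.outer(m, m)
    return M

# Fuse kernels over response coordinates and several slice numbers, so the
# estimate is less sensitive to any single slicing scheme.
M = sum(sir_kernel(Z, Y[:, j], H)
        for j in range(Y.shape[1]) for H in (4, 8, 12))
vals, vecs = np.linalg.eigh(M)
bhat = vecs[:, -1]  # leading estimated direction
```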

Applications of response dimension reduction in large p-small n problems

  • Minjee Kim;Jae Keun Yoo
    • Communications for Statistical Applications and Methods / Vol.31 No.2 / pp.191-202 / 2024
  • The goal of this paper is to show how multivariate regression analysis with high-dimensional responses is facilitated by response dimension reduction. Multivariate regression, characterized by multi-dimensional response variables, is increasingly prevalent across diverse fields such as repeated measures, longitudinal studies, and functional data analysis. One of the key challenges in analyzing such data is managing the response dimensions, which can complicate the analysis through an exponential increase in the number of parameters. Although response dimension reduction methods have been developed, no practically useful illustration exists for various types of data such as so-called large p-small n data. This paper aims to fill this gap by showcasing how response dimension reduction can enhance the analysis of high-dimensional response data, thereby providing significant assistance to statistical practitioners and contributing to advancements in multiple scientific domains.

Classification of Microarray Gene Expression Data by MultiBlock Dimension Reduction

  • Oh, Mi-Ra;Kim, Seo-Young;Kim, Kyung-Sook;Baek, Jang-Sun;Son, Young-Sook
    • Communications for Statistical Applications and Methods / Vol.13 No.3 / pp.567-576 / 2006
  • In this paper, we apply multiblock dimension reduction methods to tumor classification based on microarray gene expression data. The procedure involves clustering selected genes, multiblock dimension reduction, and classification using linear discriminant analysis and quadratic discriminant analysis.

Classification Using Sliced Inverse Regression and Sliced Average Variance Estimation

  • Lee, Hakbae
    • Communications for Statistical Applications and Methods / Vol.11 No.2 / pp.275-285 / 2004
  • We explore classification analysis using graphical methods based on dimension reduction, namely sliced inverse regression and sliced average variance estimation. Useful information about classification analysis is obtained through the dimension reduction these two methods perform. Two examples are illustrated, and the classification rates obtained by sliced inverse regression and sliced average variance estimation are compared with those of discriminant analysis and logistic regression.
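The two kernels can be contrasted on a toy classification problem where the classes share a mean but differ in spread, so the slice-mean kernel (SIR) is blind while the slice-variance kernel (SAVE) recovers the discriminating direction. The data and the specific construction below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-class toy: the class means coincide, but class 1 has larger spread
# along coordinate 0, so only a second-moment method can find the direction.
n = 1000
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 4))
X[:, 0] *= np.where(y == 1, 3.0, 1.0)   # variance signal in coordinate 0

Z = (X - X.mean(0)) / X.std(0)
p_dim = Z.shape[1]

M_sir = np.zeros((p_dim, p_dim))
M_save = np.zeros((p_dim, p_dim))
for c in (0, 1):
    Zc = Z[y == c]
    f = len(Zc) / n                     # slice (class) proportion
    m = Zc.mean(0)                      # slice mean
    V = np.cov(Zc, rowvar=False)        # slice covariance
    M_sir += f * np.outer(m, m)                              # SIR kernel
    M_save += f * (np.eye(p_dim) - V) @ (np.eye(p_dim) - V)  # SAVE kernel

dir_save = np.linalg.eigh(M_save)[1][:, -1]  # SAVE's leading direction
```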

Method-Free Permutation Predictor Hypothesis Tests in Sufficient Dimension Reduction

  • Lee, Kyungjin;Oh, Suji;Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / Vol.20 No.4 / pp.291-300 / 2013
  • In this paper, we propose method-free permutation predictor hypothesis tests in the context of sufficient dimension reduction. Unlike an existing method-free bootstrap approach, predictor hypotheses are evaluated based on p-values, so ordinary statistical practitioners may find the tests preferable. Numerical studies validate the developed theories, and a real data application is provided.
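A generic sketch of the permutation mechanics behind such a test, with a simple absolute-correlation statistic standing in for the paper's actual test statistic, and synthetic data in place of a real application:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression: predictor X1 matters, X2 does not.
n = 300
X = rng.standard_normal((n, 2))
y = X[:, 0] + 0.5 * rng.standard_normal(n)

def perm_pvalue(x, y, B=500, rng=rng):
    """Permutation p-value for 'x carries no information about y',
    using |corr(x, y)| as a stand-in test statistic."""
    stat = abs(np.corrcoef(x, y)[0, 1])
    perm = np.array([abs(np.corrcoef(rng.permutation(x), y)[0, 1])
                     for _ in range(B)])
    # Add-one correction keeps the p-value strictly positive.
    return (1 + (perm >= stat).sum()) / (B + 1)

p_active = perm_pvalue(X[:, 0], y)   # should be small
p_noise = perm_pvalue(X[:, 1], y)    # should be unremarkable
```

The appeal noted in the abstract is visible here: the output is an ordinary p-value, interpretable without reference to the particular dimension reduction method used.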

Intensive numerical studies of optimal sufficient dimension reduction with singularity

  • Yoo, Jae Keun;Gwak, Da-Hae;Kim, Min-Sun
    • Communications for Statistical Applications and Methods / Vol.24 No.3 / pp.303-315 / 2017
  • Yoo (2015, Statistics and Probability Letters, 99, 109-113) derives theoretical results on optimal sufficient dimension reduction with a singular inner-product matrix. The results are promising, but Yoo (2015) presents only one simulation study, so an evaluation of practical usefulness based on numerical studies is necessary. This paper studies the asymptotic behavior of Yoo (2015) through various simulation models and presents a real data example focusing on ordinary least squares. Intensive numerical studies show that the $\chi^2$ test of Yoo (2015) outperforms the existing optimal sufficient dimension reduction method. The basis estimation by the former can be theoretically sub-optimal; however, there are no notable differences from that by the latter. This investigation confirms the practical usefulness of Yoo (2015).

A Comparative Experiment on Dimensional Reduction Methods Applicable for Dissimilarity-Based Classifications

  • Kim, Sang-Woon
    • Journal of the Institute of Electronics and Information Engineers / Vol.53 No.3 / pp.59-66 / 2016
  • This paper reports experimental results comparing dimension reduction methods that can make dissimilarity-based classification (DBC) efficient. In DBC, objects are classified by measuring the dissimilarities between them rather than by using the measured feature values (feature vectors) of each object. One current issue in DBC is that the dimension of the dissimilarity space becomes very high when large-scale data are handled. To address this, prototype selection (PS) or dimension reduction (DR) methods are currently used. PS constructs the dissimilarity space from prototypes extracted from the entire training data, while DR first constructs the dissimilarity space from the entire training data and then reduces its dimension. Instead of PS or DR, this paper proposes a DBC in which an eigen space (ES) of appropriate dimension is constructed by principal component analysis of the training data, and the $l_p$-norm distances between the vectors mapped into this eigen space are used as dissimilarity measures. Measuring the classification performance of DBC performed in the ES with the nearest-neighbor rule on artificial and real-world data publicly available on the Internet confirms that, when the dimension of the eigen space is chosen appropriately, classification performance improves over DBC using PS or DR.
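A compact sketch of the proposed pipeline on synthetic two-class data: build the PCA eigen space, measure $l_p$-norm dissimilarities there, and classify with the nearest-neighbor rule. All sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two Gaussian classes in 20 dimensions, shifted apart.
n_per, p, d = 100, 20, 5
X = np.vstack([rng.standard_normal((n_per, p)),
               rng.standard_normal((n_per, p)) + 1.5])
y = np.repeat([0, 1], n_per)

# Step 1: eigen space (ES) from PCA of the training data.
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
E = Xc @ Vt[:d].T                        # n x d eigen-space coordinates

# Step 2: dissimilarities = pairwise l_p distances in the eigen space.
p_norm = 2
D = (np.abs(E[:, None, :] - E[None, :, :]) ** p_norm).sum(-1) ** (1.0 / p_norm)

# Step 3: leave-one-out nearest-neighbor classification on dissimilarities.
np.fill_diagonal(D, np.inf)              # exclude each point as its own neighbor
pred = y[D.argmin(1)]
acc = (pred == y).mean()
```

In the paper's experiments the choice of the eigen-space dimension `d` is the tuning knob; here it is fixed for brevity.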

Incremental Linear Discriminant Analysis for Streaming Data Using the Minimum Squared Error Solution

  • Lee, Kyung-Hoon;Park, Cheong-Hee
    • Journal of KIISE / Vol.45 No.1 / pp.69-75 / 2018
  • For streaming data that arrives sequentially over time, dimension reduction techniques based on batch learning, which use the entire data set at once, are difficult to apply. Incremental dimension reduction methods applicable to streaming data have therefore been studied. This paper proposes an incremental linear discriminant analysis method based on the minimum squared error solution. Instead of computing scatter matrices directly, the proposed method incrementally updates the projection directions for dimension reduction using the information in each newly arriving sample. Experimental results demonstrate that the proposed method is more effective than previously proposed incremental dimension reduction algorithms.
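The minimum-squared-error route to a discriminant can be sketched as follows. This is a generic streaming MSE discriminant that accumulates sufficient statistics sample by sample, avoiding explicit scatter matrices; it is not the paper's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(5)

class IncrementalMSELDA:
    """Streaming two-class discriminant via the minimum-squared-error
    formulation: maintain A = sum x x' and b = sum t x (t = +/-1 label)
    and solve A w = b on demand. No scatter matrices are formed."""

    def __init__(self, dim):
        self.A = 1e-6 * np.eye(dim + 1)   # small ridge keeps A invertible
        self.b = np.zeros(dim + 1)

    def update(self, x, label):           # label in {0, 1}
        xa = np.append(x, 1.0)            # augment with a bias term
        t = 1.0 if label == 1 else -1.0
        self.A += np.outer(xa, xa)
        self.b += t * xa

    def predict(self, x):
        w = np.linalg.solve(self.A, self.b)
        return int(np.append(x, 1.0) @ w > 0)

model = IncrementalMSELDA(dim=3)
# Stream training samples one at a time, as in the streaming-data setting.
for _ in range(400):
    label = int(rng.integers(0, 2))
    x = rng.standard_normal(3) + (2.0 if label else -2.0)
    model.update(x, label)

acc = np.mean([model.predict(rng.standard_normal(3) + (2.0 if c else -2.0)) == c
               for c in rng.integers(0, 2, 200)])
```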

Naive Bayes classifiers boosted by sufficient dimension reduction: applications to top-k classification

  • Yang, Su Hyeong;Shin, Seung Jun;Sung, Wooseok;Lee, Choon Won
    • Communications for Statistical Applications and Methods / Vol.29 No.5 / pp.603-614 / 2022
  • The naive Bayes classifier is one of the most straightforward classification tools and directly estimates the class probability. However, because it relies on the independence assumption for the predictors, which is rarely satisfied in real-world problems, its application is limited in practice. In this article, we propose employing sufficient dimension reduction (SDR) to substantially improve the performance of the naive Bayes classifier, which often deteriorates when the number of predictors is not restrictively small. This is not surprising, as SDR reduces the predictor dimension without sacrificing classification information, and the predictors in the reduced space are constructed to be uncorrelated. Therefore, SDR leads the naive Bayes classifier to no longer be naive. We apply the proposed classifier after SDR to build a recommendation system for eyewear frames based on customers' face shapes, demonstrating its utility in the top-k classification problem.
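A rough sketch of the idea on synthetic three-class data: SIR (with classes as slices) supplies the reduced, near-uncorrelated predictors, on which a Gaussian naive Bayes scores the classes and returns top-k label sets. All data, sizes, and helper names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Three-class toy with six predictors; only the first two carry class signal.
n_per, p = 200, 6
means = np.array([[3, 3, 0, 0, 0, 0],
                  [-3, 3, 0, 0, 0, 0],
                  [0, -3, 0, 0, 0, 0]], float)
y = np.repeat([0, 1, 2], n_per)
X = means[y] + rng.standard_normal((3 * n_per, p))

# SDR step (SIR with classes as slices): directions from the between-class kernel.
Z = (X - X.mean(0)) / X.std(0)
M = sum((np.sum(y == c) / len(y)) *
        np.outer(Z[y == c].mean(0), Z[y == c].mean(0)) for c in range(3))
B = np.linalg.eigh(M)[1][:, -2:]   # two leading SDR directions
R = Z @ B                          # reduced, near-uncorrelated predictors

def nb_log_scores(r):
    """Gaussian naive Bayes log-scores over classes, fit on the reduced data."""
    scores = []
    for c in range(3):
        Rc = R[y == c]
        mu, var = Rc.mean(0), Rc.var(0)
        scores.append(-0.5 * (np.log(var) + (r - mu) ** 2 / var).sum())
    return np.array(scores)

def top_k(r, k=2):
    """Top-k classification: the k best-scoring labels, best first."""
    return list(np.argsort(nb_log_scores(r))[::-1][:k])

acc = np.mean([np.argmax(nb_log_scores(R[i])) == y[i] for i in range(len(y))])
```

Because naive Bayes is fit in the reduced space, its independence assumption is far closer to holding than it would be on the raw correlated predictors.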