• Title/Summary/Keyword: input dimension reduction


Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model (Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소)

  • Kim, Kwang-Ho;Lee, Donghyun;Lim, Minkyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.7 no.4 / pp.3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors were generated by applying Google's Word2Vec to a large training corpus, so that they satisfy the distributional hypothesis. Discrete 1-of-|V| coded word vectors were then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 for a tri-gram language model with a vocabulary of 20,000 words. The total training time on the Wall Street Journal corpus (37M words) was reduced from 30 days to 14 days.
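
The replacement the abstract describes can be sketched as follows. The 300-dimensional vectors and the two-word tri-gram history are our assumptions, chosen so the continuous input comes out to the reported 600 dimensions; random vectors stand in for the trained Word2Vec output.

```python
import numpy as np

VOCAB, DIM = 20_000, 300                     # |V| from the abstract; DIM assumed
rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(VOCAB, DIM)) # stand-in for trained Word2Vec vectors

def one_hot(word_id):
    """Discrete 1-of-|V| coding: a 20,000-dim indicator vector per word."""
    v = np.zeros(VOCAB)
    v[word_id] = 1.0
    return v

def continuous_input(history_ids):
    """Concatenated continuous vectors of the tri-gram history words."""
    return word_vectors[history_ids].ravel()

history = [17, 4012]                         # ids of the two preceding words
print(one_hot(history[0]).size)              # 20000 per word
print(continuous_input(history).size)        # 600
```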

Design of Regression Model and Pattern Classifier by Using Principal Component Analysis (주성분 분석법을 이용한 회귀다항식 기반 모델 및 패턴 분류기 설계)

  • Roh, Seok-Beom;Lee, Dong-Yoon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.6 / pp.594-600 / 2017
  • This paper introduces a new design methodology for a prediction model and a pattern classifier based on principal component analysis (PCA), a dimension reduction algorithm. PCA reduces the dimension of the input space and extracts good features from the original input variables. The extracted variables then serve as inputs to the prediction model and the pattern classifier. Both are based on very simple regression, which is the key point of the paper: their structural simplicity reduces the risk of over-fitting. Several machine learning data sets are used to validate the proposed prediction model and pattern classifier.
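
A minimal sketch of this pipeline, with synthetic data and three retained components as assumptions; PCA is computed directly via the SVD, and the "very simple regression" is plain least squares on the extracted features.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))               # 200 samples, 10 input variables
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

Xc = X - X.mean(axis=0)                      # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
Z = Xc @ Vt[:k].T                            # project onto 3 leading components

A = np.c_[np.ones(len(Z)), Z]                # simple linear regression on them
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(Z.shape)                               # reduced inputs: (200, 3)
```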

Performance Improvement of Polynomial Adaline by Using Dimension Reduction of Independent Variables (독립변수의 차원감소에 의한 Polynomial Adaline의 성능개선)

  • Cho, Yong-Hyun
    • Journal of the Korean Society of Industry Convergence / v.5 no.1 / pp.33-38 / 2002
  • This paper proposes an efficient method for improving the performance of the polynomial adaline by reducing the dimension of its independent variables. Adaptive principal component analysis is applied to reduce the dimension by efficiently extracting the features of the given independent variables. Because principal component analysis converts the input data into a set of statistically independent features, it alleviates the problems caused by high-dimensional input data in the polynomial adaline. The proposed polynomial adaline was applied to pattern classification. Simulation results show that it classifies test patterns better than the conventional polynomial adaline and is less affected by the choice of the smoothing factor.
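
A rough sketch of a polynomial adaline of the kind described, assuming two already-reduced input variables, a second-order expansion, and LMS (delta-rule) training; the sizes, learning rate, and target function are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))                # assume 2 reduced input variables
y = np.sign(X[:, 0] * X[:, 1])               # a nonlinearly separable target

def poly_expand(x):
    """Second-order polynomial basis [1, x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

P = np.array([poly_expand(x) for x in X])
w = np.zeros(P.shape[1])
for _ in range(50):                          # LMS (delta rule) epochs
    for p, t in zip(P, y):
        w += 0.01 * (t - p @ w) * p          # adapt the single linear unit

acc = float(np.mean(np.sign(P @ w) == y))
print(acc)
```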


DECOUPLING OF MULTI-INPUT MULTI-OUTPUT TWO DIMENSIONAL SYSTEMS

  • Kawakami, Atsushi
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1990.10b / pp.1130-1134 / 1990
  • In this paper, we propose a method to decouple multi-input multi-output two-dimensional systems. We then analyze the realization dimension of the feedback and feedforward compensators used for decoupling, and consider the possibility of reducing the dynamical dimension needed to decouple. In addition, to stabilize the decoupled two-dimensional system, we suggest a method for assigning the poles of each entry of the transfer function matrix to desired positions.
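
The 2-D dynamics treated in the paper are beyond a short sketch, but the basic idea of decoupling by a feedforward compensator can be shown for a static gain matrix (our simplification, not the paper's construction): choosing F = G⁻¹D makes the compensated map GF = D diagonal, so each input drives exactly one output.

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])                   # illustrative coupled plant gains
D = np.diag([1.0, 1.0])                      # desired decoupled (diagonal) map
F = np.linalg.solve(G, D)                    # feedforward precompensator F = G^-1 D
print(np.allclose(G @ F, D))                 # True: the compensated map is diagonal
```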


Kriging Dimension Reduction Method for Reliability Analysis in Spring Design (스프링 설계문제의 신뢰도 해석을 위한 크리깅 기반 차원감소법의 활용)

  • Gang, Jin-Hyuk;An, Da-Wn;Won, Jun-Ho;Choi, Joo-Ho
    • Proceedings of the Computational Structural Engineering Institute Conference / 2008.04a / pp.422-427 / 2008
  • This study illustrates the usefulness of the Kriging Dimension Reduction Method (KDRM), which constructs the probability distribution of a response function under physical uncertainty in the input variables. DRM has recently received increased attention due to its sensitivity-free nature and its efficiency: considerable accuracy is obtained with only a small number of analyses. However, DRM suffers from drawbacks such as instability and inaccuracy for functions with strong nonlinearity. As a remedy, the Kriging interpolation technique, known to be more accurate for nonlinear functions, is incorporated. KDRM is applied to a compression coil spring design problem and compared with MCS methods, and the effectiveness and accuracy of the method are verified.
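
A sketch of the univariate dimension reduction idea underlying DRM: the response is approximated by a sum of one-dimensional functions along each input axis, so its mean needs only a few axis-wise evaluations instead of full Monte Carlo. The response function and input distributions here are illustrative, and the Kriging stage is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([2.0, 3.0, 1.5])               # means of the random inputs
sig = np.array([0.1, 0.2, 0.1])              # standard deviations

def response(x):
    """An additive test response (univariate DRM is exact for it)."""
    return x[0] ** 2 + 0.5 * x[0] + np.sin(x[1]) + x[2]

n = len(mu)
# Y(x) ~ sum_i Y(mu with x_i varied) - (n - 1) * Y(mu); integrate each
# one-dimensional term with 3-point Gauss-Hermite quadrature.
gx, gw = np.polynomial.hermite_e.hermegauss(3)
mean_drm = -(n - 1) * response(mu)
for i in range(n):
    for xq, wq in zip(gx, gw):
        x = mu.copy()
        x[i] = mu[i] + sig[i] * xq           # vary one input, fix the rest
        mean_drm += wq / np.sqrt(2.0 * np.pi) * response(x)

# Crude Monte Carlo reference with 100,000 samples.
samples = rng.normal(mu, sig, size=(100_000, n))
mean_mcs = response(samples.T).mean()
print(mean_drm, mean_mcs)                    # the two estimates agree closely
```

Nine response evaluations for DRM versus 100,000 for Monte Carlo is the efficiency the abstract refers to.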


Iterative MIMO Reception Based on Low Complexity Soft Detection (저연산 연판정 기반의 다중 안테나 반복검출 기법)

  • Shin, Sang-Sik;Choi, Ji-Woong
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.61-66 / 2013
  • In this paper, we propose an iterative soft multi-input multi-output (MIMO) detection scheme based on dimension reduction for coded spatial multiplexing systems. Although iterative MIMO detection performs well, its computational complexity places a significant burden on the receiver. To mitigate this problem, we propose a scheme that employs all-ordering successive interference cancellation (AOSIC) for hard-decision detection and a dimension reduction soft demodulator (DRSD) with iterative decoding for soft-decision detection. This scheme reduces the complexity of iterative soft MIMO detection while outperforming other conventional detectors.
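
A sketch of the hard-decision building block, plain zero-forcing ordered SIC for a 2x2 spatially multiplexed link with BPSK symbols; the all-ordering and soft-demodulation stages of the paper are not reproduced, and the channel and noise values are illustrative.

```python
import numpy as np

H = np.array([[1.0, 0.4],
              [0.3, 0.9]])                   # flat-fading channel matrix
s = np.array([1.0, -1.0])                    # transmitted BPSK symbols
y = H @ s + np.array([0.02, -0.01])          # received vector, light noise

def sic_detect(y, H):
    """Zero-forcing ordered SIC: best stream first, then cancel and repeat."""
    Hr, y_res = H.copy(), y.copy()
    active, s_hat = list(range(H.shape[1])), np.zeros(H.shape[1])
    while active:
        W = np.linalg.pinv(Hr)                        # zero-forcing nulling
        i = int(np.argmin(np.linalg.norm(W, axis=1))) # best post-ZF SNR
        k = active[i]
        s_hat[k] = np.sign(W[i] @ y_res)              # slice the BPSK symbol
        y_res = y_res - Hr[:, i] * s_hat[k]           # cancel its contribution
        Hr = np.delete(Hr, i, axis=1)
        active.pop(i)
    return s_hat

print(sic_detect(y, H))                      # recovers [ 1. -1.]
```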

Resistant Singular Value Decomposition and Its Statistical Applications

  • Park, Yong-Seok;Huh, Myung-Hoe
    • Journal of the Korean Statistical Society / v.25 no.1 / pp.49-66 / 1996
  • The singular value decomposition is one of the most useful methods in matrix computation. It provides dimension reduction, the central idea of many multivariate analyses. However, the method is not resistant; that is, it is very sensitive to small changes in the input data. In this article, we derive a resistant version of the singular value decomposition for principal component analysis, and apply it to the biplot, which resembles principal component analysis in reducing the dimension of an n x p data matrix. We thus obtain a resistant principal component analysis and a resistant biplot based on the resistant singular value decomposition. They provide graphical multivariate data analyses that are relatively little influenced by outlying observations.
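
The classical (non-resistant) decomposition the paper starts from can be sketched as follows: keeping the k leading singular triples of an n x p data matrix gives its best rank-k approximation, which is the dimension reduction used by PCA and the biplot. The data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 5))   # essentially rank-2 data
A_noisy = A + 1e-6 * rng.normal(size=A.shape)           # small perturbation

U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k]            # best rank-k approximation
# For a biplot, the n rows would be plotted via U[:, :k] * s[:k] and the
# p variables via the rows of Vt[:k].
print(np.abs(A_k - A).max())                 # tiny: the structure is recovered
```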


Effect of Dimension Reduction on Prediction Performance of Multivariate Nonlinear Time Series

  • Jeong, Jun-Yong;Kim, Jun-Seong;Jun, Chi-Hyuck
    • Industrial Engineering and Management Systems / v.14 no.3 / pp.312-317 / 2015
  • The dynamical system approach to time series has been used in many real problems. Based on Takens' embedding theorem, we can build a predictive function whose input is the time delay coordinate vector, consisting of lagged values of the observed series, and whose output is the future value of the series. Although the time delay coordinate vector from a multivariate time series carries more information than that from a univariate series, it can exhibit statistical redundancy that degrades the prediction function. We apply dimension reduction techniques to address this problem and analyze their effect on prediction. Our experiment uses a delayed Lorenz series, with least squares support vector regression approximating the predictive function. The results show that linearly preserving projection improves the prediction performance.
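
The delay-coordinate construction can be sketched as follows; the paper uses LS-SVR on Lorenz data, while this simplified sketch fits a linear least-squares predictor to a synthetic deterministic series (a sum of two sinusoids, which a lag-4 linear predictor captures exactly).

```python
import numpy as np

t = np.arange(0, 60, 0.1)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)        # illustrative deterministic series

m = 4                                        # embedding dimension, lag 1
X = np.column_stack([x[i:len(x) - m + i] for i in range(m)])  # delay vectors
y = x[m:]                                    # one-step-ahead targets

coef, *_ = np.linalg.lstsq(X, y, rcond=None) # linear predictor on the embedding
pred = X @ coef
print(float(np.max(np.abs(pred - y))))       # near machine precision
```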

Feature Analysis of Multi-Channel Time Series EEG Based on Incremental Model (점진적 모델에 기반한 다채널 시계열 데이터 EEG의 특징 분석)

  • Kim, Sun-Hee;Yang, Hyung-Jeong;Ng, Kam Swee;Jeong, Jong-Mun
    • The KIPS Transactions: Part B / v.16B no.1 / pp.63-70 / 2009
  • BCI technology controls communication systems or machines through brain signals, after appropriate signal processing. Implementing a BCI system requires that the characteristics of the brain signal be learned and analyzed in real time and that the learned characteristics be applied. In this paper, we extract feature vectors from EEG signals recorded during left- and right-hand movements using an incremental approach, and perform dimension reduction using the extracted features. We also show that the reduced dimension improves classification performance by removing unnecessary features: discarding unwanted features both shortens processing time and boosts accuracy. In experiments with a K-NN classifier, the proposed approach outperforms PCA-based dimension reduction by 5%.
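
The effect described, that removing uninformative channels helps a nearest-neighbour classifier, can be demonstrated on synthetic stand-in data (not the paper's EEG features or its incremental model): only the first two features carry class information, and dropping the noisy rest raises leave-one-out 1-NN accuracy.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
labels = rng.integers(0, 2, size=n)
informative = labels[:, None] * 2.0 + 0.5 * rng.normal(size=(n, 2))
noise = 3.0 * rng.normal(size=(n, 38))       # 38 irrelevant channels
X_full = np.hstack([informative, noise])
X_red = X_full[:, :2]                        # keep only informative features

def knn_accuracy(X, y):
    """Leave-one-out 1-NN accuracy."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(d, np.inf)              # exclude each point itself
    return float(np.mean(y[d.argmin(axis=1)] == y))

print(knn_accuracy(X_red, labels) > knn_accuracy(X_full, labels))
```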

Line-Segment Feature Analysis Algorithm for Handwritten-Digits Data Reduction (필기체 숫자 데이터 차원 감소를 위한 선분 특징 분석 알고리즘)

  • Kim, Chang-Min;Lee, Woo-Beom
    • KIPS Transactions on Software and Data Engineering / v.10 no.4 / pp.125-132 / 2021
  • As artificial neural networks grow deeper and the dimension of the input data increases, training and recognition require a large number of arithmetic operations at high speed. This study therefore proposes a method to reduce the dimension of the input data to a neural network. The proposed Line-segment Feature Analysis (LFA) algorithm applies a gradient-based edge detection algorithm with median filtering to analyze the line-segment features of the objects in an image. From the extracted edge image, eigenvalues corresponding to eight kinds of line segments are calculated using 3×3 or 5×5 detection filters whose coefficients are taken from [0, 1, 2, 4, 8, 16, 32, 64, 128]. Two one-dimensional 256-element vectors are produced by accumulating equal response values of the eigenvalues computed with each detection filter, and the two are added together; merging two such LFA256 vectors yields a 512-element LFA512 vector. In a comparative experiment on handwritten digit recognition against the PCA technique, using an AlexNet model, LFA256 and LFA512 achieved recognition rates of 98.7% and 99%, respectively.
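
One reading of the accumulation step, offered as a sketch rather than the authors' exact filters: weighting the 8 neighbours of each edge pixel with the powers of two [1, 2, 4, ..., 128] gives every pixel an 8-bit line-segment code in [0, 255], and a 256-bin histogram of those codes forms one LFA256-style descriptor. The edge map here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
edges = (rng.random((32, 32)) < 0.2).astype(np.uint8)   # stand-in edge image

weights = 2 ** np.arange(8)                  # [1, 2, 4, 8, 16, 32, 64, 128]
offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)] # 8 neighbour directions

codes = np.zeros_like(edges, dtype=np.int64)
for w, (dy, dx) in zip(weights, offsets):
    # add this direction's weight wherever the shifted edge map is set
    codes += w * np.roll(np.roll(edges, dy, axis=0), dx, axis=1)

hist = np.bincount(codes.ravel(), minlength=256)        # 256-bin descriptor
print(hist.size, int(hist.sum()))            # 256 bins, one count per pixel
```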