• Title/Summary/Keyword: dimension reduction method

Clustering Algorithm for Time Series with Similar Shapes

  • Ahn, Jungyu;Lee, Ju-Hong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.3112-3127 / 2018
  • Since time series clustering is performed without prior information, it is used for exploratory data analysis. In particular, clusters of time series with similar shapes can be used in various fields, such as business, medicine, finance, and communications. However, existing time series clustering algorithms have a problem in that time series with different shapes end up in the same cluster. The reason for this problem is that the existing algorithms do not constrain the size of the generated clusters and use dimension reduction methods with large information loss. In this paper, we propose a method that alleviates these disadvantages and finds better-quality clusters containing only similarly shaped time series. In the data preprocessing step, we normalize the time series using the z-transformation. Then, we use piecewise aggregate approximation (PAA) to reduce the dimension of the time series. In the clustering step, we use density-based spatial clustering of applications with noise (DBSCAN) to create preclusters. We then use a modified K-means algorithm to refine preclusters containing differently shaped time series into subclusters containing only similarly shaped time series. In our experiments, our method showed better results than the existing method.
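
A minimal sketch of the preprocessing this abstract describes (z-normalization followed by piecewise aggregate approximation), assuming equal-length series stored as NumPy arrays and a user-chosen segment count `n_segments`; the DBSCAN preclustering and modified K-means refinement are not reproduced here.

```python
import numpy as np

def z_normalize(series: np.ndarray) -> np.ndarray:
    """Z-transformation: shift to zero mean and scale to unit variance."""
    centered = series - series.mean()
    std = series.std()
    return centered / std if std > 0 else centered

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Piecewise aggregate approximation: the mean of each of n_segments frames."""
    # np.array_split handles lengths that are not a multiple of n_segments.
    return np.array([frame.mean() for frame in np.array_split(series, n_segments)])

# Example: reduce a length-128 series to a 16-dimensional PAA representation.
rng = np.random.default_rng(0)
ts = np.cumsum(rng.normal(size=128))           # a random-walk time series
reduced = paa(z_normalize(ts), n_segments=16)
print(reduced.shape)                           # (16,)
```

The reduced vectors could then be fed to, for example, scikit-learn's DBSCAN to form the preclusters mentioned above.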

A Compensation Scheme of Frequency Selective IQ Mismatch for Radar Systems (레이더 시스템을 위한 주파수 선택적 IQ 불일치 보상 기법)

  • Ryu, Yeongbin;Heo, Je;Son, Jaehyun;Choi, Mungak;Oh, Hyukjun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.4 / pp.565-571 / 2021
  • In this paper, a compensation scheme for frequency-selective IQ mismatch in high-performance radar systems based on commercial RFICs is proposed. In addition, an optimization model and its solution, based on a dimension reduction scheme using singular value decomposition, are proposed to design an optimal IQ mismatch compensation digital filter with complex coefficients. The performance of the proposed method was analyzed through experiments using an IQ mismatch measurement and compensation system implemented on an FPGA board with a target RFIC and compared with the previous method. The experimental results showed a performance improvement of the proposed method over the existing one without a noticeable increase in complexity. These results show that the limitation of using commercial RFICs in high-performance radar systems, namely the undesirable maximum SNR cap caused by their IQ mismatches, can be overcome by employing the proposed method.
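
The abstract does not spell out the optimization model, so the sketch below only illustrates the generic pattern of SVD-based dimension reduction in a complex least-squares filter design: the tap count, the convolution-matrix setup, and the truncation threshold are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def fit_compensation_filter(x, d, n_taps=16, rel_tol=1e-3):
    """Fit complex FIR taps h minimizing ||A h - d|| with a truncated-SVD pseudoinverse.

    x : complex samples of the mismatched signal (illustrative input)
    d : complex samples of the desired/reference signal
    Singular values below rel_tol * s_max are discarded; dropping these weak
    directions is the dimension-reduction step that keeps the fit well conditioned.
    """
    N = len(x)
    A = np.zeros((N, n_taps), dtype=complex)   # convolution (data) matrix
    for k in range(n_taps):
        A[k:, k] = x[:N - k]                   # A[n, k] = x[n - k]
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vh[keep].conj().T @ ((U[:, keep].conj().T @ d) / s[keep])

# Tiny usage example with synthetic data: recover a known 3-tap filter.
rng = np.random.default_rng(0)
x = rng.normal(size=256) + 1j * rng.normal(size=256)
true_h = np.array([1.0 + 0.1j, -0.2j, 0.05])
d = np.convolve(x, true_h)[:len(x)]
print(np.round(fit_compensation_filter(x, d, n_taps=4), 3))
```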

Stochastic System Reduction and Control via Component Cost Analysis (구성요소치 해석을 이용한 확률계의 축소와 제어)

  • Chae, Kyo-Soon;Lee, Dong-Hee;Park, Sung-Man;Yeo, Un-Kyung;Cho, Yun-Hyun;Heo, Hoon
    • Proceedings of the KSME Conference / 2007.05a / pp.921-926 / 2007
  • A dynamic system under random disturbance is considered in this study. To control the system efficiently, a proper reduction of the system dimension is indispensable at the design stage. A reduction method using component cost analysis in conjunction with stochastic analysis is proposed for the control of such a system. The system response is obtained in terms of dynamic moment equations via the Fokker-Planck-Kolmogorov (F-P-K) equation. The dynamic moment responses of the system under random disturbance are then reduced using a deterministic version of component cost analysis. The system reduced via the proposed "stochastic component cost analysis" is successfully implemented for the dynamic response and shows remarkable control performance, effectively utilizing a "stochastic controller" in the physical time domain.
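
The abstract does not give the component cost formulas; as a rough reminder of the deterministic version it builds on, one common textbook recipe ranks the states of a linear system driven by white noise by the diagonal of X(C'QC), where X solves a Lyapunov equation. The SciPy sketch below follows that recipe and is an assumption about the general technique, not the authors' stochastic extension.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def component_costs(A, D, C, W=None, Q=None):
    """Contribution of each state of dx = A x dt + D dw to the output cost E[y'Qy], y = C x.

    X solves A X + X A' + D W D' = 0 (the steady-state state covariance), and the
    i-th component cost is the i-th diagonal entry of X @ (C' Q C).
    """
    W = np.eye(D.shape[1]) if W is None else W
    Q = np.eye(C.shape[0]) if Q is None else Q
    X = solve_continuous_lyapunov(A, -D @ W @ D.T)
    return np.diag(X @ (C.T @ Q @ C))

# Rank the states; the lowest-cost states are candidates for removal.
A = np.array([[0.0, 1.0, 0.0], [-4.0, -0.4, 0.1], [0.0, 0.0, -10.0]])
D = np.array([[0.0], [1.0], [0.5]])
C = np.array([[1.0, 0.0, 0.0]])
print(np.argsort(component_costs(A, D, C))[::-1])   # state importance ranking
```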

Feature reduction for classifying high dimensional data sets using support vector machine (고차원 데이터의 분류를 위한 서포트 벡터 머신을 이용한 피처 감소 기법)

  • Ko, Seok-Ha;Lee, Hyun-Ju
    • Proceedings of the IEEK Conference / 2008.06a / pp.877-878 / 2008
  • We suggest a feature reduction method to classify mouse function data sets, which integrate several biological data sets represented as high-dimensional vectors. To increase classification accuracy and decrease computational overhead, it is important to reduce the dimension of the features. To do this, we employed a Hybrid Huberized Support Vector Machine with the kernels used for a kernel logistic regression method. Compared to a standard support vector machine, this approach shows better accuracy and yields useful features for each mouse function.
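
A hybrid huberized SVM is not available in mainstream Python libraries, so the sketch below substitutes scikit-learn's L1-penalized LinearSVC purely to illustrate the general pattern of SVM-driven feature reduction on high-dimensional vectors; the classifier, the synthetic data set, and the regularization strength are stand-ins, not the authors' setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

# Synthetic stand-in for a high-dimensional integrated biological feature matrix.
X, y = make_classification(n_samples=200, n_features=2000,
                           n_informative=20, random_state=0)

# The L1 penalty drives most SVM weights to zero; only features with
# non-zero weights are kept, reducing the dimension before final classification.
svm = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X, y)
X_reduced = SelectFromModel(svm, prefit=True).transform(X)
print(X.shape, "->", X_reduced.shape)
```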

Zeroth-Order Shear Deformation Micro-Mechanical Model for Periodic Heterogeneous Beam-like Structures

  • Lee, Chang-Yong
    • Journal of Power System Engineering / v.19 no.3 / pp.55-62 / 2015
  • This paper discusses a new model for investigating the micro-mechanical behavior of beam-like structures composed of various elastic moduli and complex geometries that vary through the cross-sectional directions and are periodically repeated along the axial direction. The original three-dimensional problem is first formulated in a unified and compact intrinsic form using the concept of decomposition of the rotation tensor. Taking advantage of the smallness of two parameters, the cross-sectional dimension-to-length ratio and the micro-to-macro heterogeneity, and performing homogenization along with dimensional reduction simultaneously, the variational asymptotic method is used to rigorously construct an effective zeroth-order beam model. The model is similar to a generalized Timoshenko (first-order shear deformation) model in that it captures the transverse shear deformations, but it still carries out the zeroth-order approximation, which maximizes simplicity and promotes efficiency. Two examples available in the literature are used to demonstrate the consistency and efficiency of this new model, especially for structures in which the effects of transverse shear deformation are significant.

User-Item Matrix Reduction Technique for Personalized Recommender Systems (개인화 된 추천시스템을 위한 사용자-상품 매트릭스 축약기법)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Information Technology Applications and Management / v.16 no.1 / pp.97-113 / 2009
  • Collaborative filtering (CF) has been a very successful approach for building recommender systems, but its widespread use has exposed some well-known problems, including the sparsity and scalability problems. To mitigate these problems, we propose two novel models that improve the typical CF algorithm, named ISCF (Item-Selected CF) and USCF (User-Selected CF). These modified versions of the conventional CF method, which condense the original dataset by reducing the item or user dimension of the user-item matrix, may improve the prediction accuracy as well as the efficiency of the conventional CF algorithm. As a tool to optimize the reduction of the user-item matrix, our study proposes genetic algorithms. We believe that our approach may relieve the sparsity and scalability problems. To validate the applicability of ISCF and USCF, we applied them to the MovieLens dataset. Experimental results showed that both the efficiency and the accuracy were enhanced by our proposed models.
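
As a rough sketch of the idea of condensing the user-item matrix before collaborative filtering, the snippet below evolves a binary item-selection mask with a very small genetic algorithm and scores each mask by how well user-based CF on the reduced matrix predicts held-out ratings; the toy rating matrix, the fitness function, and the GA parameters are illustrative assumptions, not the ISCF/USCF design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse user-item rating matrix (0 means unrated); a stand-in for MovieLens.
R = rng.integers(1, 6, size=(50, 200)).astype(float)
R[rng.random(R.shape) < 0.8] = 0.0
test = np.argwhere(R > 0)[rng.choice(int((R > 0).sum()), size=100, replace=False)]
R_train = R.copy()
R_train[test[:, 0], test[:, 1]] = 0.0           # hide the held-out ratings

def predict(mask, u, i):
    """User-based CF prediction of R[u, i] using only the items selected by mask."""
    sub = R_train[:, mask]                      # condensed user-item matrix
    norms = np.linalg.norm(sub, axis=1) + 1e-9
    sims = sub @ sub[u] / (norms * norms[u])    # cosine similarity to user u
    rated = R_train[:, i] > 0
    rated[u] = False
    if not rated.any():
        return R_train[R_train > 0].mean()
    return float(sims[rated] @ R_train[rated, i] / (np.abs(sims[rated]).sum() + 1e-9))

def fitness(mask):
    """Negative mean absolute error on the held-out ratings: higher is better."""
    return -np.mean([abs(predict(mask, u, i) - R[u, i]) for u, i in test])

# A very small genetic algorithm over binary item-selection masks.
pop = rng.random((20, R.shape[1])) < 0.5
for _ in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                          # keep the better half
    moms = parents[rng.integers(0, 10, size=10)]
    dads = parents[rng.integers(0, 10, size=10)]
    children = np.where(rng.random(moms.shape) < 0.5, moms, dads)    # uniform crossover
    flip = rng.random(children.shape) < 0.02                         # mutation
    children = np.where(flip, ~children, children)
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected", int(best.sum()), "of", R.shape[1], "items")
```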

A Novel Approach of Feature Extraction for Analog Circuit Fault Diagnosis Based on WPD-LLE-CSA

  • Wang, Yuehai;Ma, Yuying;Cui, Shiming;Yan, Yongzheng
    • Journal of Electrical Engineering and Technology / v.13 no.6 / pp.2485-2492 / 2018
  • The rapid development of large-scale integrated circuits has brought great challenges to circuit testing and diagnosis, and due to the lack of exact fault models, inaccurate tolerances of analog components, and various nonlinear factors, analog circuit fault diagnosis is still regarded as an extremely difficult problem. To cope with the difficulty of effectively extracting fault features from masses of original data in the output signal of a nonlinear continuous analog circuit, a novel approach to feature extraction and dimension reduction for analog circuit fault diagnosis based on wavelet packet decomposition, the locally linear embedding algorithm, and the clone selection algorithm (WPD-LLE-CSA) is proposed. The proposed method can identify faulty components in complicated analog circuits with an accuracy above 99%. Compared with existing feature extraction methods, the proposed method significantly reduces the number of features in less time while maintaining a high diagnosis rate; the ratio of dimensionality reduction is also discussed. Several groups of experiments are conducted to demonstrate the efficiency of the proposed method.
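
A compressed illustration of the first two stages named in the abstract (wavelet packet energy features followed by locally linear embedding), assuming PyWavelets and scikit-learn are available; the clone selection optimization that the paper couples to these stages is omitted, and the synthetic signals are placeholders for sampled circuit responses.

```python
import numpy as np
import pywt
from sklearn.manifold import LocallyLinearEmbedding

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal wavelet-packet node as a raw feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(np.square(node.data))
                     for node in wp.get_level(level, order="freq")])

# Placeholder for sampled output responses of a circuit under different fault states.
rng = np.random.default_rng(0)
signals = rng.normal(size=(60, 512))
raw = np.array([wpd_energy_features(s) for s in signals])   # 60 x 2**level energies

# Nonlinear dimension reduction of the wavelet-packet features with LLE.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3, random_state=0)
features = lle.fit_transform(raw)
print(features.shape)    # (60, 3) low-dimensional fault features
```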

An effective filtering for noise smoothing using the area information of 3D mesh (3차원 메쉬의 면적 정보를 이용한 효과적인 잡음 제거)

  • Hyeon, Dae-Hwan;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.2 s.314 / pp.55-62 / 2007
  • This paper proposes a method for obtaining refined 3D data by removing the noise introduced by errors that occur during 3D reconstruction through camera auto-calibration. When 3D data are reconstructed with previous noise-removal methods, meshes with large areas cause problems due to noise. Because the mesh area is important, the proposed algorithm requires a preprocessing step that removes unnecessary triangle meshes from the acquired 3D data. This work analyzes the characteristics of the noise using the area information of the 3D meshes, separates peak noise and Gaussian noise according to their characteristics, and removes the noise effectively. We give a quantitative evaluation of the proposed preprocessing filter and compare it with mesh smoothing procedures, demonstrating that our preprocessing filter outperforms them in terms of accuracy and resistance to over-smoothing.
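
The paper's filter is not reproduced here, but the quantity it relies on, the per-triangle area of a reconstructed mesh, is straightforward to compute; the sketch below flags triangles whose area is far above the median as removal candidates, with the threshold factor being an illustrative assumption.

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Area of each triangle: 0.5 * |(v1 - v0) x (v2 - v0)|."""
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)

def large_area_outliers(vertices, faces, factor=5.0):
    """Indices of triangles whose area exceeds `factor` times the median area."""
    areas = triangle_areas(vertices, faces)
    return np.flatnonzero(areas > factor * np.median(areas))

# Tiny example mesh: two regular triangles and one stretched, noisy one.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [10, 10, 5]], float)
faces = np.array([[0, 1, 2], [1, 3, 2], [1, 4, 3]])
print(large_area_outliers(verts, faces))   # -> [2], the oversized triangle
```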

Probabilistic penalized principal component analysis

  • Park, Chongsun;Wang, Morgan C.;Mo, Eun Bi
    • Communications for Statistical Applications and Methods / v.24 no.2 / pp.143-154 / 2017
  • A variable selection method based on probabilistic principal component analysis (PCA) using the penalized likelihood method is proposed. The proposed method is a two-step variable reduction method. The first step uses the probabilistic principal component idea to identify principal components, and a penalty function is used to identify the important variables in each component. We then build a model on the original data space instead of on the rotated data space spanned by the latent variables (principal components), because the proposed method achieves dimension reduction by identifying important observed variables. Consequently, the proposed method is of more practical use. The proposed estimators perform like the oracle procedure and are root-n consistent with a proper choice of regularization parameters. The proposed method can be successfully applied to high-dimensional PCA problems in which a relatively large portion of irrelevant variables is included in the data set. It is straightforward to extend our likelihood method to handle problems with missing observations using EM algorithms, so it can also be applied effectively when some data vectors exhibit one or more values missing at random.
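
The penalized-likelihood estimator itself is not available off the shelf; as a loosely related illustration of reading important variables off penalized component loadings, the sketch below uses scikit-learn's SparsePCA, which is a different estimator serving the same "which observed variables matter in each component" purpose, on synthetic data where only the first six variables carry signal.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
n, p, k = 300, 50, 2
Z = rng.normal(size=(n, k))                  # latent components
W = np.zeros((p, k))
W[:3, 0] = W[3:6, 1] = 2.0                   # only variables 0-5 are relevant
X = Z @ W.T + rng.normal(size=(n, p))        # the other 44 variables are pure noise

spca = SparsePCA(n_components=k, alpha=2.0, random_state=0).fit(X)
for j, comp in enumerate(spca.components_):
    important = np.flatnonzero(np.abs(comp) > 1e-8)   # non-zero penalized loadings
    print(f"component {j}: important variables {important.tolist()}")
```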

Multivariate pHd analysis (다변량 pHd 분석)

  • 이용구
    • The Korean Journal of Applied Statistics / v.8 no.1 / pp.61-74 / 1995
  • Many kinds of graphical methods have been developed in recent years, making it possible to obtain information directly from data. In particular, R-code (Cook and Weisberg, 1994) makes it possible to draw various kinds of two- and three-dimensional plots and to rotate the axes of the plots. However, the maximum dimension of a plot is three, so we cannot draw a plot of one response variable against more than three explanatory variables. Li (1991, 1992) developed a method to reduce the dimension of the explanatory variables so that lower-dimensional plots can be drawn that convey information about the full set of explanatory variables. One of the dimension reduction methods developed by Li is pHd. In this paper, we apply the pHd method to models with a multivariate response.
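
For reference, a compact NumPy version of the univariate (response-based) pHd step described by Li might look like the following; the multivariate-response extension studied in this paper is not reproduced, and the number of directions returned is left to the caller.

```python
import numpy as np

def phd_directions(X, y, n_directions=2):
    """Principal Hessian directions for a univariate response y.

    Standardize the predictors, form the y-weighted second-moment matrix
    M = (1/n) * sum_i (y_i - ybar) * z_i z_i', and back-transform its leading
    eigenvectors (ranked by absolute eigenvalue) to the original scale.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T     # Sigma^{-1/2}
    Z = Xc @ inv_sqrt
    M = (Z * (y - y.mean())[:, None]).T @ Z / len(y)
    eigvals, eigvecs = np.linalg.eigh(M)
    order = np.argsort(np.abs(eigvals))[::-1]
    return inv_sqrt @ eigvecs[:, order[:n_directions]]

# Example: y depends on X only through the single direction (1, 1, 0, 0, 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1]) ** 2 + 0.1 * rng.normal(size=500)
print(phd_directions(X, y, n_directions=1).ravel())      # roughly proportional to (1, 1, 0, 0, 0)
```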
