• Title/Summary/Keyword: high dimensionality

Impact of Instance Selection on kNN-Based Text Categorization

  • Barigou, Fatiha
    • Journal of Information Processing Systems
    • /
    • v.14 no.2
    • /
    • pp.418-434
    • /
    • 2018
  • With the increasing use of the Internet and electronic documents, automatic text categorization becomes imperative. Several machine learning algorithms have been proposed for text categorization. The k-nearest neighbor algorithm (kNN) is known to be one of the best state-of-the-art classifiers for text categorization. However, kNN suffers from limitations such as a high computational cost when classifying new instances. Instance selection techniques have emerged as highly competitive methods for improving kNN through data reduction. However, previous works have evaluated those approaches only on structured datasets, and their performance has not been examined in the text categorization domain, where the dimensionality and size of the dataset are very high. Motivated by these observations, this paper investigates and analyzes the impact of instance selection on kNN-based text categorization in terms of classification accuracy, classification efficiency, and data reduction.
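
As a concrete illustration of the combination studied above, here is a minimal sketch pairing kNN with Hart's condensed nearest neighbor rule, one classic instance selection technique. The paper evaluates several techniques; this particular pairing, the toy data, and the parameter choices are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def condensed_nearest_neighbor(X, y):
    """Hart's CNN: keep only instances needed for correct 1-NN classification."""
    keep = [0]  # seed the condensed set with the first instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
            if nn.predict(X[i:i + 1])[0] != y[i]:  # misclassified -> keep it
                keep.append(i)
                changed = True
    return np.array(keep)

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

idx = condensed_nearest_neighbor(X_tr, y_tr)
knn_full = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
knn_cnn = KNeighborsClassifier(n_neighbors=5).fit(X_tr[idx], y_tr[idx])
print(f"kept {len(idx)}/{len(X_tr)} training instances")
print("full kNN accuracy:     ", knn_full.score(X_te, y_te))
print("condensed kNN accuracy:", knn_cnn.score(X_te, y_te))
```

The trade-off the paper measures is visible here: the condensed classifier answers queries against a much smaller training set, at a possible cost in accuracy.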

Theoretical-Numerical Modeling of High-Frequency Combustion Instabilities with Linear Waves (선형 고주파 연소불안정의 이론-수치적 예측)

  • Lee, G.Y.;Yoon, W.S.
    • 한국연소학회:학술대회논문집
    • /
    • 2001.11a
    • /
    • pp.125-135
    • /
    • 2001
  • Aiming at a direct, and more realistic, prediction of unstable waves evolving in the combustion chamber, this paper introduces a new analytical method. Instability equations are newly formulated, and the time-integrated ODEs for the amplification factors are solved to find the transients of the pressure and velocity fluctuations. The present numerical approach requires no separate treatment of nonlinearities. Preliminary numerical experiments on unstable waves in a quasi-one-dimensional rocket combustor show the validity and applicability of the present model and its promise for practical use. Study of more complex physical models, especially velocity- and pressure-coupled responses, and the inclusion of multidimensionality remain as future tasks.
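
The approach of time-integrating ODEs for amplification factors can be illustrated with a deliberately generic sketch: a single acoustic mode whose amplitude A(t) obeys dA/dt = αA, with the growth rate α standing in for the net balance of combustion driving and damping. The equation form, frequency, and growth rate below are placeholder assumptions, not the paper's instability equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative linear-wave model: one acoustic mode p'(t) = A(t) * cos(omega * t),
# with the amplification factor A governed by dA/dt = alpha * A.
alpha = 12.0                 # net growth rate [1/s] (placeholder value)
omega = 2 * np.pi * 1500.0   # mode frequency [rad/s] (placeholder value)

sol = solve_ivp(lambda t, A: alpha * A, t_span=(0.0, 0.05), y0=[1e-3],
                dense_output=True)

t = np.linspace(0.0, 0.05, 1000)
A = sol.sol(t)[0]
p_fluct = A * np.cos(omega * t)  # transient pressure fluctuation
print(f"amplitude grew from {A[0]:.2e} to {A[-1]:.2e} (unstable since alpha > 0)")
```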

Quantitative Analysis for Plasma Etch Modeling Using Optical Emission Spectroscopy: Prediction of Plasma Etch Responses

  • Jeong, Young-Seon;Hwang, Sangheum;Ko, Young-Don
    • Industrial Engineering and Management Systems
    • /
    • v.14 no.4
    • /
    • pp.392-400
    • /
    • 2015
  • Monitoring of plasma etch processes for fault detection is one of the hallmark procedures in semiconductor manufacturing. Optical emission spectroscopy (OES) has been considered a gold standard for modeling plasma etch processes for on-line diagnosis and monitoring. However, statistical quantitative methods for processing OES data are still lacking, and there is an urgent need for such methods to deal with high-dimensional OES data and improve the quality of etched wafers. We therefore propose a robust relevance vector machine (RRVM) for regression with statistical quantitative features to model etch rate and uniformity in plasma etch processes using OES data. To deal effectively with the complexity of OES data, we identify seven statistical features to extract from the raw OES data, reducing its dimensionality. The experimental results demonstrate that the proposed approach is well suited for high-accuracy monitoring of plasma etch responses obtained from OES.
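
The abstract does not list its seven statistical features, so the sketch below uses a hypothetical set of per-spectrum summary statistics, and a Bayesian linear model as a stand-in for the proposed RRVM (which is not available in common libraries). Data, features, and the regressor are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.linear_model import BayesianRidge

def spectrum_features(spectrum):
    """Reduce one high-dimensional OES spectrum to 7 summary statistics.
    This specific feature set is a hypothetical example."""
    return np.array([spectrum.mean(), spectrum.std(), skew(spectrum),
                     kurtosis(spectrum), spectrum.min(), spectrum.max(),
                     spectrum.sum()])

rng = np.random.default_rng(0)
spectra = rng.random((100, 2048))  # 100 runs x 2048 wavelength channels (synthetic)
etch_rate = spectra.mean(axis=1) * 10 + rng.normal(0, 0.1, 100)  # synthetic response

X = np.array([spectrum_features(s) for s in spectra])  # 2048 dims -> 7 dims
model = BayesianRidge().fit(X, etch_rate)              # stand-in for the RRVM
print("R^2 on training data:", model.score(X, etch_rate))
```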

Fast Pedestrian Detection Using Histogram of Oriented Gradients and Principal Components Analysis

  • Nguyen, Trung Quy;Kim, Soo Hyung;Na, In Seop
    • International Journal of Contents
    • /
    • v.9 no.3
    • /
    • pp.1-9
    • /
    • 2013
  • In this paper, we propose a fast and accurate system for detecting pedestrians in a static image. The histogram of oriented gradients (HOG) is a well-known feature for pedestrian detection systems, but extracting HOG is expensive because of its high-dimensional vector, causing long processing times and large memory consumption when a pedestrian detection system is applied to high-resolution images or video. To deal with this problem, we use principal component analysis (PCA) to reduce the dimensionality of HOG. The output of PCA is the input to a linear SVM classifier for learning and testing. The experimental results show that our proposed method reduces processing time while maintaining a similar detection rate: it is twenty-five times faster than the original HOG feature.
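
A minimal sketch of the pipeline described above (HOG features, PCA compression, linear SVM), using scikit-image and scikit-learn. The window size, HOG parameters, number of retained components, and random stand-in data are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Stand-in data: 200 grayscale 128x64 detection windows with binary labels.
windows = rng.random((200, 128, 64))
labels = rng.integers(0, 2, 200)

# Extract the high-dimensional HOG descriptor for each window.
X = np.array([hog(w, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for w in windows])
print("raw HOG dimensionality:", X.shape[1])

# Compress HOG with PCA, then train a linear SVM on the reduced vectors.
clf = make_pipeline(PCA(n_components=50), LinearSVC())
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

The speedup comes from classifying 50-dimensional vectors instead of the several-thousand-dimensional raw HOG descriptor.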

Negative binomial loglinear mixed models with general random effects covariance matrix

  • Sung, Youkyung;Lee, Keunbaik
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.1
    • /
    • pp.61-70
    • /
    • 2018
  • Modeling of the random effects covariance matrix in generalized linear mixed models (GLMMs) is an issue in the analysis of longitudinal categorical data because the covariance matrix can be high-dimensional and its estimate must satisfy positive-definiteness. To satisfy these constraints, we consider the autoregressive and moving average Cholesky decomposition (ARMACD) to model the covariance matrix. The ARMACD creates a more flexible decomposition of the covariance matrix that provides generalized autoregressive parameters, generalized moving average parameters, and innovation variances. In this paper, we analyze longitudinal count data with overdispersion using GLMMs. We propose negative binomial loglinear mixed models to analyze longitudinal count data, and we also present modeling of the random effects covariance matrix using the ARMACD. Epilepsy data are analyzed using our proposed model.
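
The core trick the abstract relies on, parameterizing a covariance matrix through a Cholesky-type decomposition so that any unconstrained parameter values yield a valid positive-definite matrix, can be sketched generically as follows. This shows only the basic device, not the full ARMACD with its autoregressive and moving-average structure.

```python
import numpy as np

def covariance_from_params(theta, dim):
    """Build a positive-definite covariance from unconstrained parameters:
    the Cholesky factor's diagonal is made positive via exp, so L @ L.T
    is guaranteed positive-definite for any real-valued theta."""
    L = np.zeros((dim, dim))
    L[np.diag_indices(dim)] = np.exp(theta[:dim])        # positive diagonal
    L[np.tril_indices(dim, k=-1)] = theta[dim:]          # free lower triangle
    return L @ L.T

dim = 4
n_params = dim + dim * (dim - 1) // 2
theta = np.random.default_rng(0).normal(size=n_params)   # arbitrary parameters
Sigma = covariance_from_params(theta, dim)
print("eigenvalues (all positive):", np.linalg.eigvalsh(Sigma))
```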

Efficient estimation and variable selection for partially linear single-index-coefficient regression models

  • Kim, Young-Ju
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.1
    • /
    • pp.69-78
    • /
    • 2019
  • A structured model with both a single index and varying coefficients is a powerful tool for modeling high-dimensional data. It has been widely used because the single index can overcome the curse of dimensionality and the varying coefficients allow nonlinear interaction effects in the model. For high-dimensional index vectors, variable selection becomes an important question in the model-building process. In this paper, we propose an efficient estimation and variable selection method based on a smoothing spline approach in a partially linear single-index-coefficient regression model. We also propose an efficient algorithm for simultaneously estimating the coefficient functions in a data-adaptive lower-dimensional approximation space and selecting significant variables in the index with the adaptive LASSO penalty. The empirical performance of the proposed method is illustrated with simulated and real data examples.
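
The adaptive LASSO step can be sketched in isolation: weight each coefficient's penalty by the inverse of an initial consistent estimate, which with scikit-learn's plain Lasso amounts to rescaling the columns before the fit. This is a generic sketch of adaptive LASSO on a linear model, not the paper's full single-index-coefficient estimation; the data and penalty level are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Step 1: an initial estimate (OLS here) gives adaptive weights w_j = 1/|beta_j|.
beta_init = LinearRegression().fit(X, y).coef_
weights = 1.0 / (np.abs(beta_init) + 1e-8)

# Step 2: Lasso on rescaled columns X_j / w_j is equivalent to the
# coefficient-specific penalty  lambda * w_j * |beta_j|.
lasso = Lasso(alpha=0.05).fit(X / weights, y)
beta_hat = lasso.coef_ / weights  # undo the rescaling
print("selected variables:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```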

High-resolution 1H NMR Spectroscopy of Green and Black Teas

  • Jeong, Ji-Ho;Jang, Hyun-Jun;Kim, Yongae
    • Journal of the Korean Chemical Society
    • /
    • v.63 no.2
    • /
    • pp.78-84
    • /
    • 2019
  • High-resolution ¹H NMR spectroscopy has been widely used as one of the most powerful analytical tools in food chemistry as well as for determining molecular structure. ¹H NMR spectra-based metabolomics has focused on the classification and chemometric analysis of complex mixtures. Principal component analysis (PCA), an unsupervised clustering method used to reduce the dimensionality of multivariate data, facilitates direct peak quantitation and pattern recognition. Using a combination of these techniques, various brewed green and black teas were investigated via metabolite profiling. The green and black teas were characterized by leaf size and country of cultivation, respectively.
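
A minimal sketch of the PCA step described above, using synthetic stand-ins for binned ¹H NMR spectra; real work would start from preprocessed (baseline-corrected, aligned, normalized) spectra, and the group structure here is invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in data: 20 tea samples x 500 spectral bins, two loose groups
# mimicking distinct metabolite profiles (e.g., green vs. black tea).
green = rng.normal(0.0, 1.0, (10, 500)) + np.linspace(0, 1, 500)
black = rng.normal(0.0, 1.0, (10, 500)) + np.linspace(1, 0, 500)
spectra = np.vstack([green, black])

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)  # each 500-bin spectrum -> 2 coordinates
print("explained variance ratio:", pca.explained_variance_ratio_)
print("group means on PC1:", scores[:10, 0].mean(), scores[10:, 0].mean())
```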

On Optimizing LDA-extensions Using a Pre-Clustering (사전 클러스터링을 이용한 LDA-확장법들의 최적화)

  • Kim, Sang-Woon;Koo, Byum-Yong;Choi, Woo-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.3
    • /
    • pp.98-107
    • /
    • 2007
  • For high-dimensional pattern recognition tasks such as face classification, a small number of training samples leads to the small sample size problem, in which the number of pattern samples is smaller than the dimensionality. Recently, various extensions of LDA, including PCA+LDA and Direct-LDA, have been developed to address this problem. This paper proposes a method of improving classification efficiency by increasing the number of (sub-)classes through pre-clustering the training set prior to executing Direct-LDA. In LDA (or Direct-LDA), the number of classes in the training set limits the dimensionality to which the data can be reduced, so this number is increased to the number of sub-classes obtained through clustering, allowing the classification performance of LDA-extensions to be improved. In other words, the eigenspace of the training set consists of the range space and the null space, and the dimensionality of the range space increases with the number of classes. Therefore, by minimizing the null space when constructing the transformation matrix, the loss of discriminative information resulting from this space can be minimized. Experimental results on artificial X-OR data as well as the benchmark face databases AT&T and Yale demonstrate that the proposed method improves classification efficiency.
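
A minimal sketch of the pre-clustering idea, assuming k-means to split each class into sub-classes before a standard (not Direct-) LDA fit. The point illustrated is that the sub-class labels raise the class count and hence the number of discriminant dimensions LDA can retain; the data and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Pre-clustering: split each of the 3 classes into 2 sub-classes.
sub_labels = np.empty_like(y)
next_label = 0
for c in np.unique(y):
    mask = y == c
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[mask])
    sub_labels[mask] = km.labels_ + next_label
    next_label += 2

# LDA can reduce to at most (number of classes - 1) dimensions:
# 2 with the original labels, 5 with the sub-class labels.
lda_orig = LinearDiscriminantAnalysis().fit(X, y)
lda_sub = LinearDiscriminantAnalysis().fit(X, sub_labels)
print("dims with original labels: ", lda_orig.transform(X).shape[1])
print("dims with sub-class labels:", lda_sub.transform(X).shape[1])
```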

An extension of multifactor dimensionality reduction method for detecting gene-gene interactions with the survival time (생존시간과 연관된 유전자 간의 교호작용에 관한 다중차원축소방법의 확장)

  • Oh, Jin Seok;Lee, Seung Yeoun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.5
    • /
    • pp.1057-1067
    • /
    • 2014
  • Many genetic variants have been identified as being associated with complex diseases such as hypertension, diabetes, and cancer through genome-wide association studies (GWAS). However, a serious missing-heritability problem remains, since the proportion of heritability explained by the genetic variants from GWAS is very weak, less than 10-15%. Gene-gene interaction studies may help explain the missing heritability, because most complex disease mechanisms involve more than a single SNP, including multiple SNPs or gene-gene interactions. This paper focuses on gene-gene interactions with a survival phenotype by extending the multifactor dimensionality reduction (MDR) method to the accelerated failure time (AFT) model. The standardized residual from the AFT model is used as a residual score for classifying multi-locus genotypes into high- and low-risk groups, and the MDR algorithm is then applied. We call this method AFT-MDR, and we compare its power with those of Surv-MDR and Cox-MDR in simulation studies. A real dataset of Korean leukemia patients is also analyzed. We found that the power of AFT-MDR is greater than that of Surv-MDR and comparable with that of Cox-MDR, but very sensitive to the censoring fraction.
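
The classification step of AFT-MDR can be sketched given precomputed standardized AFT residuals: pool the samples in each two-SNP genotype cell and label the cell high-risk when its mean residual falls below zero (shorter survival than the model predicts). The residuals and genotypes below are synthetic, and the zero-threshold risk rule is a simplified reading of the abstract, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
snp1 = rng.integers(0, 3, n)   # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, n)
resid = rng.normal(size=n)     # stand-in for standardized AFT residuals

# MDR step: label each of the 9 genotype cells high- or low-risk by
# comparing the cell's mean residual to zero (negative mean = shorter
# survival than the AFT model predicts = high risk, in this sketch).
high_risk = np.zeros((3, 3), dtype=bool)
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        if cell.any():
            high_risk[g1, g2] = resid[cell].mean() < 0

# Collapse the 9-cell table into one binary attribute per sample,
# reducing the two-SNP factor to a single dimension.
sample_risk = high_risk[snp1, snp2]
print("high-risk cells:\n", high_risk)
print("samples in high-risk group:", int(sample_risk.sum()))
```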

A Comparative Experiment on Dimensional Reduction Methods Applicable for Dissimilarity-Based Classifications (비유사도-기반 분류를 위한 차원 축소방법의 비교 실험)

  • Kim, Sang-Woon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.59-66
    • /
    • 2016
  • This paper presents an empirical evaluation of dimensionality reduction strategies by which dissimilarity-based classification (DBC) can be implemented efficiently. In DBC, classification is based not on feature measurements of individual objects (a set of attributes) but rather on a suitable dissimilarity measure among the individual objects (pair-wise object comparisons). One problem of DBC is the high dimensionality of the dissimilarity space when many objects are treated. To address this issue, two kinds of solutions have been proposed in the literature: prototype selection (PS)-based methods and dimension reduction (DR)-based methods. In this paper, instead of utilizing the PS-based or DR-based methods, a way of performing DBC in eigenspaces (ES) is considered and empirically compared. In ES-based DBC, classification is performed as follows: first, a set of principal eigenvectors is extracted from the training data set using principal component analysis; second, an eigenspace is spanned using a subset of the extracted and selected eigenvectors; third, after measuring distances among the projected objects in the eigenspace using ℓp-norms as the dissimilarity, classification is performed. The experimental results, obtained using the nearest neighbor rule with artificial and real-life benchmark data sets, demonstrate that when the dimensionality of the eigenspace is selected appropriately, the performance of ES-based DBC can be improved in terms of classification accuracy compared to the PS-based and DR-based methods.
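
A minimal sketch of the three ES-based DBC steps listed above: PCA to get the principal eigenvectors, projection into the eigenspace, and nearest-neighbor classification under an ℓp (Minkowski) dissimilarity. The dataset, the number of retained eigenvectors, and the choice p = 1 are assumptions for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Steps 1-2: extract principal eigenvectors and project into the eigenspace.
pca = PCA(n_components=20).fit(X_tr)   # the eigenspace dimensionality is a choice
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Step 3: nearest neighbor rule with an lp (Minkowski) dissimilarity, p = 1.
nn = KNeighborsClassifier(n_neighbors=1, metric="minkowski", p=1).fit(Z_tr, y_tr)
print("ES-based DBC accuracy:", nn.score(Z_te, y_te))
```

As the abstract notes, the choice of eigenspace dimensionality (here n_components) is what governs whether this outperforms PS- or DR-based alternatives.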