• Title/Summary/Keyword: Subset selection

Search Results: 203

A Novel Image Classification Method for Content-based Image Retrieval via a Hybrid Genetic Algorithm and Support Vector Machine Approach

  • Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology
    • /
    • v.10 no.3
    • /
    • pp.75-81
    • /
    • 2011
  • This paper presents a novel method for image classification based on a hybrid genetic algorithm (GA) and support vector machine (SVM) approach, which can significantly improve classification performance for content-based image retrieval (CBIR). Although SVM has been widely applied to CBIR, it has problems such as kernel parameter setting and feature subset selection, both of which affect classification accuracy during learning. This study aims at simultaneously optimizing the SVM parameters and the feature subset, without degrading the classification accuracy of SVM, using a GA for CBIR. Using the hybrid GA and SVM model, we can classify more images in the database effectively. Experiments were carried out on a large image database, and the results show that the classification accuracy of conventional SVM can be improved significantly by the proposed model. We also found that the proposed model outperformed the other models, such as neural network and typical SVM models.
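A minimal sketch of this kind of joint optimization is shown below: a GA chromosome concatenates the kernel parameters (C, gamma) and a binary feature mask, and cross-validated SVM accuracy serves as fitness. The dataset (load_wine), the encoding ranges, and the GA operators are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: joint GA optimization of SVM (C, gamma) and a feature mask (hypothetical encoding).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)
n_feat = X.shape[1]

def decode(chrom):
    C = 10 ** chrom[0]          # gene 0 in [-2, 3]  ->  C in [1e-2, 1e3]
    gamma = 10 ** chrom[1]      # gene 1 in [-4, 0]  ->  gamma in [1e-4, 1]
    mask = chrom[2:] > 0.5      # remaining genes act as a binary feature mask
    return C, gamma, mask

def fitness(chrom):
    C, gamma, mask = decode(chrom)
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

pop = np.column_stack([rng.uniform(-2, 3, 20),
                       rng.uniform(-4, 0, 20),
                       rng.random((20, n_feat))])
for _ in range(10):                                     # a few generations, for illustration only
    scores = np.array([fitness(c) for c in pop])
    elite = pop[np.argsort(scores)[-10:]]               # truncation selection
    kids = elite[rng.integers(0, 10, 10)] + rng.normal(0, 0.1, (10, 2 + n_feat))
    pop = np.vstack([elite, kids])                      # elitism + Gaussian-mutated offspring

best = max(pop, key=fitness)
print("best cross-validated accuracy:", round(fitness(best), 3))
```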

Feature Selection via Embedded Learning Based on Tangent Space Alignment for Microarray Data

  • Ye, Xiucai;Sakurai, Tetsuya
    • Journal of Computing Science and Engineering
    • /
    • v.11 no.4
    • /
    • pp.121-129
    • /
    • 2017
  • Feature selection has been widely established as an efficient technique for microarray data analysis. Feature selection aims to search for the most important feature/gene subset of a given dataset according to its relevance to the current target. Unsupervised feature selection is considered to be challenging due to the lack of label information. In this paper, we propose a novel method for unsupervised feature selection, which incorporates embedded learning and $l_{2,1}$-norm sparse regression into a framework to select genes in microarray data analysis. Local tangent space alignment is applied during embedded learning to preserve the local data structure. The $l_{2,1}$-norm sparse regression acts as a constraint that helps learn the gene weights correlatively, by which the proposed method selects the informative genes that better capture the interesting natural classes of samples. We provide an effective algorithm to solve the optimization problem in our method. Finally, to validate the efficacy of the proposed method, we evaluate it on real microarray gene expression datasets. The experimental results demonstrate that the proposed method achieves quite promising performance.
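As a rough illustration of the sparse-regression component, the sketch below fits a weight matrix W under an $l_{2,1}$ penalty by iteratively reweighted least squares and ranks genes by the row norms of W. The target Y is a random placeholder standing in for the LTSA embedding, and the problem sizes and regularization strength are assumptions, not values from the paper.

```python
# Sketch of l2,1-regularized regression, min ||XW - Y||_F^2 + lam * ||W||_{2,1},
# solved by iteratively reweighted least squares; genes scored by row norms of W.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))        # samples x genes (placeholder data)
Y = rng.standard_normal((100, 3))         # low-dimensional embedding (stand-in for LTSA)
lam, eps = 1.0, 1e-8

W = np.zeros((50, 3))
for _ in range(30):
    # D_ii = 1 / (2 * ||w_i||_2), the standard reweighting for the l2,1 penalty
    d = 1.0 / (2.0 * np.maximum(np.linalg.norm(W, axis=1), eps))
    W = np.linalg.solve(X.T @ X + lam * np.diag(d), X.T @ Y)

scores = np.linalg.norm(W, axis=1)        # row norms act as feature importance
top_genes = np.argsort(scores)[::-1][:10]
print("top-10 gene indices:", top_genes)
```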

Transmit Antenna Selection for Dual Polarized Channel Using Singular Value Decision

  • Lee, Sang-yub;Mun, Cheol;Yook, Jong-gwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9A
    • /
    • pp.788-794
    • /
    • 2005
  • In this paper, we focus on the potential of dual-polarized antennas in mobile systems. We design an exact dual-polarized channel with the Spatial Channel Model (SCM) and investigate its performance for given environments. Using the proposed channel model, we estimate channel capacity as a function of cross-polarization discrimination (XPD) and spatial fading correlation. Importantly, the MIMO channel matrix can be decomposed as a Kronecker product of spatial and polarized channel components. Based on these channel characteristics, we propose an algorithm that adapts the transmit antenna configuration to time-varying propagation environments. The optimal active transmit antenna subset is determined with equal power allocated to the active antennas, assuming no feedback information on the types of the selected antennas. We first consider a heuristic decision strategy in which the optimal active transmit antenna subset and its system capacity are determined so that the transmission data rate is maximized over all possible configurations. We then propose a singular value decision procedure that exploits the Kronecker product of the spatial and polarized channels: the channel environment is first characterized using the singular values of the spatial channel part, which depends on environment parameters, antenna spacing, and the level of correlation; after the spatial channel structure is decided, antennas with suitable polarization types are selected from the candidate cases. The proposed algorithm and the analysis of the dual-polarized channel using the SCM optimize channel capacity and reduce the number of candidate transmit antenna configurations compared to the heuristic method, which must consider about 100 cases.
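The sketch below illustrates only the baseline subset-selection step: with equal power over active antennas, it exhaustively scores every size-k transmit antenna subset by the log-det capacity and keeps the best one. The i.i.d. channel draw, SNR, and array sizes are placeholders; the paper's SCM-based, Kronecker-structured channel and the singular-value shortcut are not reproduced here.

```python
# Sketch: exhaustive transmit antenna subset selection with equal power allocation.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
Nr, Nt, k, snr = 4, 6, 3, 10.0            # receive/transmit antennas, subset size, linear SNR
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

def capacity(Hs):
    # C = log2 det(I + (SNR / n_active) * Hs Hs^H), equal power over the active antennas
    n = Hs.shape[1]
    M = np.eye(Nr) + (snr / n) * Hs @ Hs.conj().T
    return np.log2(np.real(np.linalg.det(M)))

# Exhaustive search over all size-k antenna subsets (the "heuristic decision" baseline)
best = max(combinations(range(Nt), k), key=lambda s: capacity(H[:, list(s)]))
print("best subset:", best, "capacity:", round(capacity(H[:, list(best)]), 3))
```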

Classification Performance Improvement of UNSW-NB15 Dataset Based on Feature Selection (특징선택 기법에 기반한 UNSW-NB15 데이터셋의 분류 성능 개선)

  • Lee, Dae-Bum;Seo, Jae-Hyun
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.5
    • /
    • pp.35-42
    • /
    • 2019
  • Recently, with the spread of the Internet and various wearable devices, Internet technology has made it easier to obtain information and conduct business. However, as the Internet is used in more areas, the number of exposed attack surfaces is growing, and attempts to invade networks for unfair advantage, such as cyber terrorism, are also increasing. In this paper, we propose a feature selection method to improve the performance of classifying abnormal behavior in network traffic. The UNSW-NB15 dataset suffers from a class imbalance problem, with some classes having relatively few instances compared to others, and an undersampling method is used to mitigate it. We use SVM, k-NN, and decision tree algorithms, and through training and validation we extract feature subsets with superior detection accuracy and RMSE. In the wrapper-based experiments, the selected subsets achieve recall above 98%, and DT_PSO showed the best performance.
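The sketch below captures the wrapper flow described above, assuming simple random undersampling of the majority class and a decision-tree scorer. Synthetic imbalanced data stands in for UNSW-NB15, and random candidate masks stand in for the PSO search.

```python
# Sketch: undersample the majority class, then score candidate feature subsets
# (wrapper style) with a decision tree using cross-validated recall.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=30, weights=[0.95, 0.05],
                           random_state=0)

# Random undersampling of the majority class down to the minority-class count
minority = np.flatnonzero(y == 1)
majority = rng.choice(np.flatnonzero(y == 0), size=minority.size, replace=False)
idx = np.concatenate([minority, majority])
Xb, yb = X[idx], y[idx]

def score_subset(mask):
    return cross_val_score(DecisionTreeClassifier(random_state=0),
                           Xb[:, mask], yb, cv=5, scoring="recall").mean()

# Evaluate a handful of random candidate subsets (stand-in for the PSO search)
candidates = [rng.random(30) > 0.5 for _ in range(20)]
best = max(candidates, key=score_subset)
print("features kept:", int(best.sum()), "recall:", round(score_subset(best), 3))
```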

Feature Subset Population Selection Considering Accuracy and Diversity in a GA-SVM Ensemble Model (GA-SVM Ensemble 모델에서의 accuracy와 diversity를 고려한 feature subset population 선택)

  • Seong, Gi-Seok;Jo, Seong-Jun
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2005.05a
    • /
    • pp.614-620
    • /
    • 2005
  • In an ensemble, feature selection increases diversity by giving each classifier a different set of variables to learn from, which generally improves performance. One method used for feature selection is the Genetic Algorithm (GA); GA-SVM, a GA-based wrapper feature selection mechanism, has shown good performance in building response models and keystroke-dynamics identity verification models. However, because GA-SVM cannot guarantee diversity among the candidates in the population, a heuristic parameter setting had to be introduced and tuned to balance the accuracy and diversity of the classifiers. Building on the GA-SVM algorithm, we studied a way to maintain performance while removing the additional classifier selection step by introducing a fitness function that considers both accuracy and diversity when evaluating the candidates in the population. As a result, we reduced the complexity of the algorithm while maintaining the model's performance.
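A minimal sketch of a fitness that mixes per-candidate accuracy with pairwise disagreement (one common diversity measure) is given below; the weighting alpha and the disagreement measure are illustrative assumptions rather than the authors' exact formulation.

```python
# Sketch: combined accuracy/diversity fitness for the members of a population.
import numpy as np

def fitness(preds, y_true, alpha=0.5):
    """preds: (n_members, n_samples) predictions of each candidate; y_true: (n_samples,)."""
    acc = (preds == y_true).mean(axis=1)                        # per-member accuracy
    # diversity of member i = mean disagreement with the other members
    disagree = (preds[:, None, :] != preds[None, :, :]).mean(axis=2)
    diversity = disagree.sum(axis=1) / (preds.shape[0] - 1)
    return alpha * acc + (1 - alpha) * diversity                # combined fitness per member

# Toy usage: three members' predictions on five samples
y = np.array([0, 1, 1, 0, 1])
P = np.array([[0, 1, 1, 0, 1],
              [0, 1, 0, 0, 1],
              [1, 0, 1, 1, 0]])
print(fitness(P, y))
```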


Variable selection in censored kernel regression

  • Choi, Kook-Lyeol;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.1
    • /
    • pp.201-209
    • /
    • 2013
  • For censored regression, it is often the case that some input variables are not important, while some input variables are more important than others. We propose a novel algorithm for selecting such important input variables for censored kernel regression, which is based on the penalized regression with the weighted quadratic loss function for the censored data, where the weight is computed from the empirical survival function of the censoring variable. We employ the weighted version of ANOVA decomposition kernels to choose optimal subset of important input variables. Experimental results are then presented which indicate the performance of the proposed variable selection method.
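One common way to construct such weights is an inverse-probability-of-censoring scheme, where the censoring survival function is estimated by a product-limit (Kaplan-Meier-type) estimator; the sketch below shows that construction on simulated data. It is an illustration of the general idea, not necessarily the exact weighting used in the paper.

```python
# Sketch: weights for censored observations from the empirical survival function
# of the censoring variable (inverse-probability-of-censoring style).
import numpy as np

def censoring_survival(t, delta):
    """Product-limit estimate of the censoring survival function G(t).
    t: observed times; delta: 1 if the event was observed, 0 if censored."""
    order = np.argsort(t)
    t_sorted, cens = t[order], 1 - delta[order]   # censoring plays the role of the "event"
    n = len(t)
    at_risk = n - np.arange(n)
    surv = np.cumprod(1 - cens / at_risk)
    return t_sorted, surv

rng = np.random.default_rng(0)
true_t = rng.exponential(2.0, 200)                # simulated event times
cens_t = rng.exponential(3.0, 200)                # simulated censoring times
t = np.minimum(true_t, cens_t)
delta = (true_t <= cens_t).astype(float)

ts, G = censoring_survival(t, delta)
Gt = np.interp(t, ts, G)                          # G evaluated at each observed time
weights = delta / np.maximum(Gt, 1e-8)            # only uncensored observations get weight
print("mean weight of uncensored points:", round(float(weights[delta == 1].mean()), 3))
```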

Performance Evaluation of a Feature-Importance-based Feature Selection Method for Time Series Prediction

  • Hyun, Ahn
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.1
    • /
    • pp.82-89
    • /
    • 2023
  • Various machine-learning models may yield high predictive power on massive time series in time series prediction. However, these models are prone to instability in terms of computational cost because of the high dimensionality of the feature space and nonoptimized hyperparameter settings. Considering the potential risk that model training with a high-dimensional feature set can be time-consuming, we evaluate a feature-importance-based feature selection method to derive a tradeoff between predictive power and computational cost for time series prediction. We used two machine learning techniques for performance evaluation to generate prediction models from a retail sales dataset. First, we ranked the features using impurity-based and Local Interpretable Model-agnostic Explanations (LIME)-based feature importance measures in the prediction models. Then, the recursive feature elimination method was applied to eliminate unimportant features sequentially. Consequently, we obtained a subset of features that could lead to reduced model training time while preserving acceptable model performance.
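The sketch below shows only the impurity-importance branch of this pipeline: a random forest ranks the features, and recursive feature elimination drops the least important ones step by step. A synthetic regression problem stands in for the retail sales data, and the LIME-based ranking is not reproduced here.

```python
# Sketch: impurity-based feature importance + recursive feature elimination.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

X, y = make_regression(n_samples=500, n_features=40, n_informative=8, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
selector = RFE(model, n_features_to_select=10, step=1)   # drop one feature per round
selector.fit(X, y)

kept = np.flatnonzero(selector.support_)                 # surviving feature indices
print("kept features:", kept)
print("ranking of first 10 features:", selector.ranking_[:10])
```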

Performance Improvement of Feature Selection Methods based on Bio-Inspired Algorithms (생태계 모방 알고리즘 기반 특징 선택 방법의 성능 개선 방안)

  • Yun, Chul-Min;Yang, Ji-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.15B no.4
    • /
    • pp.331-340
    • /
    • 2008
  • Feature selection is one of the methods used to improve the classification accuracy of data in the field of machine learning. Many feature selection algorithms have been proposed and discussed over the years; however, finding the optimal feature subset from the full data remains a difficult problem. Bio-inspired algorithms are well-known evolutionary algorithms based on principles of the behavior of organisms, and they are very useful for finding optimal solutions to optimization problems. Bio-inspired algorithms are also used for feature selection. In this paper, we propose new, improved bio-inspired algorithms for feature selection. We used the well-known bio-inspired algorithms Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) to find the optimal subset of features that yields the best classification accuracy. In addition, we modified the bio-inspired algorithms to take into account the prior importance (prior relevance) of each feature. We chose the mRMR method, which can measure the goodness of a single feature, to set the prior importance of each feature, and modified the evolution operators of GA and PSO using this prior importance. We verified the performance of the proposed methods through experiments on several datasets. Feature selection methods using GA and PSO produced better performance in terms of classification accuracy, and the modified methods with prior importance demonstrated improved performance in terms of both evolution speed and classification accuracy.
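A small sketch of how a per-feature prior importance score can bias a GA mutation operator is shown below: important features become more likely to be switched on and less likely to be switched off. The prior scores here are random placeholders, not actual mRMR values, and the biasing rule is an assumed illustration rather than the paper's exact operator.

```python
# Sketch: GA mutation of a feature mask biased by per-feature prior importance.
import numpy as np

rng = np.random.default_rng(0)
n_features = 20
prior = rng.random(n_features)                 # placeholder for mRMR-style relevance scores
prior = prior / prior.max()                    # normalize to [0, 1]

def mutate(mask, base_rate=0.1):
    """Flip bits with a rate skewed by prior importance."""
    p_on = base_rate * prior                   # chance of turning a feature ON
    p_off = base_rate * (1 - prior)            # chance of turning a feature OFF
    flip_prob = np.where(mask, p_off, p_on)
    flips = rng.random(n_features) < flip_prob
    return np.logical_xor(mask, flips)

mask = rng.random(n_features) > 0.5
print("before:", mask.astype(int))
print("after :", mutate(mask).astype(int))
```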

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Therefore, researchers have tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, k-nearest neighbor, etc. Recently, support vector machines (SVMs) have been popularly applied to this research area because they do not require huge training data and have a low possibility of overfitting. However, a user must determine several design factors by heuristics in order to use SVM; for example, the selection of an appropriate kernel function and its parameters and proper feature subset selection are major design factors of SVM. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of SVM by eliminating irrelevant and distorting training instances. Nonetheless, there have been few studies that have applied instance selection to SVM, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the parameters simultaneously. We call this model ISVM (SVM with Instance selection). Experiments on stock market data are implemented using ISVM. In this study, the GA searches for optimal or near-optimal values of the kernel parameters and relevant instances for the SVM. The chromosomes therefore encode two sets of parameters: the codes for the kernel parameters and for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; as the stopping condition, 50 generations are permitted. The application data used in this study consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI). The total number of samples is 2218 trading days. We separate the whole data into training, test, and hold-out subsets of 1056, 581, and 581 samples, respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1056 original training instances are used to produce the result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% statistical significance level.
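A minimal sketch of the ISVM idea follows: a GA chromosome carries the SVM kernel parameters plus a binary mask over training instances, and fitness is the validation accuracy of an SVM trained only on the selected instances. The dataset (load_breast_cancer), the encoding, and the GA settings are placeholders, not the study's chromosome design or parameters.

```python
# Sketch: GA over SVM kernel parameters and an instance-selection mask.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
n_tr = len(y_tr)

def fitness(chrom):
    C, gamma = 10 ** chrom[0], 10 ** chrom[1]
    keep = chrom[2:] > 0.5                     # instance mask over the training set
    if keep.sum() < 10 or len(np.unique(y_tr[keep])) < 2:
        return 0.0                             # degenerate selections get zero fitness
    clf = SVC(C=C, gamma=gamma).fit(X_tr[keep], y_tr[keep])
    return clf.score(X_va, y_va)

pop = np.column_stack([rng.uniform(-1, 3, 20), rng.uniform(-5, -1, 20),
                       rng.random((20, n_tr))])
for _ in range(10):                            # small GA loop, for illustration only
    scores = np.array([fitness(c) for c in pop])
    elite = pop[np.argsort(scores)[-10:]]
    kids = elite[rng.integers(0, 10, 10)] + rng.normal(0, 0.05, (10, pop.shape[1]))
    pop = np.vstack([elite, kids])

print("best validation accuracy:", round(max(fitness(c) for c in pop), 3))
```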

Feature Selection and Performance Analysis using Quantum-inspired Genetic Algorithm (양자 유전알고리즘을 이용한 특징 선택 및 성능 분석)

  • Heo, G.S.;Jeong, H.T.;Park, A.;Baek, S.J.
    • Smart Media Journal
    • /
    • v.1 no.1
    • /
    • pp.36-41
    • /
    • 2012
  • Feature selection is an important technique for selecting a subset of relevant features when building robust pattern recognition systems. Various methods, from sequential search algorithms to stochastic algorithms, have been studied for feature selection. In this work, we adopted a Quantum-inspired Genetic Algorithm (QGA) for feature selection, which is based on concepts and principles of quantum computing such as Q-bits and the superposition of states. The performance of QGA is compared to that of the Conventional Genetic Algorithm (CGA) with respect to classification rates and the number of selected features. Experimental results using UCI data sets show that QGA is superior to CGA.
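A brief sketch of the Q-bit machinery typically used in quantum-inspired GAs is given below: each feature corresponds to a Q-bit whose angle encodes the probability of being selected, "observation" samples a binary mask, and a rotation gate nudges the Q-bits toward the best mask found so far. The rotation step, the placeholder fitness, and all sizes are illustrative assumptions; a real wrapper would score masks with a classifier.

```python
# Sketch: Q-bit representation, observation, and rotation update for feature selection.
import numpy as np

rng = np.random.default_rng(0)
n_features = 12
theta = np.full(n_features, np.pi / 4)        # equal superposition: P(select) = 0.5

def observe(theta):
    # P(bit = 1) = |beta|^2 = sin^2(theta)
    return rng.random(n_features) < np.sin(theta) ** 2

def rotate(theta, best_mask, step=0.05 * np.pi):
    # rotate each Q-bit toward the corresponding bit of the best solution so far
    return theta + np.where(best_mask, step, -step)

best_mask = observe(theta)
for _ in range(20):
    mask = observe(theta)
    # placeholder fitness: prefer smaller masks (stand-in for a classifier-based wrapper score)
    if mask.sum() < best_mask.sum():
        best_mask = mask
    theta = np.clip(rotate(theta, best_mask), 0.01, np.pi / 2 - 0.01)

print("final selection probabilities:", np.round(np.sin(theta) ** 2, 2))
print("best mask:", best_mask.astype(int))
```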
