• Title/Summary/Keyword: Feature Classification

Feature Selection for Classification of Mass Spectrometric Proteomic Data Using Random Forest (단백체 스펙트럼 데이터의 분류를 위한 랜덤 포리스트 기반 특성 선택 알고리즘)

  • Ohn, Syng-Yup; Chi, Seung-Do; Han, Mi-Young
    • Journal of the Korea Society for Simulation, v.22 no.4, pp.139-147, 2013
  • This paper proposes a novel feature selection method for mass spectrometric proteomic data based on Random Forest. The method includes an effective preprocessing step that filters out a large number of redundant, highly correlated features, and then applies a tournament strategy to obtain an optimal feature subset. Experiments on three public datasets, Ovarian 4-3-02, Ovarian 7-8-02, and Prostate, show that the new method achieves high performance compared with widely used methods, together with a balanced rate of specificity and sensitivity.
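
The abstract gives only the outline of the pipeline. A minimal sketch of that idea, correlation-based prefiltering followed by a Random-Forest tournament over candidate subsets, is shown below; it assumes scikit-learn, a generic feature matrix X and label vector y, and illustrative thresholds and subset sizes rather than the authors' settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def correlation_prefilter(X, threshold=0.95):
    """Drop the later feature of every pair whose absolute correlation exceeds threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = np.ones(X.shape[1], dtype=bool)
    for i in range(X.shape[1]):
        if not keep[i]:
            continue
        keep[np.where(corr[i, i + 1:] > threshold)[0] + i + 1] = False
    return np.where(keep)[0]

def tournament_selection(X, y, candidates, subset_size=20, rounds=10, seed=0):
    """Pit random feature subsets against each other; keep the subset with the best OOB score."""
    rng = np.random.default_rng(seed)
    best_idx, best_score = None, -np.inf
    for _ in range(rounds):
        idx = rng.choice(candidates, size=min(subset_size, len(candidates)), replace=False)
        rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=seed)
        rf.fit(X[:, idx], y)
        if rf.oob_score_ > best_score:
            best_idx, best_score = idx, rf.oob_score_
    return best_idx, best_score

# usage sketch:
#   candidates = correlation_prefilter(X)
#   subset, oob = tournament_selection(X, y, candidates)
```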

Feature Selection by Genetic Algorithm and Information Theory (유전자 알고리즘과 정보이론을 이용한 속성선택)

  • Cho, Jae-Hoon; Lee, Dae-Jong; Song, Chang-Kyu; Kim, Yong-Sam; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.1, pp.94-99, 2008
  • In pattern classification problems, feature selection is an important technique for improving classifier performance. In particular, when classifying with a large number of features or variables, accuracy can be improved by using a relevant feature subset that removes irrelevant, redundant, or noisy data. In this paper we propose a feature selection method using a genetic algorithm and information theory. Experimental results show that this method achieves better performance on pattern recognition problems than conventional ones.
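
A toy illustration of coupling a genetic algorithm with an information-theoretic fitness is sketched below; the bit-string encoding, the mutual-information fitness with a small size penalty, and all GA parameters are assumptions for illustration, not the paper's design.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fitness(mask, X, y):
    """Average mutual information of the selected features with the class label,
    lightly penalized by subset size (illustrative fitness, not the paper's)."""
    if mask.sum() == 0:
        return 0.0
    mi = mutual_info_classif(X[:, mask], y, random_state=0)
    return mi.mean() - 0.001 * mask.sum()

def ga_feature_selection(X, y, pop_size=20, generations=30, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5             # random bit-string population
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut            # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[np.argmax(scores)]                     # boolean mask of selected features
```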

Feature Selection Based on Class Separation in Handwritten Numeral Recognition Using Neural Network (신경망을 이용한 필기 숫자 인식에서 부류 분별에 기반한 특징 선택)

  • Lee, Jin-Seon
    • The Transactions of the Korea Information Processing Society, v.6 no.2, pp.543-551, 1999
  • The primary purposes of this paper are to analyze the class separation of features in handwritten numeral recognition and to use the results for feature selection. Using the Parzen window technique, we compute the class distributions and define the class separation as the overlap of two class distributions. The dimension of a feature vector is reduced by removing void or redundant feature cells based on the class separation information. Experiments were performed on the CENPARMI handwritten numeral database, with both partial and full classification tested. The results show that class separation is very effective for feature selection in the 10-class handwritten numeral recognition problem, since the dimension of the original 256-dimensional feature vector could be reduced by 22%.
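
A small sketch of the underlying idea, estimating per-feature class densities with a Gaussian Parzen window and discarding features whose class distributions overlap too much, follows; the SciPy kernel-density estimator, the overlap threshold, and the handling of degenerate (void) feature cells are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def class_overlap(feature, labels, class_a, class_b, grid_size=200):
    """Overlap of the Parzen (Gaussian-kernel) densities of one feature for two classes:
    0 means perfectly separable, 1 means identical distributions."""
    xa, xb = feature[labels == class_a], feature[labels == class_b]
    if xa.std() == 0 or xb.std() == 0:
        return 1.0                       # degenerate (void) feature cell: treat as fully overlapping
    grid = np.linspace(feature.min(), feature.max(), grid_size)
    pa, pb = gaussian_kde(xa)(grid), gaussian_kde(xb)(grid)
    return float(np.minimum(pa, pb).sum() * (grid[1] - grid[0]))

def select_features(X, y, max_overlap=0.8):
    """Keep features whose worst pairwise class overlap stays below max_overlap."""
    classes = np.unique(y)
    keep = []
    for j in range(X.shape[1]):
        worst = max(class_overlap(X[:, j], y, a, b)
                    for i, a in enumerate(classes) for b in classes[i + 1:])
        if worst < max_overlap:
            keep.append(j)
    return keep
```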

Texture Classification by a Fusion of Weighted Feature (가중치 특징 벡터를 이용한 질감 영상 인식 방법)

  • 정수연; 곽동민; 윤옥경; 박길흠
    • Proceedings of the IEEK Conference, 2001.09a, pp.407-410, 2001
  • Research using texture features for image retrieval and classification has recently been active. In this paper, for efficient texture feature extraction, we extract texture features using the gray level co-occurrence matrix and the wavelet transform, and then propose a method that assigns weights to the features according to their importance. Based on the weighted representative vectors extracted in this way, arbitrary textures are recognized with a Bayesian classifier.
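
A hedged sketch of that kind of pipeline, GLCM statistics plus wavelet sub-band energies, a feature weighting, and a Gaussian (naive) Bayes classifier, is given below; it assumes scikit-image (>= 0.19), PyWavelets, scikit-learn, 8-bit grayscale inputs, and an inverse-variance weighting chosen purely for illustration rather than the authors' importance weights.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB

def texture_features(img):
    """Concatenate GLCM statistics and one-level wavelet sub-band energies (img: 8-bit grayscale)."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "db1")
    wavelet_feats = [float(np.mean(np.abs(band))) for band in (cA, cH, cV, cD)]
    return np.array(glcm_feats + wavelet_feats)

def fit_weighted_bayes(train_imgs, train_labels):
    """Weight each feature (here by inverse variance, an illustrative choice) and
    train a Gaussian naive Bayes classifier on the weighted vectors."""
    X = np.array([texture_features(im) for im in train_imgs])
    weights = 1.0 / (X.var(axis=0) + 1e-8)
    clf = GaussianNB().fit(X * weights, train_labels)
    return clf, weights

# predict a new texture: clf.predict((texture_features(img) * weights)[None, :])
```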

Classification of High Impedance Fault Patterns by Recognition of Linear Prediction coefficients (선형 예측 계수의 인식에 의한 고저항 지락사고 유형의 분류)

  • Lee, Ho-Seob; Kong, Seong-Gon
    • Proceedings of the KIEE Conference, 1996.07b, pp.1353-1355, 1996
  • This paper presents the classification of high impedance fault patterns using linear prediction coefficients. Features of the neutral-phase current are extracted by linear predictive coding and classified into fault types by a multilayer perceptron neural network. The neural network successfully classifies the test data into three fault types and one normal state.
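
A compact sketch of this kind of feature extraction, autocorrelation-method LPC coefficients solved from the Toeplitz normal equations and fed to a small MLP, appears below; the LPC order, window handling, and network size are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.neural_network import MLPClassifier

def lpc_coefficients(signal, order=10):
    """Autocorrelation-method LPC: solve the Toeplitz normal equations R a = r."""
    x = signal - signal.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]   # lags 0..order
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return a   # 'order' prediction coefficients used as the feature vector

def train_fault_classifier(currents, labels, order=10):
    """currents: list of neutral-phase current windows; labels: fault/normal class per window."""
    X = np.array([lpc_coefficients(c, order) for c in currents])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    return clf.fit(X, labels)
```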

Compiler Analysis Framework Using SVM-Based Genetic Algorithm : Feature and Model Selection Sensitivity (SVM 기반 유전 알고리즘을 이용한 컴파일러 분석 프레임워크 : 특징 및 모델 선택 민감성)

  • Hwang, Cheol-Hun; Shin, Gun-Yoon; Kim, Dong-Wook; Han, Myung-Mook
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.4, pp.537-544, 2020
  • Evasion techniques such as mutation and obfuscation are advancing along with malware technology. Within malware detection, detecting unknown malware is important, and Malware Authorship Attribution, which detects unknown malicious code by identifying its author from distributed malware, is being studied. In this paper, we extract the compiler information that affects binary-based author identification and investigate how feature selection, probabilistic and non-probabilistic models, and optimization affect classification efficiency across studies. In the experiments, feature selection based on information gain together with the support vector machine, a non-probabilistic model, showed high efficiency. Among the optimization studies, high classification accuracy was obtained through feature selection and model optimization with the proposed framework, yielding a 48% reduction in features and a 53% faster execution speed. Through this study, we confirm the sensitivity of classification efficiency to feature selection, model choice, and optimization methods.
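
The framework itself is not spelled out in the abstract, but its best-performing combination, information-gain (mutual-information) feature ranking followed by an SVM, can be approximated with a minimal scikit-learn pipeline such as the one below; the subset sizes, kernel, and cross-validation setup are placeholders.

```python
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_subset_sizes(X, y, sizes=(50, 100, 200)):
    """Score an information-gain (mutual information) + SVM pipeline for several
    feature-subset sizes and report the accuracy vs. feature-count trade-off."""
    results = {}
    for k in sizes:
        pipe = make_pipeline(
            SelectKBest(mutual_info_classif, k=min(k, X.shape[1])),
            StandardScaler(),
            SVC(kernel="rbf", C=1.0),
        )
        results[k] = cross_val_score(pipe, X, y, cv=5).mean()
    return results
```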

A Weighted Fuzzy Min-Max Neural Network for Pattern Classification (패턴 분류 문제에서 가중치를 고려한 퍼지 최대-최소 신경망)

  • Kim, Ho-Joon; Park, Hyun-Jung
    • Journal of KIISE: Software and Applications, v.33 no.8, pp.692-702, 2006
  • In this study, a weighted fuzzy min-max (WFMM) neural network model for pattern classification is proposed. The model has a modified FMM neural network structure in which a weight concept is added to represent the frequency of feature values in the training data set. We first present a new activation function for the network, defined as a hyperbox membership function. We then introduce a new learning algorithm consisting of three processes: hyperbox creation/expansion, hyperbox overlap test, and hyperbox contraction. A weight adaptation rule considering the frequency factors is defined for the learning process. Finally, we describe a feature analysis technique using the proposed model, in which four kinds of relevance factors among feature values, feature types, hyperboxes, and pattern classes are defined to analyze the relative importance of each feature in a given problem. Two practical applications, Fisher's Iris data and the Cleveland medical data, were used for the experiments, and the results demonstrate the effectiveness of the proposed method.
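
FMM-family models are built around a hyperbox membership function and an expand / overlap-test / contract learning cycle. The sketch below shows only a simplified membership function and a frequency-weighted expansion step, with the overlap test and contraction omitted; it illustrates the hyperbox idea under those assumptions, not the WFMM rules from the paper.

```python
import numpy as np

class HyperboxSketch:
    """Minimal hyperbox store in the spirit of fuzzy min-max learning (illustrative only):
    each box keeps min/max corners, a class label, and a frequency weight."""

    def __init__(self, n_dims, theta=0.3, gamma=4.0):
        self.V = np.empty((0, n_dims))   # min corners
        self.W = np.empty((0, n_dims))   # max corners
        self.labels = []                 # class label per box
        self.weights = []                # frequency factor per box
        self.theta, self.gamma = theta, gamma

    def membership(self, x):
        """Fuzzy membership of pattern x in every box: 1 inside, decaying with distance outside."""
        below = np.maximum(0.0, self.V - x)       # how far x falls below each min corner
        above = np.maximum(0.0, x - self.W)       # how far x rises above each max corner
        return np.clip(1.0 - self.gamma * (below + above), 0.0, 1.0).min(axis=1)

    def learn(self, x, label):
        """Expand the best-matching same-class box if the size bound theta allows, else add a box.
        (The overlap test and contraction steps of FMM learning are omitted in this sketch.)"""
        if self.labels:
            for i in np.argsort(-self.membership(x)):
                if self.labels[i] != label:
                    continue
                new_v, new_w = np.minimum(self.V[i], x), np.maximum(self.W[i], x)
                if np.all(new_w - new_v <= self.theta):
                    self.V[i], self.W[i] = new_v, new_w
                    self.weights[i] += 1          # frequency-based weight update
                    return
        self.V, self.W = np.vstack([self.V, x]), np.vstack([self.W, x])
        self.labels.append(label)
        self.weights.append(1)

    def predict(self, x):
        """Class of the box with the highest frequency-weighted membership (one simple weighting)."""
        m = self.membership(x) * (np.array(self.weights) / max(self.weights))
        return self.labels[int(np.argmax(m))]
```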

A Study on Reducing Learning Time of Deep-Learning using Network Separation (망 분리를 이용한 딥러닝 학습시간 단축에 대한 연구)

  • Lee, Hee-Yeol; Lee, Seung-Ho
    • Journal of IKEEE, v.25 no.2, pp.273-279, 2021
  • In this paper, we propose an algorithm that shortens learning time by performing individual learning on partitions of the deep learning structure. The proposed algorithm consists of four processes: setting the network partition point, feature vector extraction, feature noise removal, and class classification. First, in setting the network partition point, the split point of the network structure is chosen for effective feature vector extraction. Second, in feature vector extraction, feature vectors are extracted without additional learning, using previously learned weights. Third, in feature noise removal, the extracted feature vectors are used to learn the output value of each class, removing noise from the data. Fourth, in class classification, the noise-removed feature vectors are fed into a multilayer perceptron, whose output is trained. To evaluate the performance of the proposed algorithm, we experimented with the Extended Yale B face database. The proposed algorithm reduced the time required for a single training pass by 40.7% compared with the existing algorithm, and the number of training iterations needed to reach the target recognition rate was also reduced. The experimental results confirm that both the single-pass training time and the total training time improve over the existing algorithm.
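
A PyTorch-flavored sketch of the general separation idea, splitting a network at a chosen layer, reusing the frozen weights as a feature extractor, and training only a small classifier head on the extracted vectors, follows; the model, split point, head size, and the reduction of the paper's noise-removal stage to a comment are all assumptions.

```python
import torch
import torch.nn as nn

def split_and_train_head(pretrained, split_at, train_loader, n_classes, epochs=5):
    """Freeze the layers before `split_at`, extract feature vectors per batch without
    additional learning, and train only a small MLP head on them."""
    backbone = nn.Sequential(*list(pretrained.children())[:split_at]).eval()
    for p in backbone.parameters():
        p.requires_grad = False            # reuse previously learned weights

    feat_dim = backbone(next(iter(train_loader))[0]).flatten(1).shape[1]
    head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                feats = backbone(x).flatten(1)   # (a per-class denoising stage would go here)
            loss = loss_fn(head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return backbone, head
```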

Cepstral Distance and Log-Energy Based Silence Feature Normalization for Robust Speech Recognition (강인한 음성인식을 위한 켑스트럼 거리와 로그 에너지 기반 묵음 특징 정규화)

  • Shen, Guang-Hu; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea, v.29 no.4, pp.278-285, 2010
  • The mismatch between training and test environments is one of the major causes of performance degradation in noisy speech recognition, and many silence feature normalization methods have been proposed to resolve this inconsistency. Conventional silence feature normalization achieves high classification performance at high SNR, but its performance degrades at low SNR because of the low accuracy of speech/silence classification. On the other hand, cepstral distance represents the distribution of speech and silence (or noise) well at low SNR. In this paper, we propose a Cepstral distance and Log-energy based Silence Feature Normalization (CLSFN) method that uses both log energy and cepstral Euclidean distance to classify speech/silence for better performance. Because the proposed method combines the merit of log energy, which is less affected by noise at high SNR, with the merit of cepstral distance, which discriminates speech from silence accurately at low SNR, the classification accuracy is expected to improve. Experimental results show that the proposed CLSFN yields improved recognition performance compared with the conventional SFN-I/II and CSFN methods in all noisy environments.
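
A NumPy sketch of the kind of frame-level decision the abstract describes, combining a log-energy test with the Euclidean distance of each frame's cepstrum from an initial-silence template and then flattening the features of silence frames, is given below; the frame size, thresholds, the real-cepstrum features, and the specific normalization applied to silence frames are illustrative assumptions, not the CLSFN definitions.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Split a waveform (assumed longer than one frame) into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def real_cepstrum(frames, n_coeffs=13):
    """Low-order real cepstra of Hamming-windowed frames."""
    spec = np.abs(np.fft.rfft(frames * np.hamming(frames.shape[1]), axis=1))
    return np.fft.irfft(np.log(spec + 1e-10), axis=1)[:, :n_coeffs]

def silence_feature_normalization(x, energy_margin=3.0, dist_margin=1.5, n_init=10):
    """Label a frame as silence when BOTH its log energy is low and its cepstral
    Euclidean distance to the initial-silence template is small, then replace
    silence-frame cepstra by their mean (one simple normalization choice)."""
    frames = frame_signal(x)
    log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    cep = real_cepstrum(frames)
    template = cep[:n_init].mean(axis=0)              # assume the first frames are silence
    dist = np.linalg.norm(cep - template, axis=1)
    silence = (log_e < log_e[:n_init].mean() + energy_margin) & (
        dist < dist[:n_init].mean() * dist_margin + 1e-6)
    if silence.any():
        cep[silence] = cep[silence].mean(axis=0)      # flatten silence-frame features
    return cep, silence
```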