• Title/Abstract/Keyword: Feature Classification

Search results: 2,155 items (processing time: 0.033 s)

Modified ECCD 및 문서별 범주 가중치를 이용한 문서 분류 시스템 (A Document Classification System Using Modified ECCD and Category Weight for each Document)

  • 한정석;박상용;이수원
    • 정보처리학회논문지B / Vol.19B No.4 / pp.237-242 / 2012
  • Web document information services require a document classification system for administrators to manage documents efficiently and for users to search them conveniently. Existing document classification systems suffer degraded accuracy when the number of feature terms selected from a document to be classified is small, or when one category accounts for such a high proportion of the documents that most feature terms are selected from that category when the model is built. To address these problems, this paper proposes a document classification system using a 'Modified ECCD' technique and a 'per-document category weight' feature variable. Experimental results show that the proposed 'Modified ECCD' technique outperforms the ${\chi}^2$ and ECCD techniques, and that classification performance improves further when the 'per-document category weight' feature variable is added to the feature-term variables selected by 'Modified ECCD'.
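The abstract does not specify ECCD or its modification, so as a point of reference, here is a minimal sketch of the ${\chi}^2$ (chi-square) feature-scoring baseline the paper compares against, computed over a term/category contingency table:

```python
# Chi-square score of a feature term for one category, from the four
# cells of the term/category contingency table:
#   a = docs in the category containing the term
#   b = docs outside the category containing the term
#   c = docs in the category without the term
#   d = docs outside the category without the term
def chi_square(a, b, c, d):
    n = a + b + c + d
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c) ** 2 / denom

# A term concentrated in one category scores high; a term spread
# evenly across categories scores zero.
discriminative = chi_square(8, 2, 2, 88)
uniform = chi_square(5, 45, 5, 45)
```

Terms are then ranked by this score and the top-N kept as features.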

자질 선정 기준과 가중치 할당 방식간의 관계를 고려한 문서 자동분류의 개선에 대한 연구 (An Empirical Study on Improving the Performance of Text Categorization Considering the Relationships between Feature Selection Criteria and Weighting Methods)

  • 이재윤
    • 한국문헌정보학회지 / Vol.39 No.2 / pp.123-146 / 2005
  • This study explores how to improve the performance of a kNN classifier by adopting a consistent strategy for feature selection and term weighting in automatic text categorization. Along with the classification algorithm itself, the feature selection method and the term weighting method are key factors determining classification performance. Previous studies have used conflicting strategies when deciding these two. In this study, feature selection results were evaluated by classification performance relative to index-file storage space and execution time, yielding results that differ from earlier work. Criteria that favor low-frequency features, such as mutual information, or even inverse document frequency alone, proved desirable for feature selection in terms of both the effectiveness and the efficiency of the kNN classifier. Using a low-frequency-preferring measure consistently for both feature selection and term weighting sped up the kNN classifier by roughly three to five times without any loss of classification performance.
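The "consistent strategy" described above can be sketched as follows, assuming plain inverse document frequency as the low-frequency-preferring measure; the function names and data shapes are illustrative, not from the paper:

```python
import math

# Use one low-frequency-preferring score (idf) for BOTH steps:
# ranking terms for selection, and weighting the selected terms.
def idf(df, n_docs):
    return math.log(n_docs / df)

def select_and_weight(doc_freqs, n_docs, top_n):
    """doc_freqs: {term: document frequency}. Returns {term: idf weight}
    for the top_n terms ranked by the same idf score used as the weight."""
    ranked = sorted(doc_freqs, key=lambda t: idf(doc_freqs[t], n_docs),
                    reverse=True)
    return {t: idf(doc_freqs[t], n_docs) for t in ranked[:top_n]}

# Rare terms are both selected first and weighted most heavily.
weights = select_and_weight({"rare": 2, "mid": 20, "common": 90}, 100, 2)
```

Because selection and weighting share one score, the index only needs the surviving low-frequency terms, which is where the reported 3-5x speedup comes from.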

Classification of TV Program Scenes Based on Audio Information

  • Lee, Kang-Kyu;Yoon, Won-Jung;Park, Kyu-Sik
    • The Journal of the Acoustical Society of Korea / Vol.23 No.3E / pp.91-97 / 2004
  • In this paper, we propose a classification system for TV program scenes based on audio information. The system classifies a video scene into six categories: commercials, basketball games, football games, news reports, weather forecasts, and music videos. Two types of audio feature sets, timbral features and coefficient-domain features, are extracted from each audio frame, resulting in a 58-dimensional feature vector. To reduce the computational complexity of the system, the 58-dimensional feature set is further reduced to 10 dimensions through the Sequential Forward Selection (SFS) method. This down-sized feature set is then used to train and classify the given TV program scenes using the k-NN, Gaussian pattern matching algorithm. The classification accuracy of 91.6% reported here shows the promising performance of video scene classification based on audio information. Finally, the system's stability with respect to different query lengths is investigated.
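Sequential Forward Selection as used above can be sketched generically: grow the feature subset greedily, at each step adding the feature that most improves a caller-supplied evaluation function. The toy scoring function below stands in for the classifier evaluation the paper actually uses:

```python
# Generic Sequential Forward Selection (SFS): start empty, repeatedly
# add the single feature that maximizes the subset's evaluation score.
def sfs(n_features, evaluate, target_size):
    selected = []
    remaining = list(range(n_features))
    while len(selected) < target_size and remaining:
        best = max(remaining, key=lambda f: evaluate(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy evaluation: features 0, 3 and 5 each contribute; the rest are noise.
useful = {0: 3.0, 3: 2.0, 5: 1.0}
score = lambda subset: sum(useful.get(f, 0.0) for f in subset)
picked = sfs(8, score, 3)
```

In the paper's setting, `evaluate` would be classification accuracy on the 58 audio features, stopped at a 10-feature subset.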

Music Genre Classification Based on Timbral Texture and Rhythmic Content Features

  • Baniya, Babu Kaji;Ghimire, Deepak;Lee, Joonwhon
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2013 Spring Conference / pp.204-207 / 2013
  • Music genre classification is an essential component of music information retrieval systems. Two components are important for better genre classification: audio feature extraction and the classifier. This paper incorporates two different kinds of features for genre classification, timbral texture and rhythmic content features. Timbral texture contains several spectral and Mel-frequency Cepstral Coefficient (MFCC) features. Before choosing a timbral feature, we explore which features play a less significant role in genre discrimination, which facilitates reducing the feature dimension. For the timbral features, central moments up to the 4th order and the covariance components of mutual features are considered to improve the overall classification result. For the rhythmic content, features extracted from the beat histogram are selected. Extreme Learning Machine (ELM) with bagging is used as the classifier. Based on the proposed feature sets and classifier, experiments are performed on the well-known GTZAN dataset with ten different music genres. The proposed method achieves better classification accuracy than existing approaches.
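The central-moment summarization mentioned above can be illustrated with a minimal sketch: a sequence of per-frame feature values (e.g., one MFCC coefficient over time) is collapsed into its central moments up to order 4. This is a generic illustration, not the paper's exact pipeline:

```python
# Summarize a per-frame feature track by its mean and central moments
# of orders 2..4 (variance, skewness-related, kurtosis-related terms).
def central_moments(values, max_order=4):
    n = len(values)
    mean = sum(values) / n
    moments = [mean]
    for k in range(2, max_order + 1):
        moments.append(sum((v - mean) ** k for v in values) / n)
    return moments

# A symmetric track has a zero third central moment.
m = central_moments([1.0, 2.0, 3.0, 4.0])
```

Doing this per coefficient (plus the covariance terms the abstract mentions) turns a variable-length frame sequence into a fixed-length vector for the ELM classifier.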

다단계 특징벡터 기반의 분류기 모델 (Multistage Feature-based Classification Model)

  • 송영수;박동철
    • 전자공학회논문지CI / Vol.46 No.1 / pp.121-127 / 2009
  • This paper proposes a Multistage Feature-based Classification Model (MFCM). Instead of using all feature vectors extracted from the given data at once, MFCM groups feature vectors of the same nature and uses the groups for classification over several stages. In the training stage, the contribution of each feature-vector group is measured as the classification accuracy of a local classifier trained on that group. In the classification stage, weights proportional to each group's contribution are applied to draw the final classification decision. The MFCM concept is applied to several existing classification algorithms and tested on a music genre classification problem. Experimental results show that classifiers using the proposed MFCM improve classification accuracy by 7%-13% on average over the conventional algorithms.
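The weighted decision step described above might look like the following sketch, assuming the contribution weights are the local classifiers' measured training-stage accuracies (the exact weighting scheme is not given in the abstract):

```python
# MFCM-style decision: each feature-group classifier votes for a label,
# weighted by that group's measured contribution (e.g., its accuracy).
def mfcm_predict(group_predictions, group_accuracies, classes):
    """group_predictions: per-group predicted labels;
    group_accuracies: matching contribution weights."""
    scores = {c: 0.0 for c in classes}
    for label, acc in zip(group_predictions, group_accuracies):
        scores[label] += acc
    return max(classes, key=lambda c: scores[c])

# One accurate group plus one ally outvote a weaker dissenting group.
label = mfcm_predict(["rock", "jazz", "rock"], [0.9, 0.6, 0.5],
                     ["rock", "jazz"])
```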

다변량 데이터의 분류 성능 향상을 위한 특질 추출 및 분류 기법을 통합한 신경망 알고리즘 (Feature Selecting and Classifying Integrated Neural Network Algorithm for Multi-variate Classification)

  • 윤현수;백준걸
    • 산업공학 / Vol.24 No.2 / pp.97-104 / 2011
  • Multi-variate classification has been studied through two kinds of procedures: feature selection and classification. Feature selection techniques have been applied to select important features, while classifiers have been applied to improve classification performance. In general, each technique has been studied independently; however, the interaction between the two procedures has not been widely explored, which can degrade performance. In this paper, classification performance is improved by integrating the two procedures. The proposed model takes advantage of KBANN (Knowledge-Based Artificial Neural Network), which uses prior knowledge as training information to learn a neural network (NN). Each NN learns the characteristics of the feature selection and classification techniques from training sets. The integrated NN can then be retrained to modify features appropriately and enhance classification performance. This technique is called ALBNN (Algorithm Learning-Based Neural Network). Experimental results show improved performance on various classification problems.

New Feature Selection Method for Text Categorization

  • Wang, Xingfeng;Kim, Hee-Cheol
    • Journal of information and communication convergence engineering / Vol.15 No.1 / pp.53-61 / 2017
  • The preferred feature selection methods for text classification are filter-based. In a common filter-based feature selection scheme, unique scores are assigned to features; then, these features are sorted according to their scores. The last step is to add the top-N features to the feature set. In this paper, we propose an improved global feature selection scheme wherein its last step is modified to obtain a more representative feature set. The proposed method aims to improve the classification performance of global feature selection methods by creating a feature set representing all classes almost equally. For this purpose, a local feature selection method is used in the proposed method to label features according to their discriminative power on classes; these labels are used while producing the feature sets. Experimental results obtained using the well-known 20 Newsgroups and Reuters-21578 datasets with the k-nearest neighbor algorithm and a support vector machine indicate that the proposed method improves the classification performance in terms of a widely known metric ($F_1$).
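One way to read the "represent all classes almost equally" step above is a round-robin draw over per-class feature rankings produced by a local selection method. The following is a hypothetical sketch of that idea, not the paper's exact procedure:

```python
# Balanced selection: instead of taking the global top-N outright, draw
# features class by class from per-class score rankings until N are
# chosen, so each class contributes roughly equally to the feature set.
def balanced_select(class_scores, top_n):
    """class_scores: {class: [(feature, score), ...]}."""
    ranked = {c: sorted(fs, key=lambda p: p[1], reverse=True)
              for c, fs in class_scores.items()}
    chosen, seen = [], set()
    while len(chosen) < top_n:
        progressed = False
        for c in ranked:
            # Skip features already taken on behalf of another class.
            while ranked[c] and ranked[c][0][0] in seen:
                ranked[c].pop(0)
            if ranked[c] and len(chosen) < top_n:
                feature = ranked[c].pop(0)[0]
                chosen.append(feature)
                seen.add(feature)
                progressed = True
        if not progressed:
            break
    return chosen

feats = balanced_select(
    {"sport": [("goal", 0.9), ("ball", 0.8)],
     "tech": [("cpu", 0.95), ("goal", 0.7)]},
    3)
```

A purely global top-N here could be dominated by one class; the round-robin guarantees "cpu" survives even if the "tech" scores are globally weaker.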

Evaluating the Contribution of Spectral Features to Image Classification Using Class Separability

  • Ye, Chul-Soo
    • 대한원격탐사학회지 / Vol.36 No.1 / pp.55-65 / 2020
  • Image classification requires comparing the spectral features of each pixel with the representative spectral features of each class. The spectral similarity is obtained by computing the feature vector distance between the pixel and the class. Each spectral feature contributes differently to image classification depending on its class separability, which is computed using a suitable vector distance measure such as the Bhattacharyya distance. We propose a method to determine the weight of each spectral feature in the feature vector distance used for the similarity measurement. The weight is the ratio of each feature's separability value to the total separability of all the spectral features. We created ten spectral features consisting of the seven bands of a Landsat-8 OLI image and three indices, NDVI, NDWI, and NDBI. For three experimental test sites, we obtained overall accuracies between 95.0% and 97.5% and kappa coefficients between 90.43% and 94.47%.
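The proposed weight rule (each feature's separability divided by the total) can be sketched with the one-dimensional Gaussian form of the Bhattacharyya distance the abstract mentions; the two-class, per-feature statistics below are made-up illustration values:

```python
import math

# 1-D Bhattacharyya distance between two Gaussian class distributions
# with means m1, m2 and variances v1, v2.
def bhattacharyya_1d(m1, v1, m2, v2):
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

def separability_weights(class_stats):
    """class_stats: per-feature ((m1, v1), (m2, v2)) pairs.
    Weight = this feature's separability / total separability."""
    seps = [bhattacharyya_1d(m1, v1, m2, v2)
            for (m1, v1), (m2, v2) in class_stats]
    total = sum(seps)
    return [s / total for s in seps]

# The feature whose class means are far apart receives most of the weight.
w = separability_weights([((0.0, 1.0), (4.0, 1.0)),
                          ((0.0, 1.0), (0.5, 1.0))])
```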

Conditional Mutual Information-Based Feature Selection Analyzing for Synergy and Redundancy

  • Cheng, Hongrong;Qin, Zhiguang;Feng, Chaosheng;Wang, Yong;Li, Fagen
    • ETRI Journal / Vol.33 No.2 / pp.210-218 / 2011
  • Battiti's mutual information feature selector (MIFS) and its variant algorithms are used in many classification applications. Since they ignore feature synergy, MIFS and its variants may introduce a large bias when features are combined to cooperate. Moreover, MIFS and its variants estimate feature redundancy without regard to the corresponding classification task. In this paper, we propose an automated greedy feature selection algorithm called conditional mutual information-based feature selection (CMIFS). Based on the link between interaction information and conditional mutual information, CMIFS accounts for both redundancy and synergy interactions among features and identifies discriminative features. In addition, CMIFS combines feature redundancy evaluation with the classification task, decreasing the probability of mistaking important features for redundant ones during the search. Experimental results show that CMIFS achieves higher best-classification accuracy than MIFS and its variants, with the same number of features or fewer (by nearly 50%).
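The quantity CMIFS builds on, conditional mutual information I(X; Y | Z), scores a candidate feature X against the class Y given an already-selected feature Z. Below is a generic plug-in estimate from discrete samples, not the paper's algorithm:

```python
import math
from collections import Counter

# Plug-in estimate of I(X; Y | Z) from parallel discrete samples:
# sum over observed (x, y, z) of p(x,y,z) * log2( p(x,y,z) p(z)
#                                               / (p(x,z) p(y,z)) ).
def conditional_mutual_information(xs, ys, zs):
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    cmi = 0.0
    for (x, y, z), c in pxyz.items():
        cmi += (c / n) * math.log2(
            (c / n) * (pz[z] / n) / ((pxz[(x, z)] / n) * (pyz[(y, z)] / n)))
    return cmi

# X duplicates Z exactly, so it adds nothing about Y once Z is known:
# a feature CMIFS would flag as redundant.
xs = [0, 0, 1, 1]; zs = [0, 0, 1, 1]; ys = [0, 1, 0, 1]
redundant = conditional_mutual_information(xs, ys, zs)
```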

A Step towards the Improvement in the Performance of Text Classification

  • Hussain, Shahid;Mufti, Muhammad Rafiq;Sohail, Muhammad Khalid;Afzal, Humaira;Ahmad, Ghufran;Khan, Arif Ali
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.13 No.4 / pp.2162-2179 / 2019
  • The performance of text classification is highly dependent on the feature selection method. Usually, two tasks are performed when a feature selection method is applied to construct a feature set: 1) assigning a score to each feature and 2) selecting the top-N features. In existing filter-based feature selection methods, the selection of the top-N features is biased by their discriminative power and by the empirical process followed to determine the value of N. To improve text classification performance by presenting a more representative feature set, we present an approach based on a potent representation learning technique, the Deep Belief Network (DBN). The algorithm learns from the semantic representation of documents and uses feature vectors in their formulation. The number of nodes, iterations, and hidden layers are the main parameters of the DBN, which can be tuned to improve the classifier's performance. Experimental results indicate the effectiveness of the proposed method in increasing classification performance and aiding developers in making effective decisions in certain domains.