• Title/Summary/Keyword: 분류화 (Classification)


Automatic Text Categorization by using Normalized Term Frequency Weighting (정규화 용어빈도가중치에 의한 자동문서분류)

  • 김수진;김민수;백장선;박혁로
    • Proceedings of the Korean Information Science Society Conference / 2003.04c / pp.510-512 / 2003
  • In this paper, we define a normalized term-frequency weight based on the Box-Cox transformation as a term-frequency weighting method for automatic text categorization, and apply it to document classification. The Box-Cox transformation is a statistical transformation used to bring data toward a normal distribution; we adapt it to propose a new term-frequency weighting formula. Terms that occur either too frequently or too rarely in a document lose importance, which suggests that term importance can be modeled as a normal distribution over frequency. Moreover, unlike existing term-frequency weighting formulas, the normalized weighting computes a different transformation for each term, making it more general than fixed schemes such as the logarithm or square root. Experiments on 8,000 newspaper articles divided into four groups show that the normalized term-frequency weighting achieves superior classification accuracy in all cases, confirming the validity of the proposed method.
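The Box-Cox transformation at the core of the proposed weighting can be sketched as follows; the function names and the per-term parameter `lam` are illustrative assumptions, not the paper's implementation (the paper estimates the transformation per term).

```python
import math

def boxcox(x, lam):
    # Box-Cox transform: (x^lam - 1) / lam for lam != 0, ln(x) for lam == 0.
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def normalized_tf_weight(tf, lam):
    # Hypothetical normalized term-frequency weight: apply the Box-Cox
    # transform to the raw term frequency with a term-specific lam.
    return boxcox(tf, lam)
```

With `lam = 0` this reduces to the familiar logarithmic weighting, and `lam = 0.5` to a shifted, scaled square-root weighting, which is why the scheme generalizes those fixed formulas.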


A Question Type Classifier Using a Support Vector Machine (지지 벡터 기계를 이용한 질의 유형 분류기)

  • An, Young-Hun;Kim, Hark-Soo;Seo, Jung-Yun
    • Annual Conference on Human and Language Technology / 2002.10e / pp.129-136 / 2002
  • To implement a high-performance question-answering system, a question-type classifier is needed that can identify the user's intent regardless of the difficulty of the question. In this paper, we propose a question-type classifier that uses text categorization techniques. The classification proceeds as follows. First, features are extracted from the user's question using various kinds of information such as lexical items, parts of speech, and semantic markers. A sliding-window technique is used in this step to reflect the syntactic properties of the question. A chi-square statistic then selects only the useful features from the large candidate set. The selected features are represented in a vector space model, and a support vector machine (SVM), one of the standard text categorization methods, classifies the question type from this representation. By applying SVM-based automatic text categorization to question-type classification, the proposed system achieved a high classification accuracy of 86.4%. Moreover, because the classifier is built statistically, it eliminates the manual work of writing rules such as lexico-syntactic patterns, and it guarantees stable processing and fast portability when the application domain changes.
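The sliding-window feature extraction described above might look like the sketch below; the window size and the n-gram joining format are assumptions for illustration.

```python
def sliding_window_features(tokens, window=2):
    # Slide a window over the token sequence and emit every contiguous
    # n-gram up to the window size as a candidate feature, so that local
    # word order (a rough proxy for syntax) is preserved.
    feats = []
    for n in range(1, window + 1):
        for i in range(len(tokens) - n + 1):
            feats.append(" ".join(tokens[i:i + n]))
    return feats
```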


Analysis of normalization effect for earthquake events classification (지진 이벤트 분류를 위한 정규화 기법 분석)

  • Zhang, Shou;Ku, Bonhwa;Ko, Hansoek
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.130-138 / 2021
  • This paper presents an effective architecture obtained by applying various normalization techniques to a Convolutional Neural Network (CNN) for seismic event classification. Normalization techniques can not only improve the learning speed of neural networks but also provide robustness to noise. We analyze the effect of input data normalization and hidden-layer normalization on the deep learning model for seismic event classification, and derive an effective model through experiments varying the structure of the hidden layers to which normalization is applied. Across these experiments, the model applying input data normalization together with weight normalization on the first hidden layer showed the most stable performance improvement.
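The two normalizations named in the best model can be sketched in plain Python rather than a deep learning framework; these are minimal illustrations, not the paper's code.

```python
import math

def weight_norm(v, g):
    # Weight normalization: w = g * v / ||v||, decoupling the direction of
    # the weight vector v from its magnitude g (a learned scalar).
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

def zscore(xs):
    # Input data normalization: standardize to zero mean, unit variance.
    mean = sum(xs) / len(xs)
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
    return [(x - mean) / std for x in xs]
```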

An Efficient Deep Learning Ensemble Using a Distribution of Label Embedding

  • Park, Saerom
    • Journal of the Korea Society of Computer and Information / v.26 no.1 / pp.27-35 / 2021
  • In this paper, we propose a new stacking ensemble framework for deep learning models that reflects the distribution of label embeddings. The framework consists of two phases: training the baseline deep learning classifier, and training sub-classifiers based on the clustering results of the label embeddings. It aims to divide a multi-class classification problem into smaller sub-problems according to these clusters. Clustering is conducted on the label embeddings obtained from the weights of the last layer of the baseline classifier. After clustering, sub-classifiers are constructed to classify the sub-classes within each cluster. The experimental results show that the label embeddings reflect the relationships between classification labels well, and that the proposed ensemble framework can improve classification performance on the CIFAR-100 dataset.
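The clustering step could be sketched with a minimal k-means over the rows of the last-layer weight matrix, each row taken as a label embedding; the initialization and distance choice here are assumptions, not the paper's exact procedure.

```python
def kmeans(vectors, k, iters=20):
    # Minimal k-means: initialize centroids from the first k vectors, then
    # alternate assignment and centroid update for a fixed number of rounds.
    def nearest(v, cents):
        return min(range(len(cents)),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(v, cents[c])))
    centroids = [list(v) for v in vectors[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[nearest(v, centroids)].append(v)
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return [nearest(v, centroids) for v in vectors]
```

Each resulting cluster of labels would then get its own sub-classifier.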

An Experimental Study on Feature Selection Using Wikipedia for Text Categorization (위키피디아를 이용한 분류자질 선정에 관한 연구)

  • Kim, Yong-Hwan;Chung, Young-Mee
    • Journal of the Korean Society for Information Management / v.29 no.2 / pp.155-171 / 2012
  • In text categorization, core terms of an input document are hardly selected as classification features if they do not occur in the training document set. Moreover, synonymous terms denoting the same concept are usually treated as different features. This study aims to improve text categorization performance by integrating synonyms into a single feature and by using Wikipedia to replace input terms absent from the training document set with the most similar term that does occur in the training documents. For the selection of classification features, experiments were performed under various settings composed of three conditions: the use of category information for non-training terms, the part of Wikipedia used for measuring term-term similarity, and the type of similarity measure. The categorization performance of a kNN classifier improved by 0.35~1.85% in $F_1$ value in all experimental settings when non-training terms were replaced by the training term with the highest similarity above a threshold value. Although the improvement is not as large as expected, several semantic as well as structural devices of Wikipedia could be used to select more effective classification features.
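The replacement step can be sketched as below; `similarity` stands in for the Wikipedia-derived term-term similarity, and the threshold and names are illustrative assumptions.

```python
def replace_unseen_terms(doc_terms, training_vocab, similarity, threshold=0.5):
    # Keep terms that appear in the training vocabulary; map each unseen
    # term to its most similar training term, dropping it when even the
    # best similarity falls below the threshold.
    out = []
    for term in doc_terms:
        if term in training_vocab:
            out.append(term)
            continue
        best = max(training_vocab, key=lambda v: similarity.get((term, v), 0.0))
        if similarity.get((term, best), 0.0) >= threshold:
            out.append(best)
    return out
```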

A Study on the Classification of Hand-written Korean Character Types using Hough Transform (Hough Transform을 이용한 한글 필기체 형식 분류에 관한 연구)

  • 구하성;고경화
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.10 / pp.1991-2000 / 1994
  • In this paper, an algorithm that classifies hand-written Korean characters into six types is suggested for the recognition system. After thinning and truncation for noise reduction, the input images are normalized to $64\times64$ size. The six-type classification consists of preliminary and secondary classification stages using the learning algorithm of a multi-layer perceptron. A subblock Hough transform is used as the local feature and a sampling Hough transform as the global feature. Experiments were conducted on 1800 characters, written 31 times per type by 10 persons. A 90% recognition rate was obtained, with the preliminary classification detecting the final consonant and the secondary classification detecting the vowels.
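A Hough transform for line features accumulates votes in (rho, theta) space; a minimal accumulator over foreground pixel coordinates is sketched below (the subblock and sampling variants in the paper apply this idea to image regions).

```python
import math

def hough_accumulator(points, n_theta=180):
    # For each foreground pixel (x, y), vote for every line
    # rho = x*cos(theta) + y*sin(theta) passing through it; strong lines
    # show up as heavily voted (rho, theta) bins.
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

def strongest_line(points):
    # Return the (rho, theta-index) bin with the most votes.
    acc = hough_accumulator(points)
    return max(acc, key=acc.get)
```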


A Text Categorization Method Improved by Removing Noisy Training Documents (오류 학습 문서 제거를 통한 문서 범주화 기법의 성능 향상)

  • Han, Hyoung-Dong;Ko, Young-Joong;Seo, Jung-Yun
    • Journal of KIISE: Software and Applications / v.32 no.9 / pp.912-919 / 2005
  • When we apply binary classification to multi-class text categorization, we generally use the One-Against-All method. However, this method has a problem: the documents of the negative set are not labeled by humans, so the training data can include many noisy documents. In this paper, we propose applying the Sliding Window technique and the EM algorithm to binary text classification to solve this problem. We improve binary text classification by extracting noisy documents from the training data with the Sliding Window technique and re-assigning the categories of these documents using the EM algorithm.
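The sliding-window noise detection could be sketched as follows, scanning the negative documents' positive-class scores; the window size, threshold, and score values are illustrative assumptions (the paper re-assigns the flagged documents' categories with EM rather than simply dropping them).

```python
def flag_noisy_negatives(scores, window=3, threshold=0.5):
    # Slide a window over the negative documents' positive-class scores and
    # flag every document covered by a window whose mean score exceeds the
    # threshold -- such documents likely belong to the positive class.
    flagged = set()
    for i in range(len(scores) - window + 1):
        win = scores[i:i + window]
        if sum(win) / window > threshold:
            flagged.update(range(i, i + window))
    return sorted(flagged)
```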

An Experimental Study on Feature Ranking Schemes for Text Classification (텍스트 분류를 위한 자질 순위화 기법에 관한 연구)

  • Pan Jun Kim
    • Journal of the Korean Society for Information Management / v.40 no.1 / pp.1-21 / 2023
  • This study reviewed the performance of ranking schemes as an efficient feature selection method for text classification. Until now, feature ranking schemes have mostly been based on document frequency, and relatively few cases have used term frequency. Therefore, the performance of single ranking metrics using term frequency and document frequency individually was examined as a feature selection method for text classification, and then the performance of combination ranking schemes using both was reviewed. Specifically, a classification experiment was conducted in an environment using two data sets (Reuters-21578, 20NG) and five classifiers (SVM, NB, ROC, TRA, RNN), and 5-fold cross-validation and t-tests were applied to secure the reliability of the results. As a result, among single ranking schemes, the document frequency-based metric (chi) showed good performance overall. In addition, there was no significant difference between the highest-performing single ranking and the combination ranking schemes. Therefore, in an environment where sufficient training documents can be secured, it is more efficient to use a single document frequency-based ranking metric (chi) as the feature selection method for text classification.

Analysis and Alternative of Classification Errors in Public Libraries (공공도서관 분류오류의 실증적 분석과 대안)

  • 윤희윤
    • Journal of Korean Library and Information Science Society / v.34 no.1 / pp.43-65 / 2003
  • Libraries have long experience in applying classification schemes to resources, chiefly books. The ultimate goals of classification are the systematic shelving of books and ease of user access. To achieve these goals, books on a particular field of knowledge should be shelved together and near each other; when they are not, a classification error has occurred. The focus of this study is therefore on analyzing classification errors in Korean public libraries and suggesting some alternatives.


Analysis of Network Traffic using Classification and Association Rule (데이터 마이닝의 분류화와 연관 규칙을 이용한 네트워크 트래픽 분석)

  • 이창언;김응모
    • Journal of the Korea Society for Simulation / v.11 no.4 / pp.15-23 / 2002
  • As the network environment and application services have recently become more complex and diverse, the need for effective network management has grown. In this paper we introduce a scheme that extracts useful information for network management by analyzing traffic data in user login files. For this purpose we use classification and association rules based on the episode concept in data mining. Since login data inherently has time-series characteristics, conventional data mining algorithms cannot be applied directly. We generate virtual transactions, classify the transactions above a threshold value within a time window, and simulate the classification algorithm.
