• Title/Summary/Keyword: Document Feature Extraction


2-D Conditional Moment for Recognition of Deformed Letters

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.6 no.2
    • /
    • pp.16-22
    • /
    • 2001
  • In this paper we propose a new scheme for recognition of deformed letters by extracting feature vectors based on Gibbs distributions, which are well suited for representing spatial continuity. The extracted feature vectors are comprised of 2-D conditional moments, which are invariant under translation, rotation, and scaling of an image. The algorithm for recognition of deformed letters contains two parts: feature vector extraction and recognition. (i) We extract a feature vector consisting of improved 2-D conditional moments on the basis of the estimated conditional Gibbs distribution of an image. (ii) In the recognition phase, minimization of a discrimination cost function for a deformed letter determines the corresponding template pattern. To evaluate the performance of the proposed scheme, recognition experiments with generated documents were conducted on a workstation. Experimental results reveal that the proposed scheme achieves a recognition rate of over 96%.

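The moment features described above are computed from an estimated conditional Gibbs distribution, which the abstract does not specify in detail. As a rough illustration of the invariance idea only, the sketch below computes translation- and scale-normalized 2-D central moments of a binary letter image in plain NumPy; it omits the Gibbs estimation entirely, and full rotation invariance would additionally require combining these values into Hu-style invariants.

    import numpy as np

    def normalized_central_moments(image, max_order=3):
        """Translation- and scale-normalized 2-D central moments of a binary/grayscale image.
        A simplified stand-in for the paper's conditional moments (no Gibbs estimation)."""
        img = np.asarray(image, dtype=float)
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        m00 = img.sum()
        if m00 == 0:
            raise ValueError("empty image")
        cx = (xs * img).sum() / m00        # centroid removes translation dependence
        cy = (ys * img).sum() / m00
        feats = {}
        for p in range(max_order + 1):
            for q in range(max_order + 1):
                if 2 <= p + q <= max_order:
                    mu_pq = (((xs - cx) ** p) * ((ys - cy) ** q) * img).sum()
                    # dividing by m00**(1 + (p+q)/2) removes scale dependence
                    feats[(p, q)] = mu_pq / m00 ** (1 + (p + q) / 2.0)
        return feats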

An Improved 2-D Moment Algorithm for Pattern Classification

  • Yoon, Myoung-Young
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.4 no.2
    • /
    • pp.1-6
    • /
    • 1999
  • We propose a new algorithm for pattern classification by extracting feature vectors based on Gibbs distributions, which are well suited for representing the characteristics of an image. The extracted feature vectors are comprised of 2-D moments that are invariant under translation, rotation, and scaling of the image and are less sensitive to noise. The implementation contains two parts: feature extraction and pattern classification. First, we extract a feature vector consisting of improved 2-D moments on the basis of the estimated Gibbs distribution. Next, in the classification phase, minimization of a discrimination cost function for a specific pattern determines the corresponding template pattern. To evaluate the performance of the proposed scheme, classification experiments with training document sets of characters were carried out on a SUN ULTRA 10 workstation. Experimental results reveal that the proposed scheme achieves a classification rate of over 98%.


Conditional Moment-based Classification of Patterns Using Spatial Information Based on Gibbs Random Fields (깁스확률장의 공간정보를 갖는 조건부 모멘트에 의한 패턴분류)

  • Kim, Ju-Sung;Yoon, Myoung-Young
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1636-1645
    • /
    • 1996
  • In this paper we propose a new scheme for conditional two-dimensional (2-D) moment-based classification of patterns on the basis of Gibbs random fields, which are well suited for representing the spatial continuity that is characteristic of most images. The implementation contains two parts: feature extraction and pattern classification. First, we extract a feature vector consisting of conditional 2-D moments on the basis of the estimated Gibbs parameters; note that the extracted feature vectors are invariant under translation, rotation, and scaling of the patterns. In the classification phase, minimization of a discrimination cost function for a specific pattern determines the corresponding template pattern. To evaluate the performance of the proposed scheme, classification experiments with training document sets of characters were carried out on a 486 66 MHz PC. Experiments reveal that the proposed scheme achieves a classification rate of over 94%.

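All three moment-based papers above classify a pattern by minimizing a discrimination cost function against template patterns, but the abstracts do not give the cost function itself. The sketch below uses squared Euclidean distance between feature vectors as a purely hypothetical stand-in for that cost; extract_features in the usage comment stands for any of the moment extractors described above.

    import numpy as np

    def classify_by_min_cost(feature_vec, templates):
        """Assign a pattern to the template whose discrimination cost is smallest.
        Squared Euclidean distance is a hypothetical stand-in for the papers' cost."""
        costs = {label: float(np.sum((np.asarray(feature_vec) - np.asarray(tmpl)) ** 2))
                 for label, tmpl in templates.items()}
        return min(costs, key=costs.get), costs

    # Usage (hypothetical): templates maps class labels (e.g. letters) to reference feature vectors.
    # label, costs = classify_by_min_cost(extract_features(letter_image), templates)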

Cross-Domain Text Sentiment Classification Method Based on the CNN-BiLSTM-TE Model

  • Zeng, Yuyang;Zhang, Ruirui;Yang, Liang;Song, Sujuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.818-833
    • /
    • 2021
  • To address the problems of low precision, insufficient feature extraction, and poor contextual modeling in existing text sentiment analysis methods, a hybrid CNN-BiLSTM-TE (convolutional neural network, bidirectional long short-term memory, and topic extraction) model was proposed. First, Chinese text data were converted into vectors through transfer learning with Word2Vec. Second, local features were extracted by the CNN model. Then, contextual information was extracted by the BiLSTM network and the emotional tendency was obtained using softmax. Finally, topics were extracted by term frequency-inverse document frequency and K-means. Compared with the CNN, BiLSTM, and gated recurrent unit (GRU) models, the CNN-BiLSTM-TE model's F1-score was higher by 0.0147, 0.006, and 0.0052, respectively; compared with the CNN-LSTM, LSTM-CNN, and BiLSTM-CNN models, its F1-score was higher by 0.0071, 0.0038, and 0.0049, respectively. Experimental results showed that the CNN-BiLSTM-TE model can effectively improve various indicators in application. Lastly, scalability was verified on a takeaway dataset, which has great value in practical applications.
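As a rough sketch of the CNN and BiLSTM stages described in this abstract, the Keras model below is one plausible layout; the layer sizes and vocabulary size are illustrative guesses, the pretrained Word2Vec embedding transfer is only indicated in a comment, and the TF-IDF/K-means topic-extraction stage is not shown.

    from tensorflow.keras import layers, models

    def build_cnn_bilstm(vocab_size=20000, emb_dim=300, n_classes=2):
        model = models.Sequential([
            layers.Embedding(vocab_size, emb_dim),                      # pretrained Word2Vec weights could be loaded here
            layers.Conv1D(128, 3, padding="same", activation="relu"),   # local features (CNN stage)
            layers.MaxPooling1D(pool_size=2),
            layers.Bidirectional(layers.LSTM(64)),                      # contextual information (BiLSTM stage)
            layers.Dense(n_classes, activation="softmax"),              # emotional tendency
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model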

A Korean Emotion Features Extraction Method and Their Availability Evaluation for Sentiment Classification (감정 분류를 위한 한국어 감정 자질 추출 기법과 감정 자질의 유용성 평가)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Korean Journal of Cognitive Science
    • /
    • v.19 no.4
    • /
    • pp.499-517
    • /
    • 2008
  • In this paper, we propose an effective emotion feature extraction method for Korean and evaluate the availability of the extracted features for sentiment classification. Korean emotion features are expanded from several representative emotion words and play an important role in building an effective sentiment classification system. First, synonym information from an English word thesaurus is used to extract effective emotion features, and the extracted English emotion features are then translated into Korean. To evaluate the extracted Korean emotion features, we represent each document using the extracted features and classify it with an SVM (Support Vector Machine). In the experimental results, the sentiment classification system using the extracted Korean emotion features achieved better performance (by 14.1%) than a system using content-word-based features, which are generally used in common text classification systems.

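The evaluation step described above, representing each document by its emotion features and classifying it with an SVM, might look roughly like the scikit-learn sketch below; the emotion lexicon is a placeholder, and the thesaurus-expansion and translation steps that actually build it are not reproduced.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    emotion_lexicon = ["기쁘다", "슬프다", "화나다", "두렵다"]   # placeholder emotion features

    clf = make_pipeline(
        CountVectorizer(vocabulary=emotion_lexicon),   # documents represented only by emotion features
        LinearSVC(),
    )
    # clf.fit(train_docs, train_labels); predictions = clf.predict(test_docs)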

A Comparative Study on Using SentiWordNet for English Twitter Sentiment Analysis (영어 트위터 감성 분석을 위한 SentiWordNet 활용 기법 비교)

  • Kang, In-Su
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.4
    • /
    • pp.317-324
    • /
    • 2013
  • Twitter sentiment analysis classifies a tweet (message) into a positive or negative sentiment class. This study deals with SentiWordNet (SWN)-based Twitter sentiment analysis. SWN is a sentiment dictionary in which each sense of an English word has a positive and a negative sentiment strength. There has been a variety of SWN-based sentiment feature extraction methods, which typically first determine the sentiment orientation (SO) of a term in a document and then decide the SO of the document from such terms' SO values. For example, for the SO of a term, some calculated the maximum or average of the sentiment scores of its senses, while others computed the average of the difference between positive and negative sentiment scores. For the SO of a document, many researchers employ the maximum or average of the terms' SO values. In addition, the above procedure may be applied to the whole set (adjective, adverb, noun, and verb) of parts of speech or to a subset. This work provides a comparative study of SWN-based sentiment feature extraction schemes with performance evaluation on a well-known Twitter dataset.
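One of the SWN-based variants compared in this study, term SO as the average of (positive - negative) scores over a word's senses and document SO as the average of term SO values, can be sketched with NLTK's SentiWordNet corpus reader as follows; the other variants (maximum over senses or over terms, part-of-speech subsets) follow the same pattern.

    from nltk.corpus import sentiwordnet as swn   # needs nltk.download("sentiwordnet") and nltk.download("wordnet")

    def term_so(word, pos=None):
        senses = list(swn.senti_synsets(word, pos))
        if not senses:
            return 0.0
        return sum(s.pos_score() - s.neg_score() for s in senses) / len(senses)

    def document_so(tokens):
        scores = [term_so(t) for t in tokens]
        scores = [s for s in scores if s != 0.0]
        return sum(scores) / len(scores) if scores else 0.0   # > 0 positive, < 0 negative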

Evaluation Model for Gab Analysis Between NCS Competence Unit Element and Traditional Curriculum (NCS 능력단위 요소와 기존 교육과정 간 갭 분석을 위한 평가모델)

  • Kim, Dae-kyung;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology
    • /
    • v.19 no.4
    • /
    • pp.338-344
    • /
    • 2015
  • The National Competency Standards (NCS) systematize and standardize the skills required to perform a job. For the NCS, learning modules have been developed and standardized by competence unit element, which is the unit of a specific job competency. Gap analysis compares the existing curriculum against competence unit elements so that the curriculum can be used in education and training. Existing gap analysis has been performed subjectively by experts, which raises problems of subjective judgment, lack of accuracy, and temporal and spatial inefficiency caused by psychological factors. This paper proposes an automated evaluation model to resolve the problems of subjective evaluation. It uses index term extraction and term frequency-inverse document frequency for feature value extraction, and a cosine similarity algorithm for gap analysis between the existing curriculum and competence unit elements. A similarity mapping table between the existing curriculum and competence unit elements is presented. The evaluation model in this paper should be complemented by improved algorithms with respect to structural characteristics and speed.
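The automated comparison described above, TF-IDF features and cosine similarity between curriculum descriptions and NCS competence unit elements, reduces to something like the scikit-learn sketch below; the document contents are placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    curriculum = ["데이터베이스 설계 및 SQL 실습 ...", "자료구조와 알고리즘 ..."]   # existing course descriptions (placeholders)
    ncs_elements = ["데이터베이스 구현하기", "애플리케이션 설계하기"]               # competence unit elements (placeholders)

    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(curriculum + ncs_elements)
    sim = cosine_similarity(tfidf[:len(curriculum)], tfidf[len(curriculum):])
    # sim[i, j] is the similarity between course i and competence unit element j (the mapping table)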

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.161-177
    • /
    • 2019
  • In this paper, we study how to improve answer extraction in a question-answering (QA) system by using sentence dependency parsing results. A QA system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents; various studies have been conducted on both. To improve the performance of answer extraction, the grammatical information of sentences must be reflected accurately. Because Korean has a free word order and frequently omits sentence components, dependency parsing is a good way to analyze Korean syntax. Therefore, in this study, we improve answer extraction by adding features generated from dependency parsing to the inputs of the answer extraction model (bidirectional LSTM-CRF). We compare the performance of the model when only basic word features generated without dependency parsing are used as input with its performance when the Eojeol tag feature and the dependency graph embedding feature are added. Since dependency parsing is performed on the Eojeol, the basic sentence unit separated by spaces, the tag information of each Eojeol is obtained as a result of parsing; the Eojeol tag feature refers to this tag information. Generating the dependency graph embedding consists of building the dependency graph from the parsing result and learning an embedding of the graph. From the parsing result, a graph is constructed with Eojeols as nodes, dependencies between Eojeols as edges, and Eojeol tags as node labels; depending on whether the direction of the dependency relation is considered, either a directed or an undirected graph is generated. To obtain the graph embedding, we used Graph2Vec, which finds the embedding of a graph from the subgraphs that constitute it. The maximum path length between nodes can be specified when finding the subgraphs: if it is 1, the graph embedding is generated only from direct dependencies between Eojeols, and as the maximum path length increases, indirect dependencies are also included. In the experiments, the maximum path length between nodes was varied from 1 to 3, with and without considering the direction of dependency, and answer extraction performance was measured. The results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction performance. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph embedding was generated with a maximum path length of 1 in the subgraph extraction step of Graph2Vec. From these experiments, we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words.
The significance of this study is as follows. First, we improved the performance of answer extraction by adding features derived from dependency parsing, taking into account the characteristics of Korean, which has a free word order and frequently omits sentence components. Second, we generated features from the dependency parsing result by a learning-based graph embedding method without manually defining patterns of dependency between Eojeols. Future research directions are as follows. In this study, the features generated from dependency parsing were applied only to the answer extraction model. In the future, if performance gains are confirmed by applying these features to other natural language processing models such as sentiment analysis or named entity recognition, their validity can be verified more accurately.
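The graph construction described in this abstract, with Eojeols as nodes, dependencies as edges, and Eojeol tags as node labels, either directed or undirected, can be sketched with networkx as below; the parser output format is hypothetical, and the commented-out Graph2Vec call assumes the karateclub library's API rather than the authors' own code.

    import networkx as nx

    def build_dep_graph(eojeols, heads, tags, directed=True):
        """eojeols: Eojeol strings; heads[i]: index of the head of Eojeol i (-1 for the root);
        tags[i]: the Eojeol tag. The argument layout is a hypothetical parser output format."""
        g = nx.DiGraph() if directed else nx.Graph()
        for i, tag in enumerate(tags):
            g.add_node(i, feature=tag)      # node label = Eojeol tag
        for i, h in enumerate(heads):
            if h >= 0:
                g.add_edge(i, h)            # edge = dependency relation
        return g

    # graphs = [build_dep_graph(*parse(sentence)) for sentence in sentences]
    # An off-the-shelf Graph2Vec implementation can then embed the graphs, e.g. (assumed API):
    # from karateclub import Graph2Vec
    # model = Graph2Vec(); model.fit(graphs); embeddings = model.get_embedding()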

Performance Comparison of Keyword Extraction Methods for Web Document Cluster using Suffix Tree Clustering (Suffix Tree를 이용한 웹 문서 클러스터의 제목 생성 방법 성능 비교)

  • 염기종;권영식
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2002.11a
    • /
    • pp.328-335
    • /
    • 2002
  • With recent advances in Internet technology, a vast amount of material is scattered across the web. Users rely on keyword search to find the information they want, but keyword search retrieves results based only on the fragmentary terms users enter and presents them as a list ranked by the search engine's own criteria, so the results may differ from what users expect. There is therefore a growing need for an environment that reduces search time and makes searching more convenient. In this paper, related documents are grouped using a suffix tree clustering algorithm, and to generate a title for each cluster, document frequency, term frequency with inverse document frequency, the chi-square test, mutual information, and entropy methods are compared to determine which is most effective. The comparison shows that document frequency performs about 10% better than TF-IDF.

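Two of the labeling criteria compared above, document frequency and TF-IDF over the documents of a single cluster, can be sketched as follows; the suffix tree clustering itself and the chi-square, mutual-information, and entropy criteria are omitted.

    import math
    from collections import Counter

    def score_label_candidates(cluster_docs, all_docs):
        """cluster_docs, all_docs: lists of token lists. Returns DF and TF-IDF scores
        for each term appearing in the cluster, as candidate cluster titles."""
        n_all = len(all_docs)
        df_all = Counter(t for doc in all_docs for t in set(doc))
        df_cluster = Counter(t for doc in cluster_docs for t in set(doc))
        tf_cluster = Counter(t for doc in cluster_docs for t in doc)
        scores = {}
        for term, df_c in df_cluster.items():
            idf = math.log(n_all / (1 + df_all[term]))
            scores[term] = {"df": df_c, "tfidf": tf_cluster[term] * idf}
        return scores   # the top-scoring term(s) under either criterion become the cluster title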

Feature Selection and Extraction for Document Classifiers for IT Documents Based on SVM (SVM기반 정보기술 문서분류를 위한 특성 선택 및 추출 기법)

  • 강윤희
    • Proceedings of the KAIS Fall Conference
    • /
    • 2001.11a
    • /
    • pp.75-78
    • /
    • 2001
  • This paper describes feature selection and extraction techniques for automatic classification of web documents. With the recent rapid growth and spread of the Internet, the amount of information provided through e-mail and the web is increasing exponentially, raising the need for efficient document classification. In this paper, documents are classified by training an SVM on a term set extracted from documents in a web directory. The experimental documents were collected from itfind, a directory service system for the information and telecommunications field, and experiments were conducted under three scenarios, computing recall, precision, and misclassification rate for each scenario. By evaluating the effect of noise introduced during training vector construction on the classification of documents in other classes, the experiments show that the SVM-based document classification technique is robust.
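A pipeline of the kind this abstract describes, terms extracted from web-directory documents, a feature-selection step, and an SVM classifier, might be sketched in scikit-learn as follows; chi-square selection and the parameter values are illustrative assumptions, and the itfind documents are not reproduced.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    pipeline = make_pipeline(
        TfidfVectorizer(),              # term set extracted from the web-directory documents
        SelectKBest(chi2, k=1000),      # keep the most class-discriminative terms (k is illustrative)
        LinearSVC(),
    )
    # pipeline.fit(train_docs, train_labels)
    # Per-scenario recall, precision, and misclassification rate can then be computed
    # with sklearn.metrics.classification_report on held-out documents.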