• Title/Summary/Keyword: 범주자질 (categorial features)

93 search results

A Reconsideration on the Efficiency of the Extended Projection Principle (데이터분석을 통한 확대투사원리의 효율성 제고)

  • Joo, Chi-Woon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.10
    • /
    • pp.219-228
    • /
    • 2011
  • The main concern is to suggest an alternative to the basic notion of the Extended Projection Principle (henceforth, EPP), which has changed slightly since its initial appearance. The EPP depended on Case and theta-roles in the era of early generative grammar, whereas it was reduced to the categorial feature [D] alone under minimalism. Various data, such as Locative Inversion constructions, there-expletive constructions, and sentences related to binding theory, are examined to motivate a plausible alternative. As a conclusion, it is argued that the SPEC position of the inflectional clause should be filled with a maximally projected lexical item, a conclusion reached by analyzing a wide range of linguistic data.

Secondary Grammaticalization and English Adverbial Tense (이차적 문법화와 영어부사의 시제)

  • Kim, Yangsoon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.115-121
    • /
    • 2020
  • The primary aim of this paper is to discuss the historical development, or grammaticalization, of the English adverbial -ly suffix and to provide a diachronic analysis of manner adverbs and sentence adverbs from the perspective of secondary grammaticalization. Grammaticalization includes both primary grammaticalization, from a lexical to a grammatical status, and secondary grammaticalization, from a less grammatical to a more grammatical status. The emergence of manner adverbs is due to the primary grammaticalization of the OE adjectival suffix -lic into the ME adverbial suffix -ly. In contrast, the emergence of sentence adverbs is due to the secondary grammaticalization from manner adverbs in the VP domain to sentence adverbs in the TP domain, with grammatical features of tense and modality. This paper concludes that the secondary grammaticalization of the English adverbial -ly suffix includes the change from manner adverbs to sentence adverbs, which acquire a new grammatical function of tense and modality.

A Study on Book Categorization in Social Sciences Using kNN Classifiers and Table of Contents Text (목차 정보와 kNN 분류기를 이용한 사회과학 분야 도서 자동 분류에 관한 연구)

  • Lee, Yong-Gu
    • Journal of the Korean Society for Information Management
    • /
    • v.37 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • This study applied automatic classification using table of contents (TOC) text to 6,253 social science books from a newly arrived list collected by a university library. The k-nearest neighbors (kNN) algorithm was used as the classifier, and the ten divisions on the second level of the DDC's main class 300, as assigned to the books by the library, were used as classes (labels). The features were keywords extracted from the titles and TOCs of the books; the TOCs were obtained through the OpenAPI of an Internet bookstore. The TOC features improved both classification recall and precision, and their rich vocabulary was shown to reduce the overfitting problem caused by imbalanced data. Because law and education have high topic specificity within the social sciences, title features alone can yield good classification performance in those fields. A minimal code sketch of this kind of kNN setup follows below.
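
The abstract above describes a fairly standard text-classification recipe, so a minimal sketch may help make it concrete: TF-IDF features over title plus TOC text and a cosine-distance kNN classifier. This uses scikit-learn and invented toy records, not the authors' 6,253-book data or actual parameter settings.

```python
# Sketch of kNN classification over title + table-of-contents text.
# Toy data and k value are placeholders, not the study's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Toy records: (title + TOC keywords, DDC second-level division label)
books = [
    ("Introduction to microeconomics: supply demand markets prices", "330"),
    ("Principles of constitutional law: courts statutes rights", "340"),
    ("Curriculum design and classroom assessment for teachers", "370"),
    ("International trade theory: tariffs exchange rates growth", "330"),
]
texts, labels = zip(*books)

# TF-IDF features over the concatenated title + TOC text,
# then a cosine-distance kNN classifier (k kept small for the toy data).
model = make_pipeline(
    TfidfVectorizer(),
    KNeighborsClassifier(n_neighbors=3, metric="cosine"),
)
model.fit(texts, labels)

print(model.predict(["Monetary policy, inflation and labor markets"]))
```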

A Study on Statistical Feature Selection with Supervised Learning for Word Sense Disambiguation (단어 중의성 해소를 위한 지도학습 방법의 통계적 자질선정에 관한 연구)

  • Lee, Yong-Gu
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.22 no.2
    • /
    • pp.5-25
    • /
    • 2011
  • This study aims to identify the most effective statistical feature selection method and context window size for word sense disambiguation with supervised methods. Features were selected by four different methods: information gain, document frequency, chi-square, and relevancy. The weight comparison showed that identifying the most appropriate features can improve word sense disambiguation performance, and information gain performed best among the four methods. The SVM classifier was largely unaffected by feature selection and performed better with larger feature sets and context sizes; the Naive Bayes classifier performed best with about 10 percent of the feature set, and the kNN classifier with under 10 percent. When feature selection is applied to word sense disambiguation, either a small feature set with a larger context window or a large feature set with a smaller context window yields the best performance improvements. A sketch of chi-square-based feature selection in this spirit follows below.
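
As an illustration of statistical feature selection for supervised WSD, here is a hedged sketch using chi-square (one of the four methods compared) to keep a small percentage of features before a Naive Bayes classifier; the contexts, sense labels, and percentile are placeholders, not the study's corpus or settings.

```python
# Sketch of statistical feature selection for a supervised WSD classifier,
# with chi-square as one example of the four methods compared above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy "context window" texts around an ambiguous word, with sense labels.
contexts = [
    "deposit money at the bank before noon",
    "the bank raised interest rates again",
    "fishing along the river bank at dawn",
    "mud covered the bank of the stream",
]
senses = ["finance", "finance", "river", "river"]

# Keep only the top 10% of features by chi-square score, then classify.
model = make_pipeline(
    CountVectorizer(),
    SelectPercentile(chi2, percentile=10),
    MultinomialNB(),
)
model.fit(contexts, senses)
print(model.predict(["interest paid by the bank"]))
```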

A Study on Automatic Text Categorization of Web-Based Query Using Synonymy List (유사어 사전을 이용한 웹기반 질의문의 자동 범주화에 관한 연구)

  • Nam, Young-Joon;Kim, Gyu-Hwan
    • Journal of Information Management
    • /
    • v.35 no.4
    • /
    • pp.81-105
    • /
    • 2004
  • In this study, automatic text categorization of web-based queries was implemented. A chi-square (χ²) feature selection method combined with a Support Vector Machine was used to test the efficiency of text categorization on queries, with a model that uses a synonymy list; 713 synonyms were extracted manually from the test documents. Compared with the run without synonyms, assigning synonyms decreased the precision ratio slightly (by 0.01%) while the recall ratio improved by 8.53%; the F1 measure increased by 4.58%, and the standard deviation between the recall and precision ratios improved by 18.39%. A hedged sketch of synonym normalization feeding a χ²/SVM pipeline follows below.
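
A hedged sketch of the general recipe in the abstract: queries are normalized against a small synonym list, vectorized, filtered with the χ² statistic, and classified with a linear SVM. The synonym map, queries, labels, and k are invented for illustration and are not the study's 713-entry synonymy list.

```python
# Sketch of query categorization with a synonym list, chi-square feature
# selection and a linear SVM; all data below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Map query terms onto canonical synonyms before vectorization.
SYNONYMS = {"laptop": "computer", "notebook": "computer", "film": "movie"}

def normalize(query: str) -> str:
    return " ".join(SYNONYMS.get(tok, tok) for tok in query.lower().split())

queries = ["cheap laptop deals", "notebook repair service",
           "new film releases", "movie showtimes tonight"]
labels = ["shopping", "shopping", "entertainment", "entertainment"]

model = make_pipeline(
    TfidfVectorizer(preprocessor=normalize),
    SelectKBest(chi2, k=5),
    LinearSVC(),
)
model.fit(queries, labels)
print(model.predict(["computer accessories sale"]))
```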

Automatic Document Classification Based on Word Frequency Weight (단어 빈도 가중치를 이용한 자동 문서 분류)

  • Noh, Hyun-A;Kim, Min-Soo;Kim, Soo-Hyung;Park, Hyuk-Ro
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.11a
    • /
    • pp.581-584
    • /
    • 2002
  • This paper proposes a method for classifying documents automatically based on keyword frequencies within each category. An automatic document classification system must assign appropriate weights to the classification features in order to compare documents, and here the feature weights are learned from manually classified newspaper articles. Whereas existing term-weighting methods assign weights to the most frequent nouns in each category in order of frequency, the proposed method computes weights by considering not only how often a noun occurs but also where it occurs. Using this modified word frequency weighting, classification accuracy improves by more than 9% over the conventional word frequency weighting method. A sketch of position-aware frequency weighting in this spirit follows below.

  • PDF
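
A minimal sketch of position-aware term weighting in the spirit of the abstract above: each term's count is boosted when it appears early in the document. The position bonus used here is an invented illustration, not the paper's actual formula.

```python
# Sketch of a term weight that combines raw frequency with occurrence
# position (earlier paragraphs count more). The bonus is illustrative only.
from collections import defaultdict

def position_weighted_counts(paragraphs):
    """Weight each term by frequency, boosted when it appears early."""
    weights = defaultdict(float)
    n = len(paragraphs)
    for i, paragraph in enumerate(paragraphs):
        position_bonus = 1.0 + (n - i) / n   # first paragraph weighs most
        for term in paragraph.lower().split():
            weights[term] += position_bonus
    return dict(weights)

doc = [
    "economy ministry announces budget plan",
    "opposition party criticizes the budget",
    "weather expected to improve next week",
]
print(position_weighted_counts(doc))
```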

A Semantic-Based Feature Expansion Approach for Improving the Effectiveness of Text Categorization by Using WordNet (문서범주화 성능 향상을 위한 의미기반 자질확장에 관한 연구)

  • Chung, Eun-Kyung
    • Journal of the Korean Society for Information Management
    • /
    • v.26 no.3
    • /
    • pp.261-278
    • /
    • 2009
  • Identifying optimal feature sets is crucial for improving the effectiveness of text categorization (TC). In this study, feature expansion experiments were conducted using author-provided keyword sets and article titles from typical scientific journal articles, with WordNet, a lexical database of English, as the expansion tool. Given this data set and lexical tool, the study shows that feature expansion through synonym relations significantly improves TC results. The experiments indicate that when feature sets are expanded with synonyms, the effectiveness of TC improves considerably regardless of the classifier used and of word sense disambiguation. A hedged sketch of WordNet-based synonym expansion follows below.
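
A hedged sketch of WordNet-based synonym expansion of a keyword feature set, assuming NLTK with the WordNet corpus installed; the keyword list is a placeholder, and no word sense disambiguation is applied, mirroring the abstract's observation that expansion helped even without it.

```python
# Sketch of expanding a feature set with WordNet synonyms.
# Requires NLTK with the WordNet corpus: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def expand_with_synonyms(keywords):
    """Return the original keywords plus their WordNet synonym lemmas."""
    expanded = set(keywords)
    for word in keywords:
        for synset in wn.synsets(word):
            for lemma in synset.lemma_names():
                expanded.add(lemma.replace("_", " ").lower())
    return sorted(expanded)

print(expand_with_synonyms(["categorization", "retrieval"]))
```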

A Question Type Classifier based on a Support Vector Machine for a Korean Question-Answering System (한국어 질의응답시스템을 위한 지지 벡터기계 기반의 질의유형분류기)

  • 김학수;안영훈;서정연
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.5_6
    • /
    • pp.466-475
    • /
    • 2003
  • To build an efficient question-answering (QA) system, a question type classifier is needed that can classify user queries into predefined categories regardless of the surface form of the question. In this paper, we propose a question type classifier using a Support Vector Machine (SVM). The classifier first extracts features such as lexical forms, parts of speech, and semantic markers from a user's question, selects important features with the χ² (chi-square) statistic, and represents the selected features as a vector. Finally, the SVM assigns the question to one of the predefined categories according to the extracted features. In the experiment, the proposed system achieved 86.4% accuracy. Because the system classifies question types precisely without any rules such as lexico-syntactic patterns, it is robust and easily portable to other domains. A hedged sketch of such an SVM question-type classifier follows below.
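
A hedged sketch of an SVM question-type classifier along the lines described above: simple lexical features are extracted per question, selected with the χ² statistic, and passed to a linear SVM. The questions, labels, and feature function are illustrative placeholders, not the paper's Korean QA data or full feature set (parts of speech and semantic markers are omitted here).

```python
# Sketch of an SVM question-type classifier with chi-square selection.
# Data and features below are invented for illustration.
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def features(question: str) -> dict:
    tokens = question.lower().rstrip("?").split()
    feats = {f"word={t}": 1 for t in tokens}
    feats["first_word=" + tokens[0]] = 1          # wh-word is a strong clue
    return feats

questions = ["Who wrote Hamlet?", "Where is the Louvre located?",
             "Who founded the company?", "Where was the treaty signed?"]
labels = ["PERSON", "LOCATION", "PERSON", "LOCATION"]

model = make_pipeline(
    DictVectorizer(),
    SelectKBest(chi2, k=8),
    LinearSVC(),
)
model.fit([features(q) for q in questions], labels)
print(model.predict([features("Who painted the ceiling?")]))
```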

A Web Page Categorization Model Based on Document Structural Information (문서 구조 정보에 기반한 웹 페이지 범주화 모델)

  • Jung, Sung-Hwa;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology
    • /
    • 1998.10c
    • /
    • pp.91-96
    • /
    • 1998
  • This paper presents a model that automatically classifies Internet web pages according to a subject category scheme, so that the advantages of subject-category-based web search can be exploited. In particular, HTML tags are used to reflect the intentions of web page authors in the categorization: in representing a web page, tag weights are added to the index term frequency weights of the vector space model to obtain better performance. Expected mutual information and mutual information measures are used to select the features representing the subject categories, and the nearest neighbor method is used for inter-document similarity. Experiments on web pages classified for the '정보탐정' search service at Chonbuk National University showed an accuracy improvement of about 7% over the baseline model. A sketch of tag-weighted term frequencies follows below.

  • PDF
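
A minimal sketch of tag-weighted term frequencies as described in the abstract above: term counts are boosted when the term appears inside emphasizing HTML tags, before being fed to a vector space model. The tag weights and the sample page are invented for illustration.

```python
# Sketch of tag-weighted term counting for web page categorization.
# Tag weights are illustrative, not the paper's actual values.
from collections import Counter
from html.parser import HTMLParser

TAG_WEIGHTS = {"title": 3.0, "h1": 2.0, "b": 1.5}  # default body weight 1.0

class TagWeightedCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []
        self.weights = Counter()

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        # Weight terms by the most emphasizing enclosing tag.
        weight = max([TAG_WEIGHTS.get(t, 1.0) for t in self.stack] or [1.0])
        for term in data.lower().split():
            self.weights[term] += weight

page = ("<html><title>machine learning news</title><body><h1>learning</h1>"
        "plain text about learning models</body></html>")
parser = TagWeightedCounter()
parser.feed(page)
print(parser.weights)
```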

Linguistic Features Discrimination for Social Issue Risk Classification (사회적 이슈 리스크 유형 분류를 위한 어휘 자질 선별)

  • Oh, Hyo-Jung;Yun, Bo-Hyun;Kim, Chan-Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.541-548
    • /
    • 2016
  • Social media has become an essential source of information for listening to users' diverse opinions and for monitoring. We define social 'risks' as issues that exert a negative influence on public opinion in social media. This paper aims to discriminate among various linguistic features and reveal their effects for building an automatic classification model of social risks. In particular, we adopt a word embedding technique to represent linguistic clues in risk sentences. As a preliminary experiment for analyzing the characteristics of individual features, we corrected errors in the automatic linguistic analysis. The results show that the most important feature is named entity (NE) information, and that the best condition is obtained by combining basic linguistic features, word embeddings, and word clusters within core predicates. Experimental results under realistic social big data conditions, including linguistic analysis errors, show precisions of 92.08% and 85.84% for the frequent risk category set and the full test set, respectively. A hedged sketch of this kind of feature combination follows below.
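
A hedged sketch of the kind of feature combination the abstract describes: averaged word embeddings concatenated with a named-entity indicator, fed to a linear classifier. The tiny embedding table, NE lexicon, sentences, and labels are all invented placeholders, not the paper's resources or risk categories.

```python
# Sketch of combining word-embedding features with a named-entity indicator
# for risk classification; all data below are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMBEDDINGS = {            # stand-in for pretrained word vectors
    "recall":  np.array([0.9, 0.1]), "defect": np.array([0.8, 0.2]),
    "launch":  np.array([0.1, 0.9]), "event":  np.array([0.2, 0.8]),
}
NE_LEXICON = {"acme", "globex"}   # stand-in for a named-entity tagger

def encode(sentence: str) -> np.ndarray:
    tokens = sentence.lower().split()
    vecs = [EMBEDDINGS.get(t, np.zeros(2)) for t in tokens]
    has_ne = float(any(t in NE_LEXICON for t in tokens))
    return np.concatenate([np.mean(vecs, axis=0), [has_ne]])

sentences = ["Acme announces product recall", "Globex reports brake defect",
             "Acme hosts launch event", "Globex sponsors community event"]
labels = ["risk", "risk", "no_risk", "no_risk"]

X = np.vstack([encode(s) for s in sentences])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([encode("Acme admits safety defect")]))
```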