• Title/Summary/Keyword: Association Word Knowledge Base

Search Results: 8

Weighted Bayesian Automatic Document Categorization Based on Association Word Knowledge Base by Apriori Algorithm (Apriori알고리즘에 의한 연관 단어 지식 베이스에 기반한 가중치가 부여된 베이지안 자동 문서 분류)

  • 고수정;이정현
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.2
    • /
    • pp.171-181
    • /
    • 2001
  • Previous Bayesian document categorization methods require a lot of time and effort for word clustering and hardly reflect the semantic information between words. In this paper, we propose a weighted Bayesian document categorization method based on an association word knowledge base acquired by mining techniques. The proposed method constructs a weighted association word knowledge base from the documents in the training set; a classifier using Bayesian probability then categorizes documents based on the constructed knowledge base. To evaluate the performance of the proposed method, we compare our experimental results with those of a weighted Bayesian method using a vocabulary dictionary built by mutual information, a plain weighted Bayesian method, and a simple Bayesian method. The experimental results show that the weighted Bayesian method using the association word knowledge base improves performance by 0.87%, 2.77%, and 5.09% over these three baselines, respectively.

  • PDF
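The pipeline the abstract describes, Apriori association-word mining followed by weighted Bayesian classification, can be sketched roughly as below. The function names, the `min_support` threshold, and the single fixed weight for association words are illustrative assumptions of this sketch, not the paper's actual parameters:

```python
from collections import Counter
from itertools import combinations
import math

def mine_association_words(docs, min_support=0.5):
    """First Apriori pass: keep word pairs that co-occur in at least
    min_support of the documents."""
    pair_counts = Counter()
    for doc in docs:
        for pair in combinations(sorted(set(doc)), 2):
            pair_counts[pair] += 1
    return {p for p, c in pair_counts.items() if c / len(docs) >= min_support}

def train(categorized_docs, min_support=0.5):
    """Per-category word counts plus a global association-word set."""
    counts, totals, assoc = {}, {}, set()
    for cat, docs in categorized_docs.items():
        counts[cat] = Counter(w for d in docs for w in d)
        totals[cat] = sum(counts[cat].values())
        for pair in mine_association_words(docs, min_support):
            assoc.update(pair)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, assoc, vocab

def classify(model, doc, assoc_weight=2.0):
    """Naive Bayes with Laplace smoothing; association words are
    weighted more heavily (the same weight in every category, so the
    weighting emphasizes them without biasing toward any one class)."""
    counts, totals, assoc, vocab = model
    def score(cat):
        s = 0.0
        for w in doc:
            p = (counts[cat][w] + 1) / (totals[cat] + len(vocab))
            s += (assoc_weight if w in assoc else 1.0) * math.log(p)
        return s
    return max(counts, key=score)
```

Keeping the weight identical across categories for a given word is the design choice that makes weighted log-probabilities safe to compare between classes.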

A Word Sense Disambiguation Method with a Semantic Network (의미네트워크를 이용한 단어의미의 모호성 해결방법)

  • Dingyul Ra
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.2
    • /
    • pp.225-248
    • /
    • 1992
  • In this paper, word sense disambiguation methods utilizing a knowledge base based on a semantic network are introduced. The basic idea is to keep track of a set of paths in the knowledge base which correspond to the incremental semantic interpretation of an input sentence. These paths are called the semantic paths. When the parser reads a word, the senses of this word which are not involved in any of the semantic paths are removed. The removal operation is then propagated through the knowledge base to invoke the removal of senses of other words that have been read before. This removal operation is invoked recursively as long as senses can be removed; this is called recursive word sense removal. Concretion of a vague word's concept is another important word sense disambiguation method. We introduce a method called path adjustment that extends the concretion operation. How to use semantic association or syntactic processing in cooperation with the above methods is also considered.
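The recursive word sense removal described above can be sketched as a constraint-propagation loop. For simplicity this sketch tests direct semantic links rather than full paths through the network, and all identifiers are hypothetical:

```python
def disambiguate(word_senses, related):
    """word_senses: {word: set of candidate sense ids}
    related: set of frozensets {s1, s2} marking semantically linked senses.
    Recursive removal: drop any sense that cannot be linked to at least
    one surviving sense of every other word, and repeat until stable,
    since each removal may invalidate senses kept earlier."""
    senses = {w: set(s) for w, s in word_senses.items()}
    changed = True
    while changed:
        changed = False
        for w, cands in senses.items():
            for s in list(cands):
                for other, other_cands in senses.items():
                    if other == w:
                        continue
                    if not any(frozenset((s, t)) in related for t in other_cands):
                        cands.discard(s)   # no semantic path supports s
                        changed = True     # propagate: re-check everything
                        break
    return senses
```

The outer `while` loop is what makes the removal recursive: discarding one sense can strip the only support from a sense of a word read earlier.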

Cross-Lingual Text Retrieval Based on a Knowledge Base (지식베이스에 기반한 다언어 문서 검색)

  • Choi, Myeong-Bok;Jo, Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.1
    • /
    • pp.21-32
    • /
    • 2010
  • The formulation of the user query strongly affects the effectiveness of information retrieval when documents are retrieved from a general domain such as the web. This paper proposes an intelligent information retrieval method based on a cross-lingual knowledge base to perform cross-lingual text retrieval on the web effectively. Knowledge inferred from the cross-lingual knowledge base supports the user's word associations, making it easier to formulate queries precisely for effective cross-lingual retrieval. This paper develops a query reformulation algorithm and evaluates it on Korean and English web documents. Experimental results show that the algorithm based on the proposed knowledge base is considerably more effective in cross-lingual text retrieval than retrieval without the knowledge base.
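A minimal sketch of the query reformulation idea, assuming the knowledge base maps a term to its cross-lingual association words (the data structure and function name are hypothetical, not the paper's algorithm):

```python
def reformulate_query(query_terms, xlingual_kb):
    """xlingual_kb: {term: set of cross-lingual association words}
    (hypothetical structure). Each query term is expanded with its
    associations so one query can match documents in both languages;
    terms without an entry pass through unchanged."""
    expanded = []
    for term in query_terms:
        expanded.append(term)
        expanded.extend(sorted(xlingual_kb.get(term, ())))
    return expanded
```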

A Text Mining-based Intrusion Log Recommendation in Digital Forensics (디지털 포렌식에서 텍스트 마이닝 기반 침입 흔적 로그 추천)

  • Ko, Sujeong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.6
    • /
    • pp.279-290
    • /
    • 2013
  • In digital forensics, log files are stored as large data sets for the purpose of tracing users' past behaviors, and it is difficult for investigators to analyze such large log data manually without clues. In this paper, we propose a text mining technique for extracting intrusion logs from a large log set to recommend reliable evidence to investigators. In the training stage, the proposed method extracts intrusion association words from a training log set using the Apriori algorithm after preprocessing, and the probability of intrusion for each association word is computed by combining support and confidence. Robinson's method of computing confidences for filtering spam mail is applied to extracting intrusion logs. As a result, the association word knowledge base is constructed to include the intrusion-probability weights of the association words, which improves accuracy. In the test stage, the probability that a log in the test set is an intrusion log and the probability that it is a normal log are each computed by Fisher's inverse chi-square classification algorithm based on the association word knowledge base, and intrusion logs are extracted by combining the two results. The intrusion logs are then recommended to investigators. The proposed method trains on a clear analysis of the meaning of data drawn from unstructured large log data, which mitigates the loss of accuracy caused by data ambiguity. In addition, because intrusion logs are recommended using Fisher's inverse chi-square classification algorithm, the method reduces the false positive (FP) rate and the laborious effort of extracting evidence manually.
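The test-stage combination step can be sketched with Robinson's chi-square formulation (originally used for spam filtering): per-word intrusion probabilities are combined with Fisher's method via the inverse chi-square function, and an indicator near 1 flags an intrusion log. The clamping constant and the implied 0.5 decision boundary are assumptions of this sketch:

```python
import math

def inv_chi_square(chi, df):
    """P(X > chi) for a chi-square variable with an even number df of
    degrees of freedom (Robinson's closed-form series)."""
    m = chi / 2.0
    term = prob = math.exp(-m)
    for i in range(1, df // 2):
        term *= m / i
        prob += term
    return min(prob, 1.0)

def fisher_combine(probs):
    """Fisher's method: -2 * sum(ln p) follows a chi-square
    distribution with 2n degrees of freedom under the null."""
    chi = -2.0 * sum(math.log(max(p, 1e-12)) for p in probs)
    return inv_chi_square(chi, 2 * len(probs))

def intrusion_score(word_probs):
    """Robinson-style indicator in [0, 1]: near 1 -> intrusion log,
    near 0 -> normal log, near 0.5 -> inconclusive."""
    s = 1.0 - fisher_combine([1.0 - p for p in word_probs])  # intrusion evidence
    h = 1.0 - fisher_combine(word_probs)                     # normal evidence
    return (1.0 + s - h) / 2.0
```

Computing the two one-sided combinations separately and averaging them is what lets the score express "inconclusive" as a value near 0.5 instead of forcing a binary call.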

Development of Implicit Memory in Children with Category-Exemplar-Generation Task (아동의 암묵적 기억의 발달 : 개념적 범주생성 과제를 중심으로)

  • Jang, Se Hee;Choi, Kyoung-Sook
    • Korean Journal of Child Studies
    • /
    • v.25 no.6
    • /
    • pp.105-115
    • /
    • 2004
  • The 60 subjects of this study were 3rd- and 6th-grade elementary school students and undergraduate university students. The instrument of 44 items had two typical and two atypical exemplars from each of 11 semantic categories. Each subject was individually exposed to the word list and asked to categorize each item. At test, subjects generated five items that came to mind in each category. Data were analyzed by two-way ANOVA, age (3) × category typicality (2). All main effects and the interaction effect between age and typicality were significant. There were no significant differences among age groups on typical lists, while significant differences between university and elementary school students (Grades 3 and 6) were found on atypical lists. Thus, the knowledge base might be an important factor in implicit memory.

  • PDF

A Global-Interdependence Pairwise Approach to Entity Linking Using RDF Knowledge Graph (개체 링킹을 위한 RDF 지식그래프 기반의 포괄적 상호의존성 짝 연결 접근법)

  • Shim, Yongsun;Yang, Sungkwon;Kim, Hong-Gee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.3
    • /
    • pp.129-136
    • /
    • 2019
  • Natural language contains a variety of entities such as people, organizations, places, and products, and these entities can have many different meanings. Entity ambiguity is a very challenging problem in natural language processing. Entity Linking (EL) is the task of linking an entity mention in text to the appropriate entity in a knowledge base. The pairwise approach, a representative method for EL, resolves entities by using the association between two entities in a sentence. Because it considers only the interdependence between entities appearing in the same sentence, it cannot capture global interdependence. In this paper, we developed an Entity2vec model that applies Word2vec to an RDF-type knowledge base in order to solve the EL task, applied ranking algorithms using the generated model, and ranked each candidate entity. To overcome the limitations of the pairwise approach, we devised a pairwise approach based on comprehensive interdependence and compared it with the conventional approach.
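Given entity embeddings (e.g. produced by an Entity2vec-style model trained on RDF triples), the global-interdependence idea can be sketched as scoring each candidate by its similarity to the best candidates of every other mention in the whole document, not just the same sentence. The data structures here are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def link_entities(mentions, vectors):
    """mentions: {mention text: [candidate entity ids]}
    vectors: {entity id: embedding}. Each candidate is scored against
    the best candidate of every *other* mention across the document,
    so the interdependence signal is global rather than sentence-local."""
    def score(cand, mention):
        total = 0.0
        for other, other_cands in mentions.items():
            if other != mention:
                total += max(cosine(vectors[cand], vectors[o])
                             for o in other_cands)
        return total
    return {m: max(cands, key=lambda c: score(c, m))
            for m, cands in mentions.items()}
```

In the toy case below, the company sense of "Apple" wins because its embedding sits closer to "Microsoft" than the fruit sense does.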

Development of automated scoring system for English writing (영작문 자동 채점 시스템 개발 연구)

  • Jin, Kyung-Ae
    • English Language & Literature Teaching
    • /
    • v.13 no.1
    • /
    • pp.235-259
    • /
    • 2007
  • The purpose of the present study is to develop a prototype automated scoring system for English writing by Korean middle school students. The following procedures were applied. First, established automated essay scoring systems in other countries were reviewed and analyzed, providing guidance for the development of a new sentence-level automated scoring system for Korean EFL students. Second, a knowledge base for natural language processing, including a lexicon, a grammar, and WordNet, was established, together with an error corpus of English writing by Korean middle school students; the error corpus was built from a paper-and-pencil test with 589 third-year middle school students. This study provides suggestions for the successful introduction of automated scoring in Korea. The system developed here should be continuously upgraded to improve scoring accuracy, and it should be extended to evaluate full English essays rather than individual sentences only. Although the system needs further upgrades for better precision, it represents a successful introduction of a sentence-level automated scoring system for English writing in Korea.

  • PDF

A Bibliometric Approach for Department-Level Disciplinary Analysis and Science Mapping of Research Output Using Multiple Classification Schemes

  • Gautam, Pitambar
    • Journal of Contemporary Eastern Asia
    • /
    • v.18 no.1
    • /
    • pp.7-29
    • /
    • 2019
  • This study describes an approach for comparative bibliometric analysis of scientific publications related to (i) individual or several departments comprising a university, and (ii) broader integrated subject areas using multiple disciplinary schemes. It uses a custom dataset of scientific publications (ca. 15,000 articles and reviews, published during 2009-2013, and recorded in the Web of Science Core Collections) with author affiliations to the research departments, dedicated to science, technology, engineering, mathematics, and medicine (STEMM), of a comprehensive university. The dataset was subjected, at first, to the department level and discipline level analyses using the newly available KAKEN-L3 classification (based on MEXT/JSPS Grants-in-Aid system), hierarchical clustering, correspondence analysis to decipher the major departmental and disciplinary clusters, and visualization of the department-discipline relationships using two-dimensional stacked bar diagrams. The next step involved the creation of subsets covering integrated subject areas and a comparative analysis of departmental contributions to a specific area (medical, health and life science) using several disciplinary schemes: Essential Science Indicators (ESI) 22 research fields, SCOPUS 27 subject areas, OECD Frascati 38 subordinate research fields, and KAKEN-L3 66 subject categories. To illustrate the effective use of the science mapping techniques, the same subset for medical, health and life science area was subjected to network analyses for co-occurrences of keywords, bibliographic coupling of the publication sources, and co-citation of sources in the reference lists. The science mapping approach demonstrates the ways to extract information on the prolific research themes, the most frequently used journals for publishing research findings, and the knowledge base underlying the research activities covered by the publications concerned.
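The keyword co-occurrence analysis mentioned above can be sketched as simple pair counting over each publication's keyword list; the resulting edge weights are what a science-mapping tool visualizes as a network. The function name and input format are assumptions of this sketch:

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(papers):
    """papers: list of keyword lists, one per publication. Returns a
    Counter mapping sorted keyword pairs to co-occurrence counts --
    the edge weights of a keyword co-occurrence network."""
    edges = Counter()
    for keywords in papers:
        # deduplicate and sort so (a, b) and (b, a) collapse to one edge
        for pair in combinations(sorted(set(keywords)), 2):
            edges[pair] += 1
    return edges
```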