• Title/Summary/Keyword: Automatic Rule Construction (규칙 자동 구축)


Derivational Morphology in a Tagged Corpus (형태소 분석 말뭉치의 파생명사 처리)

  • Cha, Joon-Kyung; Kang, Beom-Mo
    • Annual Conference on Human and Language Technology / 2000.10d / pp.390-394 / 2000
  • This paper discusses the problems of derived-noun processing that arose while building a morphologically tagged corpus and explores ways to resolve them. Several points must be considered when deciding a range of analysis for derived nouns that is meaningful from the viewpoints of Korean linguistics and computational linguistics. Prefixes are hard to process automatically by rules because their bases are irregular; the scope of morphological analysis can be limited to predicative prefixes that are highly productive and change the base and its category. For suffixes, the targets of analysis are highly productive, regular inflectional suffixes, together with Sino-Korean suffixes that have predicative properties. In the analysis of derived nouns the status of affixes is unstable, so prefixes are hard to distinguish from determiners and suffixes from bound nouns. Therefore, efficiently building a large-scale morphologically tagged corpus requires a multifaceted examination of affixes. (See the illustrative sketch below.)

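The abstract above argues that only productive, regular suffixes should be segmented when annotating derived nouns. As a purely illustrative sketch (not the paper's annotation scheme), the snippet below restricts derived-noun segmentation to a small suffix whitelist; the suffix list and the Sejong-style tags NNG/XSN are assumptions for demonstration only.

```python
# Toy illustration: segment a derived noun only when it ends in a
# whitelisted productive suffix; leave everything else unanalyzed.
PRODUCTIVE_SUFFIXES = ["적", "화", "성", "자"]   # e.g. 세계화 -> 세계 + 화

def segment_derived_noun(noun: str):
    for suffix in PRODUCTIVE_SUFFIXES:
        if noun.endswith(suffix) and len(noun) > len(suffix):
            return [(noun[: -len(suffix)], "NNG"), (suffix, "XSN")]
    return [(noun, "NNG")]                        # unlisted cases stay whole

print(segment_derived_noun("세계화"))   # [('세계', 'NNG'), ('화', 'XSN')]
print(segment_derived_noun("하늘"))     # [('하늘', 'NNG')]
```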

Automatic Control for Ship Automatic Collision Avoidance Support (선박자동충돌회피지원을 위한 자동제어)

  • 임남균
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2003.05a / pp.81-86 / 2003
  • Studies on automatic ship collision avoidance systems, carried out over the last 10 years, face a new situation due to newly developed technologies such as computers and other information systems. Only 3-4 years ago it was almost impossible to use such systems in real navigation because there was no tool for obtaining other ships' information; recently developed technology, however, suggests new possibilities. This study develops an algorithm for an automatic ship collision avoidance support system. The Nomoto mathematical ship model is adopted in the simulation for its simplicity. Fuzzy reasoning rules are used for the course-keeping system and for calculating the collision risk from TCPA/DCPA. Moreover, the 'encounter type' between two ships is analyzed based on the Regulations for Preventing Collisions at Sea, and a collision avoidance action is suggested. Several situations are simulated to verify the developed algorithm, and appropriate avoidance actions are shown in the simulation. (See the illustrative sketch below.)

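The collision-risk inputs named in the abstract are TCPA and DCPA. The sketch below shows the standard closest-point-of-approach computation from relative position and velocity; it is not the authors' code, and the fuzzy reasoning rules built on top of these values are not reproduced. Units and the example values are illustrative assumptions.

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Return (DCPA, TCPA) for two ships given planar positions (nm)
    and velocity vectors (kn) -- the quantities fed into collision-risk rules."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]   # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]   # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                       # identical velocities: range never changes
        return math.hypot(rx, ry), 0.0
    tcpa = -(rx * vx + ry * vy) / v2    # time (h) at which the range is minimal
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return dcpa, tcpa

# e.g. own ship heading north at 12 kn, target 3 nm ahead heading south at 10 kn
print(cpa((0, 0), (0, 12), (0.2, 3), (0, -10)))
```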

The Automatic Extraction of Hypernyms and the Development of WordNet Prototype for Korean Nouns using Korean MRD (Machine Readable Dictionary) (국어사전을 이용한 한국어 명사에 대한 상위어 자동 추출 및 WordNet의 프로토타입 개발)

  • Kim, Min-Soo; Kim, Tae-Yeon; Noh, Bong-Nam
    • The Transactions of the Korea Information Processing Society / v.2 no.6 / pp.847-856 / 1995
  • When a human recognizes nouns in a sentence, s/he associates them with the hyper concepts of those nouns. For a computer to simulate this word recognition, it should build a knowledge base (WordNet) for the hyper concepts of words. Until now, such WordNet construction has not been carried out in Korea because it requires a great deal of human effort and time. However, as the power of computers has radically improved and machine-readable dictionaries (MRDs) have become commonly available, automatic construction of a WordNet has become more feasible. This paper proposes a method that automatically builds a WordNet of Korean nouns by using the descriptions of nouns in a Korean MRD, and it proposes rules for extracting hyper concepts (hypernyms) by analyzing the structural characteristics of Korean. The rules exploit characteristics such as the headword lying in the rear part of a sentence and the special structure of the descriptive sentences of nouns. In addition, a WordNet prototype of Korean nouns is developed by combining the hypernyms produced by these rules. It extracts the hypernyms of about 2,500 sample words, and the results show that about 92 percent of the hypernyms are correct. (See the illustrative sketch below.)

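The paper's rules exploit the head-final structure of Korean dictionary definitions. Below is a toy sketch of that idea only: take the final nominal of a definition as the hypernym. The sample mini-dictionary and the trailing-copula pattern are assumptions; the actual rules in the paper rely on full morphological analysis of the definition sentences.

```python
import re

# Korean definitions are head-final, so the hypernym of a noun is usually the
# final nominal of its definition, e.g. "참새: 참샛과의 작은 새." -> hypernym "새".
COPULA = re.compile(r"(이다|임)?\.?$")

def extract_hypernym(definition: str) -> str:
    tokens = definition.strip().split()
    if not tokens:
        return ""
    return COPULA.sub("", tokens[-1])     # strip a trailing copula / period

mrd = {"참새": "참샛과의 작은 새.", "소나무": "소나뭇과의 상록 침엽 교목."}
wordnet = {noun: extract_hypernym(d) for noun, d in mrd.items()}
print(wordnet)   # {'참새': '새', '소나무': '교목'}
```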

Network Analysis between Uncertainty Words based on Word2Vec and WordNet (Word2Vec과 WordNet 기반 불확실성 단어 간의 네트워크 분석에 관한 연구)

  • Heo, Go Eun
    • Journal of the Korean Society for Library and Information Science / v.53 no.3 / pp.247-271 / 2019
  • Uncertainty in scientific knowledge refers to a state in which propositions are neither true nor false at present. Existing studies have analyzed the propositions written in the academic literature and have evaluated rule-based and machine-learning-based approaches on corpora. Although they recognized the importance of constructing word lists, attempts to expand those lists by analyzing the meaning of uncertainty words have been insufficient. On the other hand, analyses of network structure using bibliometrics and text-mining techniques are widely used to understand the intellectual structure and relationships in various disciplines. Therefore, in this study, semantic relations were analyzed by applying Word2Vec to existing uncertainty words. In addition, WordNet, an English lexical database and thesaurus, was applied to perform a network analysis based on the hypernym, hyponym, and synonym relations linked to uncertainty words. The semantic and lexical relationships of uncertainty words were thus identified structurally. As a result, we identified the possibility of automatically expanding the set of uncertainty words. (See the illustrative sketch below.)

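A minimal sketch of the two expansion steps described above: distributional neighbours from a Word2Vec model and hypernym/synonym links from WordNet, combined into one network. It assumes gensim, nltk (with the wordnet corpus downloaded), and networkx; the seed cue words and the toy corpus are illustrative, not the study's data.

```python
from gensim.models import Word2Vec
from nltk.corpus import wordnet as wn
import networkx as nx

seeds = ["possible", "unclear", "suggest"]          # example uncertainty cues
corpus = [["the", "result", "is", "unclear"],
          ["this", "may", "suggest", "a", "possible", "cause"]]

w2v = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

G = nx.Graph()
for w in seeds:
    for neigh, sim in w2v.wv.most_similar(w, topn=3):   # distributional relation
        G.add_edge(w, neigh, kind="word2vec", weight=sim)
    for syn in wn.synsets(w):                            # lexical relations
        for lemma in syn.lemma_names():
            G.add_edge(w, lemma, kind="synonym")
        for hyper in syn.hypernyms():
            G.add_edge(w, hyper.lemma_names()[0], kind="hypernym")

print(G.number_of_nodes(), G.number_of_edges())
```
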
Korean Abbreviation Generation using Sequence to Sequence Learning (Sequence-to-sequence 학습을 이용한 한국어 약어 생성)

  • Choi, Su Jeong; Park, Seong-Bae; Kim, Kweon-Yang
    • KIISE Transactions on Computing Practices / v.23 no.3 / pp.183-187 / 2017
  • Smartphone users prefer fast reading and texting, so they frequently use abbreviated sequences of words and phrases. Nowadays abbreviations are widely used, from chat terms to technical terms, so gathering them would be helpful to many services, including information retrieval and recommendation systems. However, gathering abbreviations manually requires much effort and cost, because new abbreviations are continuously generated whenever new material such as a TV program or a phenomenon appears. Thus it is necessary to generate abbreviations automatically. Existing methods for generating Korean abbreviations use a rule-based approach, which has two limitations: it cannot generate irregular abbreviations, and it must decide the correct abbreviation among the candidates generated by the rules. To address these limitations, this paper proposes a method of generating Korean abbreviations automatically using sequence-to-sequence learning. Sequence-to-sequence learning can generate irregular abbreviations and avoids the problem of choosing among candidate abbreviations, so it is suitable for generating Korean abbreviations. To evaluate the proposed method, we use two types of dataset. The experimental results show that our method is effective for irregular abbreviations. (See the illustrative sketch below.)

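As a rough illustration of the sequence-to-sequence setup, the sketch below defines a character-level encoder-decoder in PyTorch of the kind that can map a phrase to its abbreviation (e.g. 아이스 아메리카노 → 아아). The architecture, hyperparameters, and random toy batch are assumptions; the authors' actual model, vocabulary handling, and training procedure are not specified in the abstract.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        # encode the full phrase, then decode the abbreviation characters
        _, h = self.encoder(self.embed(src))
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)           # (batch, tgt_len, vocab_size)

model = Seq2Seq(vocab_size=1000)
src = torch.randint(0, 1000, (2, 8))       # two toy source character sequences
tgt = torch.randint(0, 1000, (2, 3))       # two toy target (abbreviation) sequences
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tgt.reshape(-1))
print(loss.item())
```
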
Automatic Generation of Bibliographic Metadata with Reference Information for Academic Journals (학술논문 내에서 참고문헌 정보가 포함된 서지 메타데이터 자동 생성 연구)

  • Jeong, Seonki; Shin, Hyeonho; Ji, Seon-Yeong; Choi, Sungphil
    • Journal of the Korean Society for Library and Information Science / v.56 no.3 / pp.241-264 / 2022
  • Bibliographic metadata can help researchers effectively utilize the essential publications they need and grasp academic trends in their own fields. Manual creation of the metadata is costly and time-consuming, yet it is also nontrivial to automate metadata construction with rule-based methods because article forms and styles vary greatly across publishers and academic societies. Therefore, this study proposes a two-step extraction process based on rules and deep neural networks for generating the bibliographic metadata of scientific articles. The target areas in an article are first identified by a deep neural network-based model, and the details in those areas are then analyzed and subdivided into the relevant metadata elements. The proposed model also includes a component for generating reference summary information, which separates the end of the body text from the starting point of the references, extracts individual references with an essential rule set, and identifies all the bibliographic items in each reference with a deep neural network. In addition, to confirm the possibility of a model that generates the bibliographic information of academic papers without pre- and post-processing, we conducted in-depth comparative experiments with various settings and configurations. The experiments show that the proposed method achieves higher performance. (See the illustrative sketch below.)

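One rule-based step mentioned above is separating the end of the body text from the references and splitting individual entries. The sketch below illustrates only that step, with assumed heading and numbering patterns; the paper combines such rules with deep models for area detection and bibliographic item identification.

```python
import re

REF_HEADING = re.compile(r"^\s*(References|참고문헌)\s*$", re.IGNORECASE)
REF_ITEM = re.compile(r"^\s*(\[\d+\]|\d+\.)\s+")

def split_references(lines):
    refs, current, in_refs = [], [], False
    for line in lines:
        if not in_refs:
            in_refs = bool(REF_HEADING.match(line))    # end of body text
            continue
        if REF_ITEM.match(line) and current:           # a new entry begins
            refs.append(" ".join(current))
            current = []
        if line.strip():
            current.append(line.strip())
    if current:
        refs.append(" ".join(current))
    return refs

sample = ["... conclusion of the article.", "References",
          "[1] Kim, A. A study on X. Journal Y, 2020.",
          "[2] Lee, B. Another study.", "    Journal Z, 2021."]
print(split_references(sample))
```
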
Clustering and Pattern Analysis for Building Semantic Ontologies in RESTful Web Services (RESTful 웹 서비스에서 시맨틱 온톨로지를 구축하기 위한 클러스터링 및 패턴 분석 기법)

  • Lee, Yong-Ju
    • Journal of Internet Computing and Services / v.12 no.4 / pp.119-133 / 2011
  • With the advent of Web 2.0, the use of RESTful web services is expected to overtake that of traditional SOAP-based web services. The growing number of RESTful web services available on the web raises the challenging issue of how to locate desired services, and the existing keyword-searching method suffers from poor recall and poor precision. In this paper, we propose a novel method for building a semantic ontology that employs both a clustering technique based on association rules and a semantic analysis technique based on patterns. With this method, we can generate ontologies automatically, reduce the burden of semantic annotation, and support more efficient web service search. We ran our experiments on a subset of 168 RESTful web services downloaded from the ProgrammableWeb site. The experimental results show that our method improves recall by up to 35% and precision by up to 18% compared to the existing keyword-searching method. (See the illustrative sketch below.)

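To make the clustering-by-association-rules idea concrete, the sketch below treats each service's descriptive terms as a transaction, mines frequent term pairs, and groups services that share a frequent pair. The sample services and the support threshold are invented for illustration and are not the paper's data or exact algorithm.

```python
from itertools import combinations
from collections import Counter, defaultdict

services = {
    "svc_weather":  {"weather", "forecast", "city"},
    "svc_climate":  {"weather", "climate", "city"},
    "svc_geocode":  {"city", "address", "coordinates"},
    "svc_routing":  {"address", "coordinates", "route"},
}
MIN_SUPPORT = 2   # a term pair must occur in at least two services

pair_counts = Counter(p for terms in services.values()
                      for p in combinations(sorted(terms), 2))
frequent = {p for p, c in pair_counts.items() if c >= MIN_SUPPORT}

clusters = defaultdict(set)               # group services sharing a frequent pair
for name, terms in services.items():
    for p in frequent:
        if set(p) <= terms:
            clusters[p].add(name)

for pair, members in clusters.items():
    print(pair, "->", sorted(members))
```
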
Building an Automated Scoring System for Single English Sentences (단문형의 영작문 자동 채점 시스템 구축)

  • Kim, Jee-Eun; Lee, Kong-Joo; Jin, Kyung-Ae
    • The KIPS Transactions: Part B / v.14B no.3 s.113 / pp.223-230 / 2007
  • The purpose of developing an automated scoring system for English composition is to score tests of written English sentences and to give feedback on them without human effort. This paper presents an automated system for scoring English composition whose input is a single sentence rather than an essay. Handling a single sentence as input has advantages for comparing the input with the answers given by human teachers and for giving detailed feedback to test takers. The system was developed and tested with real test data collected from English tests given to third-grade junior high school students. Scoring a single sentence requires two steps. The first is analyzing the input sentence to detect possible errors, such as spelling and syntactic errors. The second is comparing the input sentence with the given answer and identifying the differences as errors. The results produced by the system were then compared with those provided by human raters. (See the illustrative sketch below.)

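The second step described above, comparing the input sentence with a reference answer and treating the differences as errors, can be illustrated with a simple word-level alignment. The sketch below uses difflib for that alignment; the real system's error detection and feedback are far richer than this illustration.

```python
import difflib

def diff_errors(student: str, answer: str):
    """Align the student's sentence with the answer and report word-level
    differences as candidate errors."""
    s, a = student.split(), answer.split()
    errors = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, s).get_opcodes():
        if tag != "equal":
            errors.append((tag, " ".join(a[i1:i2]), " ".join(s[j1:j2])))
    return errors

print(diff_errors("She have finished her homework yesterday",
                  "She had finished her homework yesterday"))
# [('replace', 'had', 'have')]
```
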
Three-Phase English Syntactic Analysis for Improving the Parsing Efficiency (영어 구문 분석의 효율 개선을 위한 3단계 구문 분석)

  • Kim, Sung-Dong
    • KIPS Transactions on Software and Data Engineering / v.5 no.1 / pp.21-28 / 2016
  • The performance of an English-Korean machine translation system depends heavily on its English parser. The parser in this paper is part of a rule-based English-Korean MT system; it includes many syntactic rules and performs chart-based parsing. Because of the many syntactic rules, the parser generates too many structures and so requires much time and memory. A rule-based parser also has difficulty analyzing and translating long sentences containing commas, which cause high parsing complexity. In this paper, we propose a three-phase parsing method with sentence segmentation to efficiently translate the long sentences that commonly appear. Each phase of the syntactic analysis applies its own independent syntactic rules in order to reduce parsing complexity. To this end, we classify the syntactic rules into three classes and design the three-phase parsing algorithm; the rules in the third class handle sentence structures built around commas. We also present a method for automatically acquiring the third-class rules from the syntactic analysis of a corpus, with which we aim to continuously improve the coverage of the parser. The experimental results show that the proposed three-phase parsing method is superior to the previous method, which used only intra-sentence segmentation, in parsing speed and memory efficiency while maintaining translation quality. (See the illustrative sketch below.)

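A schematic sketch of the intra-sentence segmentation idea: split a long sentence at commas, parse each segment with the ordinary rules (phases 1-2), and leave the combination of comma-linked segments to a final phase. parse_segment here is a placeholder, not the paper's chart parser, and the example sentence is invented.

```python
def parse_segment(tokens):
    # stand-in for chart parsing with the phase 1-2 rule classes
    return ("SEG", tokens)

def three_phase_parse(sentence: str):
    segments = [seg.strip() for seg in sentence.split(",") if seg.strip()]
    parsed = [parse_segment(seg.split()) for seg in segments]   # phases 1-2
    return ("S", parsed)          # phase 3: combine the comma-linked segments

print(three_phase_parse(
    "When the rain stopped, the team resumed work, and the schedule was kept"))
```
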
Mapping Heterogenous Hierarchical Concept Classifications for the HLP Applications -A case of Sejong Semantic Classes and KorLexNoun 1.5- (인간언어공학에의 활용을 위한 이종 개념체계 간 사상 -세종의미부류와 KorLexNoun 1.5-)

  • Bae, Sun-Mee; Im, Kyoungup; Yoon, Aesun
    • Annual Conference on Human and Language Technology / 2009.10a / pp.6-13 / 2009
  • This study proposes a fine-grained mapping method, carried out manually by experts, for mapping the semantic classes of the Sejong Electronic Dictionary to the upper nodes of KorLexNoun 1.5, for use in human language technology. It also examines the various problems that arise in mapping two heterogeneous resources, owing to the heterogeneity of their semantic systems, and proposes solutions. By comparing the semantic structure of Korean that the Sejong semantic class system was intended to capture with the English semantic structure that, via Princeton WordNet, still influences KorLexNoun, the study identifies their commonalities and differences and can thereby contribute to building a language-independent concept system. In addition, when the sentence-pattern information of KorLex predicates and the case-frame information of Sejong Electronic Dictionary predicates are later integrated and used in syntactic analysis, the integrated Sejong semantic classes and KorLexNoun upper nodes can be used to describe generalized selectional restriction rules for arguments. The mapping methodology proposed here should also contribute substantially to future research on automatic mapping between heterogeneous resources. Furthermore, by mapping the two resources, the strengths of the two semantic systems can be maximized and their weaknesses mutually compensated, yielding a more complete language resource for syntactic and semantic analysis.
