• Title/Summary/Keyword: natural language

Search Results: 1,518

Robust Part-of-Speech Tagger using Statistical and Rule-based Approach (통계와 규칙을 이용한 강인한 품사 태거)

  • Shim, Jun-Hyuk;Kim, Jun-Seok;Cha, Jong-Won;Lee, Geun-Bae
    • Annual Conference on Human and Language Technology / 1999.10d / pp.60-75 / 1999
  • Part-of-speech tagging is the most fundamental component of natural language processing: it serves as a preprocessing step for higher-level tasks such as syntactic and semantic analysis, and is also used on its own in applications such as information extraction and information retrieval. Research on POS tagging is broadly divided into statistical approaches, rule-based approaches, and hybrid approaches that combine the two. POSTAG, the POS tagging system of SKOPE, the natural language processing engine of the POSTECH natural language processing laboratory, is a hybrid POS tagging system with enhanced unknown-word estimation. The system consists of a morphological analyzer, a statistical POS tagger, and a rule-based error-correction postprocessor. These components are not simply connected in series; they are closely coupled, generating and processing a morpheme connectivity graph during analysis on the basis of a morpheme connectivity table. In addition, unknown words are handled in the same way as registered words by means of a pattern dictionary for unknown words, which makes the tagging efficient and robust. Furthermore, bidirectional mapping between the tagset used in POSTAG and the standard tagset of the Electronics and Telecommunications Research Institute (ETRI) makes it possible to obtain large-scale training data for POSTAG from corpora tagged with the standard tagset, and lets POSTAG output tagging results in either tagset. On the 30,000 eojeols (word phrases) provided at MATEC '99, the system achieved 95% morpheme-level accuracy when outputting in the standard tagset, and 97% accuracy for POSTAG's own tagging results without tagset mapping.

  • PDF
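
The abstract above describes a three-stage hybrid architecture: a morphological analyzer proposes candidate analyses, a statistical tagger chooses among them, and contextual rules correct residual errors. The following is a minimal sketch of that general pattern only; the toy lexicon, tag frequencies, and correction rules are invented for the example and are not the actual POSTAG resources.

```python
# Hypothetical hybrid (statistical + rule-based) POS tagging sketch.
# Candidate tags per token, as a morphological analyzer might produce them.
CANDIDATES = {
    "flies": ["NOUN", "VERB"],
    "time": ["NOUN", "VERB"],
    "like": ["VERB", "ADP"],
    "an": ["DET"],
    "arrow": ["NOUN"],
}

# Statistical stage: pick the candidate with the highest corpus frequency (toy values).
TAG_FREQ = {"NOUN": 0.40, "VERB": 0.30, "ADP": 0.20, "DET": 0.10}

# Rule-based postprocessing: contextual error-correction rules,
# each rule = (previous tag, current word, corrected tag).
CORRECTION_RULES = [
    ("NOUN", "like", "ADP"),  # "time flies like ..." -> "like" is a preposition here
]

def tag(tokens):
    tags = []
    for word in tokens:
        cands = CANDIDATES.get(word, ["NOUN"])  # unknown words: fall back to a default guess
        tags.append(max(cands, key=lambda t: TAG_FREQ.get(t, 0.0)))
    # apply correction rules over the statistically tagged sequence
    for i in range(1, len(tokens)):
        for prev_tag, word, new_tag in CORRECTION_RULES:
            if tags[i - 1] == prev_tag and tokens[i] == word:
                tags[i] = new_tag
    return list(zip(tokens, tags))

print(tag("time flies like an arrow".split()))
```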

An Application of RASA Technology to Design an AI Virtual Assistant: A Case of Learning Finance and Banking Terms in Vietnamese

  • PHAM, Thi My Ni;PHAM, Thi Ngoc Thao;NGUYEN, Ha Phuong Truc;LY, Bao Tuyen;NGUYEN, Truc Linh;LE, Hoanh Su
    • The Journal of Asian Finance, Economics and Business / v.9 no.5 / pp.273-283 / 2022
  • Banking and finance is a broad term that incorporates a variety of smaller, more specialized subjects such as corporate finance, tax finance, and insurance finance. A virtual assistant that helps users search for information about banking and finance terms can be an extremely beneficial tool. In this study, we explored the process of searching for information, seeking opportunities, and developing a virtual assistant for the early stages of learning and understanding these terms in Vietnamese, in order to increase effectiveness and save time; this is also an innovative business-practice use case in Vietnam. We built the FIBA2020 dataset and proposed a pipeline that uses Natural Language Processing (NLP), including Natural Language Understanding (NLU) algorithms, to build chatbot applications. The open-source framework RASA is used to implement the system in our study. We aim to improve model performance by replacing parts of RASA's default tokenizers with Vietnamese tokenizers and experimenting with various language models. The best accuracy we achieved is 86.48% in the ideal condition and 70.04% in the worst condition. Finally, we put our findings into practice by creating an Android virtual assistant application using the model trained with the Whitespace tokenizer and the pre-trained language model m-BERT.
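
At the core of the pipeline described above is NLU intent classification: user questions are tokenized and mapped to intents before a response is selected. The sketch below is a generic stand-in for that step (whitespace tokenization plus a bag-of-words classifier), not the authors' RASA/m-BERT configuration; the tiny training set is invented.

```python
# Generic intent-classification sketch; data and intent names are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "what is compound interest",
    "define compound interest",
    "how do I open a savings account",
    "steps to open an account",
]
train_intents = ["ask_definition", "ask_definition", "ask_procedure", "ask_procedure"]

clf = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),  # split on whitespace, like a Whitespace tokenizer
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_intents)

print(clf.predict(["please define interest rate"]))  # likely ['ask_definition']
```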

A Simple Syntax for Complex Semantics

  • Lee, Kiyong
    • Proceedings of the Korean Society for Language and Information Conference / 2002.02a / pp.2-27 / 2002
  • As part of a long-range project that aims at establishing database-theoretic semantics as a model of computational semantics, this presentation focuses on the development of a syntactic component for processing strings of words or sentences to construct semantic data structures. For design and modeling purposes, the present treatment will be restricted to the analysis of some problematic constructions of Korean involving semi-free word order, conjunction and temporal anchoring, and adnominal modification and antecedent binding. The present work relies heavily on Hausser's (1999, 2000) SLIM theory for language, which is based on surface compositionality, time-linearity, and two other conditions on natural language processing. Time-linear syntax for natural language has been shown to be conceptually simple and computationally efficient. The associated semantics is complex, however, because it must deal with situated language involving interactive multi-agents. Nevertheless, by processing input word strings in a time-linear mode, the syntax can incrementally construct the necessary semantic structures for relevant queries and valid inferences. The fragment of Korean syntax will be implemented in Malaga, a C-type implementation language that was enriched for both programming and debugging purposes and that was made particularly suitable for implementing Left-Associative Grammar. This presentation will show how the system of syntactic rules with constraining subrules processes Korean sentences in a step-by-step time-linear manner to incrementally construct semantic data structures that mainly specify relations with their argument, temporal, and binding structures.

  • PDF
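
The time-linear, left-associative mode of processing described in this abstract can be illustrated with a very small sketch: the parser repeatedly combines the analysis of the sentence start with the next incoming word, in a single left-to-right pass and without backtracking over completed input. The combination rule below is a toy stand-in, not the Malaga grammar the paper implements.

```python
# Toy illustration of time-linear (left-associative) processing.
def combine(start, word):
    """Combine the current sentence-start analysis with the next word."""
    # A real Left-Associative Grammar applies category-checking subrules here;
    # this toy version just records adjacency relations in the growing structure.
    relations = start["relations"]
    if start["covered"]:
        relations = relations + [(start["covered"][-1], word)]
    return {"covered": start["covered"] + [word], "relations": relations}

def parse_time_linear(words):
    state = {"covered": [], "relations": []}
    for w in words:                      # strictly left-to-right, one pass
        state = combine(state, w)
    return state

print(parse_time_linear("Kim read the book".split()))
```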

A Study on the Interdisciplinary Communication Patterns (학문분야간의 코뮤니케이션 유형)

  • Kim Yong Sung
    • Journal of the Korean Society for Library and Information Science / v.18 / pp.99-127 / 1990
  • This study attempts to verify the hypothesis that interdisciplinary communication patterns differ across disciplines. Specifically, it analyzes and compares the subject distribution, format, age, origin, and language of documents from other disciplines cited in journal articles in each discipline. To test the hypothesis, philosophy, sociology, and physics are selected as samples for the three broad fields, namely the humanities, social sciences, and natural sciences, and the documents cited in journal articles published in 1966, 1971, 1976, 1981, and 1986 by the Korean Philosophical Association, the Korean Sociological Association, and the Korean Physical Society are collected. The subject distribution, format, age, origin, language, and use rate of the cited documents from other disciplines are then investigated, analyzed, and compared across the disciplines. The findings and conclusions of the study are as follows. 1. The subject distribution of cited documents from other disciplines and its distribution ratio differ by discipline: high in the humanities, low in the natural sciences, and medium in the social sciences. 2. The format and use rate of cited documents from other disciplines differ by discipline. In all three fields, books and journals are used more than any other format in interdisciplinary communication; in the humanities and social sciences books are used more than journals, whereas in the natural sciences journals are used more than books. 3. The age and use rate of cited documents from other disciplines differ by discipline. In the social sciences and natural sciences, citations are concentrated on documents published within the last 20 years, while in the humanities no such concentration by age is observed. 4. The origin and language of cited documents from other disciplines, and their use rate, differ by discipline. In the humanities and natural sciences, citations are concentrated on documents published abroad, whereas in the social sciences domestic publications are cited more than foreign ones. Among foreign-language documents, those in English are cited most in interdisciplinary communication. Ordering the three fields by the use rate of English-language documents, the natural sciences rank high, the humanities medium, and the social sciences low. The use rate of Korean-language documents from other disciplines is high in the social sciences, while it is slight in the humanities and natural sciences.

  • PDF

Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

  • SAINT-DIZIER, Patrick
    • International Journal of Knowledge Content Development & Technology / v.5 no.2 / pp.75-101 / 2015
  • In this paper, we investigate the notion of error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity in the detection and the correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.
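
The core mechanism described here is a memory of past corrections that yields contextual recommendations when the same kind of problem (for example, a fuzzy lexical item) is detected again. The sketch below illustrates that idea in the simplest possible form; the data structures and examples are invented and are not the LELIE implementation.

```python
# Hypothetical error-correction memory sketch: store accepted corrections with
# a little context, then retrieve ranked recommendations for new occurrences.
from collections import Counter, defaultdict

# memory[fuzzy_term][document_type] -> Counter of previously accepted corrections
memory = defaultdict(lambda: defaultdict(Counter))

def record_correction(fuzzy_term, document_type, correction):
    memory[fuzzy_term][document_type][correction] += 1

def recommend(fuzzy_term, document_type, k=2):
    """Return up to k contextual correction recommendations, most frequent first."""
    return [c for c, _ in memory[fuzzy_term][document_type].most_common(k)]

# Corrections previously made by writers of maintenance procedures (invented):
record_correction("approximately", "procedure", "within 5 mm of")
record_correction("approximately", "procedure", "within 5 mm of")
record_correction("approximately", "procedure", "no more than")
record_correction("if necessary", "procedure", "if the pressure exceeds the limit")

print(recommend("approximately", "procedure"))  # -> ['within 5 mm of', 'no more than']
```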

Implementation of Korean TTS System based on Natural Language Processing (자연어 처리 기반 한국어 TTS 시스템 구현)

  • Kim Byeongchang;Lee Gary Geunbae
    • MALSORI / no.46 / pp.51-64 / 2003
  • In order to produce high quality synthesized speech, it is very important to get an accurate grapheme-to-phoneme conversion and prosody model from texts using natural language processing. Robust preprocessing for non-Korean characters is also required. In this paper, we analyzed Korean texts using a morphological analyzer, part-of-speech tagger and syntactic chunker. We present a new grapheme-to-phoneme conversion method for Korean using a hybrid method with a phonetic pattern dictionary and CCV (consonant vowel) LTS (letter-to-sound) rules, for unlimited vocabulary Korean TTS. We constructed a prosody model using a probabilistic method and a decision tree-based method. The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness problems, so we adopted tree-based error correction to overcome these training data limitations.

  • PDF
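
The hybrid grapheme-to-phoneme strategy described above consults a phonetic pattern dictionary first and falls back to letter-to-sound rules for items the dictionary does not cover. The sketch below shows only that control flow; the toy dictionary entries are invented stand-ins, and the fallback "rules" are a placeholder rather than the paper's CCV LTS rules.

```python
# Hypothetical hybrid grapheme-to-phoneme sketch: dictionary first, rules as fallback.
PHONETIC_DICT = {
    "값": "갑",        # dictionary entries capture lexicalized pronunciations
    "읽고": "일꼬",
}

def lts_rules(word):
    """Fallback letter-to-sound conversion (identity here, standing in for real rules)."""
    return word

def grapheme_to_phoneme(word):
    if word in PHONETIC_DICT:          # 1) phonetic pattern dictionary
        return PHONETIC_DICT[word]
    return lts_rules(word)             # 2) rule-based fallback for unseen items

for w in ["값", "읽고", "나무"]:
    print(w, "->", grapheme_to_phoneme(w))
```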

Design of On-Line Natural Language Parser (온라인 방식의 자연언어 해석기 설계)

  • 우요섭;최병욱
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.3 / pp.14-23 / 1994
  • A natural language processing system usually has the drawback that its processing time is relatively long. If an interactive system keeps its user waiting for a long time, it cannot be considered practical. In this paper, an on-line natural language parser whose processing proceeds concurrently with sentence input is designed. Since the greater part of morphological and syntactic/semantic analysis is already performed during keyboard input, the user gets a prompt response. Moreover, the Korean parser is implemented in a multitasking environment and is compared with an off-line parser. The on-line parser can be considered efficient for real-time processing.

  • PDF
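
The on-line idea described above is that analysis is interleaved with input, so most of the work is already done by the time the sentence is complete. The sketch below shows only that control structure; the per-token "analysis" is a trivial placeholder, whereas in the paper it is morphological and syntactic/semantic analysis running in a separate task.

```python
# Toy illustration of on-line (incremental) processing during input.
def analyze_token(state, token):
    # Placeholder incremental analysis: accumulate tokens and a running work count.
    state["tokens"].append(token)
    state["work_done"] += 1
    return state

def online_parse(input_stream):
    """Consume tokens as they arrive (e.g., from the keyboard) instead of waiting
    for the full sentence; only final wrap-up remains after the last token."""
    state = {"tokens": [], "work_done": 0}
    for token in input_stream:          # each token is analyzed as soon as it is typed
        state = analyze_token(state, token)
    return state

print(online_parse(iter("the user types this sentence word by word".split())))
```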

Automatic Word Spacer based on Syllable Bi-gram Model using Word Spacing Information of an Input Sentence (입력 문장의 띄어쓰기를 고려한 음절 바이그램 띄어쓰기 모델)

  • Cho, Han-Cheol;Lee, Do-Gil;Rim, Hae-Chang
    • Proceedings of the Korean Society for Cognitive Science Conference / 2006.06a / pp.67-71 / 2006
  • Most automatic word-spacing correction models proposed to date remove all spaces from the input sentence before performing correction. When the input sentence is already well spaced, this approach can produce a corrected sentence that is worse than the input. To address this problem, this paper proposes an automatic word-spacing correction model that takes the spacing of the input sentence into account. The model improved performance by about 8% when the syllable-level spacing error rate of the input sentence was 5%, and by about 5% when the error rate was 10%.

  • PDF
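
The model described above combines a syllable bigram spacing model with the spacing already present in the input sentence. The sketch below illustrates one simple way such a combination could work; the bigram probabilities and the interpolation weight are invented toy values, not the paper's trained model.

```python
# Toy syllable-bigram word spacer that also respects the input's existing spaces.
# P(space between s1 and s2), as if estimated from correctly spaced text (invented values).
SPACE_PROB = {
    ("그", "는"): 0.1,
    ("는", "학"): 0.8,
    ("학", "교"): 0.05,
    ("교", "에"): 0.1,
    ("에", "간"): 0.85,
    ("간", "다"): 0.05,
}
LAMBDA = 0.3  # weight given to the spacing observed in the input sentence

def respace(text):
    syllables, input_space = [], []
    for ch in text:
        if ch == " ":
            input_space[-1] = 1.0        # the user put a space after the previous syllable
        else:
            syllables.append(ch)
            input_space.append(0.0)
    out = []
    for i, s in enumerate(syllables):
        out.append(s)
        if i + 1 < len(syllables):
            p = SPACE_PROB.get((s, syllables[i + 1]), 0.5)
            score = (1 - LAMBDA) * p + LAMBDA * input_space[i]   # blend model and input
            if score >= 0.5:
                out.append(" ")
    return "".join(out)

print(respace("그는 학교에간다"))  # -> 그는 학교에 간다
```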

The Definition of Data Structure for Design Knowledge Database and Development of the Interface Program for using Natural Language Processing (설계지식 데이터베이스의 자료구조 규명과 자연어처리를 이용한 인터페이스 프로그램 개발)

  • 이정재;이민호;윤성수
    • Magazine of the Korean Society of Agricultural Engineers / v.43 no.6 / pp.187-196 / 2001
  • In this study, automated indexing was performed using natural language processing techniques from the field of artificial intelligence, and the Natural Language Processing Interface for knowledge representation (NALPI) was developed. Furthermore, the DEsign KnOwledge DataBase (DEKODB), designed to interlock with the knowledge base, was also developed. The DEKODB processes both documented design data, such as concrete standard specifications, and design knowledge obtained from experts. The DEKODB also simulates the design space of structures in accordance with production rules, and it is therefore judged that DEKODB can serve as an engine to retrieve new knowledge and to implement the knowledge base necessary for the development of an automatic design system. The application field of the system developed in this study can be expanded by supplementing the design knowledge in DEKODB and by developing dictionaries for foreign languages. Furthermore, full automation of data accumulation and the development of an automatic rule generator would benefit unified design automation.

  • PDF
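
The abstract above mentions simulating the design space in accordance with production rules and using the database as an engine to derive new knowledge. A minimal way to picture that is forward chaining over IF-THEN rules, as in the sketch below; the rules and facts are invented toy examples, not contents of the DEKODB design knowledge database.

```python
# Toy forward-chaining production-rule sketch; rule conditions and conclusions are hypothetical.
RULES = [
    ({"span_m>20", "material=steel"}, "use_truss_girder"),
    ({"use_truss_girder"}, "check_buckling"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until nothing new is derived
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"span_m>20", "material=steel"}))
# derives: use_truss_girder, check_buckling
```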