• Title/Abstract/Keywords: natural language process

247 results

집단지성을 이용한 한글 감성어 사전 구축 (Building a Korean Sentiment Lexicon Using Collective Intelligence)

  • 안정국;김희웅
    • 지능정보연구 / Vol. 21, No. 2 / pp.49-67 / 2015
  • With the growing importance of big-data analysis across many fields, interest in sentiment analysis of unstructured data such as news articles and comments, based on natural language processing, has been rising. Unlike English, however, Korean is an agglutinative language that is difficult to process automatically, and its use in information systems has been limited. This study therefore built a Korean sentiment lexicon through collective intelligence and opened an API service platform (www.openhangul.com) so that anyone can use it in research and practice. To harness collective intelligence, university students on Korea's largest college social-network site voted on whether each word is positive, neutral, or negative. To make the crowdsourcing efficient, we applied the folksonomy notion of 'classification by the people', classifying sentiment rather than defining it. From more than 517,178 dictionary entries, stopword forms were excluded and sentiment-bearing nouns, adjectives, verbs, and adverbs were prioritized; more than 35,000 word votes have been collected so far. The lexicon is designed so that its reliability grows as participants accumulate, and it can also capture, over time, fine-grained shifts in how people perceive a word's sentiment. We plan to keep the voting open and, beyond the current sentiment lexicon, lemma extraction, and category extraction, to provide further APIs for a variety of NLP applications. Whereas earlier studies were limited to proposing methods for sentiment analysis or lexicon construction, this study is distinctive in that it actually used collective intelligence to build and openly share a resource usable in research and practice. We further expect this new attempt, combining collective intelligence with folksonomy to build a Korean sentiment lexicon, to encourage convergent research and practical participation across fields and to suggest a new direction for open collaboration in Korean NLP.
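The abstract does not spell out how individual ballots become lexicon entries; a minimal sketch of one plausible aggregation step (the function name and the majority-share reliability score are assumptions, not the authors' method) might look like:

```python
from collections import Counter

def aggregate_votes(votes):
    """Collapse crowd votes for one word into a sentiment label and a score.

    votes: list of strings, each "positive", "neutral", or "negative".
    The score is the winning label's share of all ballots, so it becomes
    more trustworthy as votes accumulate, mirroring the paper's design
    goal that reliability grows with participation.
    """
    counts = Counter(votes)
    label, top = counts.most_common(1)[0]
    return label, top / len(votes)

label, score = aggregate_votes(["positive", "positive", "neutral", "positive"])
# label is "positive"; score is 0.75
```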

해양사고 예방을 위한 사전학습 언어모델의 순차적 레이블링 기반 복수 인과관계 추출 (Sequence Labeling-based Multiple Causal Relations Extraction using Pre-trained Language Model for Maritime Accident Prevention)

  • 문기영;김도현;양태훈;이상덕
    • 한국안전학회지 / Vol. 38, No. 5 / pp.51-57 / 2023
  • Numerous studies have analyzed the causal relationships of maritime accidents using natural language processing techniques. However, when multiple causes and effects are associated with a single accident, the effectiveness of extracting these causal relations diminishes. To address this challenge, we compiled a dataset of verdicts from maritime accident cases, analyzed their causal relations, and applied labeling that encodes the association between the various causes and effects. To validate the efficacy of the proposed methodology, we fine-tuned the KoELECTRA Korean language model. The validation results demonstrate that our approach successfully extracts multiple causal relationships from maritime accident cases.
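The abstract leaves the labeling scheme unspecified; one common way to let a single sentence carry several relations is to append a relation index to each BIO tag. In this hypothetical sketch the tag names and decoding rule are assumptions, not the paper's scheme:

```python
def decode_relations(tokens, tags):
    """Pair cause/effect spans that share a relation index.

    Tags carry an index, e.g. "B-CAUSE-1" or "I-EFFECT-2"; for brevity
    the B/I boundary information is ignored and each (role, index) pair
    is assumed to label one contiguous span.
    """
    spans = {}
    for tok, tag in zip(tokens, tags):
        if tag == "O":
            continue
        _, role, idx = tag.split("-")   # drop the B/I prefix
        spans.setdefault((role, idx), []).append(tok)
    return [
        (" ".join(words), " ".join(spans.get(("EFFECT", idx), [])))
        for (role, idx), words in sorted(spans.items())
        if role == "CAUSE"
    ]

tokens = "Poor lookout caused the collision , and the collision caused flooding".split()
tags = ["B-CAUSE-1", "I-CAUSE-1", "O", "O", "B-EFFECT-1", "O",
        "O", "O", "B-CAUSE-2", "O", "B-EFFECT-2"]
relations = decode_relations(tokens, tags)
# two relations: ("Poor lookout", "collision") and ("collision", "flooding")
```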

Embedded Distributivity

  • Joh, Yoon-Kyoung
    • 한국언어정보학회지:언어와정보 / Vol. 14, No. 2 / pp.17-32 / 2010
  • Distributivity has been one of the central topics in formal semantics. However, little attention has been paid to embedded distributivity, which occurs very frequently in natural languages. In this paper, I propose a formal analysis of embedded distributivity. In analyzing it, I employ no complicated mechanisms beyond pluralization. Since distributivity reduces to plurality, as Landman (2000) argues, employing plural formation is not an ad hoc approach to embedded distributivity. That is, the plural variable inserted in the process of deriving embedded distributivity is motivated in a principled manner, since the pluralization occurs inside a pluralization operator. Moreover, I point out that the plural variable made available is not restricted to entities.


구문구조 Matrix에 의한 한국어의 수식구조와 개념구조의 해석 (An Analysis of the Korean Modificatory and Conceptual Structure by a Syntactic Matrix)

  • 한광록;최장선;이주근
    • 대한전자공학회논문지 / Vol. 25, No. 12 / pp.1639-1648 / 1988
  • This paper deals with a method of analyzing Korean syntax to implement a natural language understanding system. A matrix of the syntactic structure is derived from the structural features of the Korean language. The modificatory and conceptual structures are extracted from the matrix, and the predicate logic form is expressed by extracting the phrase, clause, and conceptual structure during analysis. This logic form constitutes a knowledge base of the sentence and demonstrates the possibility of inference.


Extracting Ontology from Medical Documents with Ontology Maturing Process

  • Nyamsuren, Enkhbold;Kang, Dong-Yeop;Kim, Su-Kyoung;Choi, Ho-Jin
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2009년도 춘계학술발표대회 / pp.50-52 / 2009
  • Ontology maintenance is a time-consuming and costly process that requires special skill and knowledge. Properly maintaining an ontology and updating the knowledge in it requires the joint effort of both an ontology engineer and a domain specialist. This is especially true for the medical domain, which is highly specialized. This paper proposes a novel approach for the maintenance and update of existing ontologies in the medical domain. The proposed approach is based on a modified Ontology Maturing Process, originally developed for the web domain. It provides a way to populate a medical ontology with new knowledge obtained from medical documents, achieved through natural language processing techniques and highly specialized medical knowledge bases such as the Unified Medical Language System.
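As a rough illustration of the maturing idea, terms harvested from documents might start as candidates and be promoted to full concepts once observed often enough. The threshold, storage, and term source below are assumptions, not the paper's UMLS-backed pipeline:

```python
from collections import Counter

class MaturingOntology:
    """Candidate terms mature into concepts once seen often enough.

    The promotion threshold and the flat concept set are illustrative
    assumptions; the paper's process additionally consults NLP tools
    and knowledge bases such as UMLS.
    """

    def __init__(self, promote_after=3):
        self.candidates = Counter()
        self.concepts = set()
        self.promote_after = promote_after

    def ingest(self, terms):
        """Feed terms extracted from one document; promote frequent ones."""
        for term in terms:
            self.candidates[term] += 1
            if self.candidates[term] >= self.promote_after:
                self.concepts.add(term)

onto = MaturingOntology(promote_after=3)
for _ in range(3):
    onto.ingest(["aspirin"])      # seen in three documents: promoted
onto.ingest(["rare-term"])        # seen once: still a candidate
```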

A Computational Model of Language Learning Driven by Training Inputs

  • 이은석;이지훈;장병탁
    • 한국인지과학회:학술대회논문집 / 한국인지과학회 2010년도 춘계학술대회 / pp.60-65 / 2010
  • Language learning involves the linguistic environment around the learner, so the variation in training input to which learners are exposed has been linked to their language learning. We explore how linguistic experience can cause differences in the learning of linguistic structural features, investigated in a probabilistic graphical model. We gradually vary the amount of training input, composed of natural linguistic data from animation videos for children, from holistic (one-word expressions) to compositional (two- to six-word ones). The recognition and generation of sentences are a "probabilistic" constraint-satisfaction process based on massively parallel DNA chemistry. Random sentence generation tasks succeed when networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, much like children's cognitive development in learning. This model supports the suggestion that variation in early linguistic environments, paired with developmental steps, may be useful for facilitating language acquisition.


GNI Corpus Version 1.0: Annotated Full-Text Corpus of Genomics & Informatics to Support Biomedical Information Extraction

  • Oh, So-Yeon;Kim, Ji-Hyeon;Kim, Seo-Jin;Nam, Hee-Jo;Park, Hyun-Seok
    • Genomics & Informatics / Vol. 16, No. 3 / pp.75-77 / 2018
  • Genomics & Informatics (NLM title abbreviation: Genomics Inform) is the official journal of the Korea Genome Organization. A text corpus of this journal annotated with various levels of linguistic information would be a valuable resource, as information extraction requires syntactic, semantic, and higher levels of natural language processing. In this study, we publish our new corpus, GNI Corpus version 1.0, extracted and annotated from the full texts of Genomics & Informatics using an NLTK (Natural Language Toolkit)-based text mining script. This preliminary version of the corpus can serve as a training and testing set for systems providing a variety of future biomedical text mining functions.
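The authors' NLTK script is not reproduced in the abstract; a stdlib-only stand-in for the most basic corpus-building step (sentence splitting plus tokenization, with the record layout assumed) could look like:

```python
import re

def annotate(text):
    """Build minimal corpus records: split into sentences, then tokenize
    each sentence into words and punctuation marks."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        {"sent": s, "tokens": re.findall(r"\w+|[^\w\s]", s)}
        for s in sentences
    ]

records = annotate("Genomics data grow fast. NLP helps.")
# two records; the first tokenizes to ["Genomics", "data", "grow", "fast", "."]
```

Real corpora would add POS tags, named entities, and other annotation layers on top of each record.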

딥러닝 중심의 자연어 처리 기술 현황 분석 (Analysis of the Status of Natural Language Processing Technology Based on Deep Learning)

  • 박상언
    • 한국빅데이터학회지 / Vol. 6, No. 1 / pp.63-81 / 2021
  • The performance of natural language processing (NLP) has been improving rapidly with recent advances in machine learning and deep learning, and its range of applications is widening accordingly. Interest in NLP has grown especially as the demand for analyzing unstructured text data increases. Yet the complexity of text preprocessing and of machine-learning and deep-learning theory still keeps the barrier to applying NLP high. To support an overall understanding of NLP, this paper surveys its major research areas and the state of its key techniques, centered on machine learning and deep learning, so that readers can more easily understand and apply it. We trace how the weight of NLP has shifted within AI technology classification schemes; organize the main deep-learning-based NLP areas into language modeling, document classification, document generation, document summarization, question answering, and machine translation, reviewing the best-performing models in each; and summarize the major deep-learning models used in NLP together with the datasets and evaluation metrics common in the field. We hope this paper helps researchers who wish to apply NLP in their own domains to grasp the overall state of the technology and to identify its main technical areas, widely used deep-learning models, datasets, and evaluation metrics.

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / Vol. 5, No. 3 / pp.147-154 / 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to perceive and alter text written in natural languages. Different tasks can be performed on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for Hindi. POS tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, and so on. Hindi is the most widely used and official language of India, and it is among the top five most spoken languages of the world. A diverse range of POS taggers is available for English and other languages, but they cannot be applied directly to Hindi, as Hindi is one of the most morphologically rich languages and there are significant differences between the morphological structures of these languages. Thus, this work presents a POS tagger for Hindi using a hybrid approach that combines probability-based and rule-based methods. Known words are tagged with a unigram probability model, whereas unknown words are tagged using various lexical and contextual features. Finite-state automata are constructed to express the different rules, which are then implemented as regular expressions. A tagset of 29 standard part-of-speech tags was also prepared for this task, including two unique tags, a date tag and a time tag, which support all common formats. Regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the approach is to increase the correctness of automatic Hindi POS tagging while bounding the need for a large human-made corpus: the probability-based model improves automatic tagging, and the rule-based model limits the dependence on an already-trained corpus. Based on a very small labeled training set (around 9,000 words), the approach yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
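The paper's tagger targets Hindi with a 29-tag tagset; the toy sketch below mirrors only the hybrid idea, a unigram lexicon for known words plus regex rules for date/time/number patterns. All rules, tag names, and the noun fallback are illustrative assumptions:

```python
import re
from collections import Counter, defaultdict

# Illustrative pattern rules in the spirit of the paper's DATE/TIME tags;
# the exact patterns and tag names here are assumptions.
RULES = [
    (re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+(\.\d+)?$"), "NUM"),
]

class HybridTagger:
    """Unigram model for known words plus regex rules for pattern tags."""

    def __init__(self, tagged_corpus):
        counts = defaultdict(Counter)
        for word, tag in tagged_corpus:
            counts[word][tag] += 1
        # unigram model: remember each known word's most frequent tag
        self.lexicon = {w: c.most_common(1)[0][0] for w, c in counts.items()}

    def tag(self, tokens):
        out = []
        for tok in tokens:
            for pattern, tag in RULES:        # rule-based pass first
                if pattern.match(tok):
                    out.append((tok, tag))
                    break
            else:                             # then the probability-based pass
                out.append((tok, self.lexicon.get(tok, "NOUN")))  # noun fallback
        return out

tagger = HybridTagger([("ram", "NOUN"), ("ram", "NOUN"), ("khata", "VERB")])
tagged = tagger.tag(["ram", "khata", "12/05/2018", "9:30", "42"])
```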

Bi-directional Maximal Matching Algorithm to Segment Khmer Words in Sentence

  • Mao, Makara;Peng, Sony;Yang, Yixuan;Park, Doo-Soon
    • Journal of Information Processing Systems / Vol. 18, No. 4 / pp.549-561 / 2022
  • In the Khmer writing system, the Khmer script is the official script of Cambodia; it is written from left to right without a space separator, which complicates processing and calls for further study. Without clear standard guidelines, the space separator in Khmer is used inconsistently and informally to separate words in sentences. A segmentation method should therefore be established, in combination with future Khmer natural language processing (NLP), to define appropriate rules for Khmer sentences. One essential component of Khmer language processing is splitting text into words and counting the words used in sentences; currently, Microsoft Word cannot count Khmer words correctly. This study therefore presents a systematic library that segments Khmer phrases using the bi-directional maximal matching (BiMM) method to address these constraints. The BiMM algorithm combines forward maximal matching (FMM) and backward maximal matching (BMM) to improve word-segmentation accuracy. A digital or prefix-tree data structure, also known as a trie, further improves segmentation accuracy by finding the children of each word's parent node. The accuracy of BiMM is higher than using FMM or BMM independently; moreover, the proposed approach improves the dictionary structure and reduces the number of errors. On a corpus of 94,807 Khmer words, the method reduces the error rate by 8.57% compared with the FMM and BMM algorithms.
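A compact sketch of BiMM, with a plain set standing in for the paper's trie, Latin letters standing in for Khmer script, and the tie-breaking heuristic (fewer words, then fewer single-character tokens) assumed rather than taken from the paper:

```python
def fmm(text, vocab, max_len):
    """Forward maximal matching: greedily take the longest dictionary
    word starting at each position; fall back to a single character."""
    i, out = 0, []
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab or j == i + 1:
                out.append(text[i:j])
                i = j
                break
    return out

def bmm(text, vocab, max_len):
    """Backward maximal matching: same idea, scanning right to left."""
    i, out = len(text), []
    while i > 0:
        for j in range(max(0, i - max_len), i):
            if text[j:i] in vocab or j == i - 1:
                out.insert(0, text[j:i])
                i = j
                break
    return out

def bimm(text, vocab):
    """Bi-directional maximal matching: run FMM and BMM, prefer the
    segmentation with fewer words, then the one with fewer 1-char tokens."""
    max_len = max(len(w) for w in vocab)
    f, b = fmm(text, vocab, max_len), bmm(text, vocab, max_len)
    if len(f) != len(b):
        return f if len(f) < len(b) else b
    singles = lambda seg: sum(len(w) == 1 for w in seg)
    return b if singles(b) < singles(f) else f

segments = bimm("themepark", {"them", "theme", "park", "me"})
# ["theme", "park"]
```

A trie would replace the `in vocab` set lookups to avoid probing every candidate length, which is the efficiency gain the paper describes.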