• Title/Summary/Keyword: named entity recognition

English-Korean Cross-lingual Link Discovery Using Link Probability and Named Entity Recognition (링크확률과 개체명 인식을 이용한 영-한 교차언어 링크 탐색)

  • Kang, Shin-Jae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.3
    • /
    • pp.191-195
    • /
    • 2013
  • This paper proposes an automatic method for discovering cross-lingual links from English Wikipedia documents to Korean ones in order to increase connectivity among vast web resources. Unlike existing methods that roughly estimate the link probability of phrases, candidate anchors are selected from English documents using various sources of information, such as title lists and linking probabilities extracted from Wikipedia dumps and the results of named-entity recognition; the anchors are then translated into Korean words, and the Korean documents that best match those words are selected as cross-lingual links. The experimental results showed a MAP of 0.375.
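
A minimal sketch of the anchor-selection step described above, assuming a phrase's link probability is estimated from Wikipedia dump statistics as the ratio of its anchor occurrences to its total occurrences. The function names, the 0.05 threshold, and the count structures are illustrative, not taken from the paper.

```python
from collections import Counter

def link_probability(phrase: str, anchor_counts: Counter, text_counts: Counter) -> float:
    """Fraction of the phrase's occurrences in the Wikipedia dump that are link anchors."""
    total = text_counts.get(phrase, 0)
    return anchor_counts.get(phrase, 0) / total if total else 0.0

def select_candidate_anchors(phrases, anchor_counts, text_counts,
                             title_set, ner_spans, threshold=0.05):
    """Keep phrases that match a Wikipedia title, a recognized named entity,
    or exceed an (illustrative) link-probability threshold."""
    return [p for p in phrases
            if p in title_set
            or p in ner_spans
            or link_probability(p, anchor_counts, text_counts) >= threshold]
```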

Development of Tagging Dataset for Named Entity Recognition in Security (정보보안 분야의 위협정보 개체명 인식 시스템 개발을 위한 데이터셋 구축)

  • Kim, GyeongMin;Hur, YunA;Kim, Kuekyeng;Lim, HeuiSeok
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.669-671
    • /
    • 2018
  • Named Entity Recognition (NER) has mainly been used to recognize entities such as person names (PS), location names (LC), and organization names (OG), because these entities are keywords that carry important meaning in the data. When the domain changes, however, there may be entities that are more important than the ones traditionally used. In the information security field in particular, threat information used for malicious purposes carries important meaning within documents. Security documents contain a variety of information that can provide important clues about threats, such as hash values, malware names, IP addresses, and domains/URLs. In this paper, to support the development of a named entity recognition system that can detect threat information in the information security field, we define an annotation scheme with 4 classes and 20 attributes, construct a dataset accordingly, and propose this construction method.
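
The 4-class/20-attribute scheme itself is not given in the abstract; the snippet below is only a hypothetical illustration of how threat-related mentions (hash values, malware names, IPs, URLs) might be annotated with BIO tags for such a dataset.

```python
# Hypothetical BIO-tagged sample for security-domain NER; the tag names
# (HASH, MAL, IP, URL) are illustrative, not the paper's actual classes.
tokens = ["The", "sample", "a1b2c3d4", "of", "Emotet", "contacted",
          "192.0.2.10", "via", "http://malicious.example/payload"]
labels = ["O",   "O",      "B-HASH",   "O",  "B-MAL",  "O",
          "B-IP",          "O",   "B-URL"]

# One token-label pair per line, a common layout for sequence-labeling corpora.
for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```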

Comparative study of text representation and learning for Persian named entity recognition

  • Pour, Mohammad Mahdi Abdollah;Momtazi, Saeedeh
    • ETRI Journal
    • /
    • v.44 no.5
    • /
    • pp.794-804
    • /
    • 2022
  • Transformer models have had a great impact on natural language processing (NLP) in recent years by realizing outstanding and efficient contextualized language models. Recent studies have used transformer-based language models for various NLP tasks, including Persian named entity recognition (NER). However, in complex tasks, for example, NER, it is difficult to determine which contextualized embedding will produce the best representation for the tasks. Considering the lack of comparative studies to investigate the use of different contextualized pretrained models with sequence modeling classifiers, we conducted a comparative study about using different classifiers and embedding models. In this paper, we use different transformer-based language models tuned with different classifiers, and we evaluate these models on the Persian NER task. We perform a comparative analysis to assess the impact of text representation and text classification methods on Persian NER performance. We train and evaluate the models on three different Persian NER datasets, that is, MoNa, Peyma, and Arman. Experimental results demonstrate that XLM-R with a linear layer and conditional random field (CRF) layer exhibited the best performance. This model achieved phrase-based F-measures of 70.04, 86.37, and 79.25 and word-based F scores of 78, 84.02, and 89.73 on the MoNa, Peyma, and Arman datasets, respectively. These results represent state-of-the-art performance on the Persian NER task.
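
A minimal sketch of the best-performing configuration named above (XLM-R encoder, linear emission layer, CRF), assuming the Hugging Face transformers and pytorch-crf packages; the checkpoint name and label count are illustrative and may differ from the paper's setup.

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF

class XlmrCrfTagger(nn.Module):
    def __init__(self, num_labels: int, model_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.emissions(hidden)
        mask = attention_mask.bool()
        if labels is not None:
            # Negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(scores, labels, mask=mask, reduction="mean")
        # Viterbi decoding of the most likely tag sequence per sentence.
        return self.crf.decode(scores, mask=mask)
```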

Development and Evaluation of Information Extraction Module for Postal Address Information (우편주소정보 추출모듈 개발 및 평가)

  • Shin, Hyunkyung;Kim, Hyunseok
    • Journal of Creative Information Culture
    • /
    • v.5 no.2
    • /
    • pp.145-156
    • /
    • 2019
  • In this study, we developed and evaluated an information-extraction module based on the named entity recognition technique. The module was designed to extract postal address information from arbitrary documents without any prior knowledge of the document layout. From a practical standpoint, our approach is a probabilistic n-gram (bi- or tri-gram) method, a generalization of uni-gram-based keyword matching. The main difference between our approach and conventional natural language processing methods is that sentence detection, tokenization, and POS tagging are applied recursively rather than sequentially. Test results on approximately two thousand documents are presented in this paper.
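
A rough sketch of the probabilistic bi-gram idea, assuming bi-gram probabilities over coarse token classes have been estimated from a labeled corpus; the token classes, suffix list, and probability values below are invented for illustration only.

```python
from collections import defaultdict

def token_class(tok: str) -> str:
    """Map a token to a coarse class (illustrative heuristic, not the paper's)."""
    if tok.isdigit():
        return "NUM"
    if tok.endswith(("로", "길", "구", "시", "도")):  # common Korean address suffixes
        return "ADDR_SUFFIX"
    return "WORD"

# P(class_i, class_{i+1} | address span), estimated from labeled data (dummy values here).
bigram_prob = defaultdict(lambda: 0.01, {
    ("WORD", "ADDR_SUFFIX"): 0.35,
    ("ADDR_SUFFIX", "ADDR_SUFFIX"): 0.30,
    ("ADDR_SUFFIX", "NUM"): 0.25,
})

def address_score(tokens):
    """Product of bi-gram probabilities over the window; higher means more address-like."""
    classes = [token_class(t) for t in tokens]
    score = 1.0
    for a, b in zip(classes, classes[1:]):
        score *= bigram_prob[(a, b)]
    return score

print(address_score(["서울", "종로구", "세종대로", "175"]))  # address-like window
print(address_score(["오늘", "날씨가", "좋다", "정말"]))      # non-address window
```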

Using Syntax and Shallow Semantic Analysis for Vietnamese Question Generation

  • Phuoc Tran;Duy Khanh Nguyen;Tram Tran;Bay Vo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.10
    • /
    • pp.2718-2731
    • /
    • 2023
  • This paper presents a method of using syntax and shallow semantic analysis for Vietnamese question generation (QG). Specifically, our proposed technique concentrates on investigating both the syntactic and shallow semantic structure of each sentence. The main goal of our method is to generate questions from a single sentence. These generated questions are known as factoid questions, which require short, fact-based answers. In general, syntax-based analysis is one of the most popular approaches within the QG field, but it requires linguistic expert knowledge as well as a deep understanding of syntax rules in the Vietnamese language. It is thus considered a high-cost and inefficient solution due to the significant human effort required to obtain qualified syntax rules. To deal with this problem, we collected the syntax rules in Vietnamese from a Vietnamese language textbook. Moreover, we also used different natural language processing (NLP) techniques to analyze Vietnamese shallow syntax and semantics for the QG task. These techniques include: sentence segmentation, word segmentation, part-of-speech tagging, chunking, dependency parsing, and named entity recognition. We used human evaluation to assess the credibility of our model: we manually generated questions from the corpus and then compared them with the automatically generated questions. The empirical evidence demonstrates that our proposed technique performs well, producing questions that are very similar to those created by humans.
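
A hypothetical sketch of the final, rule-based step: once a sentence has been segmented, POS-tagged, chunked, and run through NER (those tools are not shown here), a syntax rule replaces the subject of a subject-verb-object pattern with a question word chosen by entity type. The rule and the entity-to-question-word mapping are invented for illustration and are not the paper's actual rules.

```python
# Map a named-entity type to a Vietnamese question word (illustrative mapping).
QUESTION_WORD = {"PER": "Ai", "LOC": "Ở đâu", "ORG": "Tổ chức nào"}  # who / where / which organization

def generate_factoid_question(subject: str, subject_type: str, verb: str, obj: str):
    """Replace the subject with the question word matching its entity type."""
    qword = QUESTION_WORD.get(subject_type, "Cái gì")  # "what" as a fallback
    return f"{qword} {verb} {obj}?", subject           # (question, expected answer)

# "Nguyễn Du viết Truyện Kiều" -> "Ai viết Truyện Kiều?" (answer: "Nguyễn Du")
question, answer = generate_factoid_question("Nguyễn Du", "PER", "viết", "Truyện Kiều")
print(question, "->", answer)
```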

Token-Based Classification and Dataset Construction for Detecting Modified Profanity (변형된 비속어 탐지를 위한 토큰 기반의 분류 및 데이터셋)

  • Sungmin Ko;Youhyun Shin
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.181-188
    • /
    • 2024
  • Traditional profanity detection methods have limitations in identifying intentionally altered profanities. This paper introduces a new method based on Named Entity Recognition, a subfield of Natural Language Processing. We developed a profanity detection technique using sequence labeling, for which we constructed a dataset by labeling profanities in Korean malicious comments, and conducted experiments. Additionally, to enhance the model's performance, we augmented the dataset by labeling parts of a Korean hate speech dataset with ChatGPT, a large language model, and conducted training. During this process, we confirmed that simply having humans filter the dataset created by the large language model could improve performance, which suggests that human oversight is still necessary in the dataset augmentation process.
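
A small sketch of the sequence-labeling formulation: each token receives a BIO tag marking whether it is part of a (possibly obfuscated) profanity, and tagged spans are collected afterwards. The sample text and tag names are made up for illustration.

```python
tokens = ["this", "is", "so", "d@mn",   "annoying"]
labels = ["O",    "O",  "O",  "B-PROF", "O"]       # BIO tags over tokens

def extract_profanity_spans(tokens, labels):
    """Collect token spans tagged as profanity from BIO labels."""
    spans, current = [], []
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif lab.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
                current = []
    if current:
        spans.append(" ".join(current))
    return spans

print(extract_profanity_spans(tokens, labels))  # ['d@mn']
```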

Development of a Tourism Information QA Service for the Task-oriented Chatbot Service

  • Hoon-chul Kang;Myeong-Gyun Kang;Jeong-Woo Jwa
    • International Journal of Advanced Culture Technology
    • /
    • v.12 no.3
    • /
    • pp.73-79
    • /
    • 2024
  • The smart tourism chatbot service, together with the smart tourism app, provides smart tourism services to users easily and conveniently. In this paper, a tourism information QA (Question Answering) service is proposed based on the task-oriented smart tourism chatbot system [13]. The tourism information QA service is an MRC (Machine Reading Comprehension)-based QA system that finds answers in context and provides them to users. The tourism information QA system consists of NER (Named Entity Recognition), DST (Dialogue State Tracking), Neo4J graph DB, and QA servers. The proposed service uses the tourism information NER model and the DST model to identify the intent of the user's question and retrieves appropriate context for the answer from the Neo4J tourism knowledge base. The QA model finds answers in the context and provides them to users through the smart tourism app. We develop the tourism information QA model by transfer learning on the BigBird model, which can process contexts of 4,096 tokens, using the tourism information QA dataset.
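
A minimal sketch of the MRC answering step, assuming the Hugging Face transformers question-answering pipeline and a BigBird-based checkpoint fine-tuned on the tourism QA dataset; the model path and the context string are placeholders, not the paper's actual checkpoint or knowledge-base output.

```python
from transformers import pipeline

# Placeholder path for a BigBird QA model fine-tuned on tourism data (hypothetical).
qa = pipeline("question-answering", model="path/to/bigbird-tourism-qa")

# In the described system the context would be retrieved from the Neo4J tourism
# knowledge base after the NER and DST models identify the question's intent;
# here it is a hard-coded illustrative string.
context = ("Seongsan Ilchulbong is a tuff cone on the eastern coast of Jeju Island. "
           "It is open from 07:00 to 20:00 and the entrance fee is 5,000 won.")
result = qa(question="What is the entrance fee for Seongsan Ilchulbong?", context=context)
print(result["answer"])  # expected: "5,000 won"
```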

Korean Entity Recognition System using Bi-directional LSTM-CNN-CRF (Bi-directional LSTM-CNN-CRF를 이용한 한국어 개체명 인식 시스템)

  • Lee, Dong-Yub;Lim, Heui-Seok
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.327-329
    • /
    • 2017
  • A Named Entity Recognition system recognizes words or phrases that carry entity names, such as person names (PS), location names (LC), and organization names (OG), in a document and labels them with the corresponding entity type. To develop the NER system, we propose a method of constructing features from deep-learning-based word embeddings, morphological characteristics of sentences, and pre-built lexicons, and a method of learning the constructed features with models such as a bi-directional LSTM, a CNN, and a CRF. For experimental data, we used the 2016klpNER dataset provided by the 2017 Korean Language Information System Competition. Of the total 4,258 sentences, 3,406 were used for training, 426 for validation, and 426 for testing. Under chunk-level evaluation with the BIO tagging scheme, the proposed model achieved a test accuracy of 98.9% and an F1-score of 89.4%.
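
A compact sketch of the Bi-directional LSTM-CNN-CRF architecture described above, written in PyTorch with the pytorch-crf package; embedding sizes and hidden dimensions are illustrative, and the lexicon-based features mentioned in the abstract are omitted for brevity.

```python
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLstmCnnCrf(nn.Module):
    def __init__(self, word_vocab, char_vocab, num_tags,
                 word_dim=100, char_dim=30, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        # Character-level CNN: one feature vector per word from its characters.
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, words, chars, tags=None, mask=None):
        # words: (batch, seq_len); chars: (batch, seq_len, max_word_len)
        b, s, c = chars.shape
        ch = self.char_emb(chars).view(b * s, c, -1).transpose(1, 2)
        ch = torch.max(self.char_cnn(ch), dim=2).values.view(b, s, -1)  # max-pool over characters
        feats = torch.cat([self.word_emb(words), ch], dim=-1)
        scores = self.emissions(self.lstm(feats)[0])
        if tags is not None:
            return -self.crf(scores, tags, mask=mask, reduction="mean")  # training loss
        return self.crf.decode(scores, mask=mask)                        # predicted tag sequences
```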

PharmacoNER Tagger: a deep learning-based tool for automatically finding chemicals and drugs in Spanish medical texts

  • Armengol-Estape, Jordi;Soares, Felipe;Marimon, Montserrat;Krallinger, Martin
    • Genomics & Informatics
    • /
    • v.17 no.2
    • /
    • pp.15.1-15.7
    • /
    • 2019
  • Automatically detecting mentions of pharmaceutical drugs and chemical substances is key for the subsequent extraction of relations between chemicals and other biomedical entities such as genes, proteins, diseases, adverse reactions, or symptoms. The identification of drug mentions is also a prior step for complex event types such as drug dosage recognition, duration of medical treatments, or drug repurposing. Formally, this task is known as named entity recognition (NER), meaning automatically identifying mentions of predefined entities of interest in running text. In the domain of medical texts, techniques based on hand-crafted rules and graph-based models can provide adequate performance for chemical entity recognition (CER). In recent years, the field of natural language processing has largely pivoted to deep learning, and state-of-the-art results for most tasks involving natural language are usually obtained with artificial neural networks. Competitive resources for drug name recognition in English medical texts are already available and heavily used, while for other languages such as Spanish these tools, although clearly needed, were missing. In this work, we adapt an existing neural NER system, NeuroNER, to the particular domain of Spanish clinical case texts, and extend the neural network to take into account additional features apart from the plain text. NeuroNER can be considered a competitive baseline system for Spanish drug and chemical entity recognition, promoted by the Spanish national plan for the advancement of language technologies (Plan TL).
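
The abstract does not reproduce the training-data format, so the snippet below is only an illustrative example of how a Spanish clinical sentence with drug/chemical mentions might be laid out in a CoNLL-style BIO file for training a neural NER system such as NeuroNER; the sentence, labels, and file name are made up.

```python
tokens = ["Se", "administró", "paracetamol", "y", "ácido",  "acetilsalicílico",
          "al", "paciente", "."]
labels = ["O",  "O",          "B-CHEM",      "O", "B-CHEM", "I-CHEM",
          "O",  "O",          "O"]

# One "token<TAB>label" line per token; a blank line separates sentences.
with open("train.conll", "w", encoding="utf-8") as f:
    for token, label in zip(tokens, labels):
        f.write(f"{token}\t{label}\n")
    f.write("\n")
```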