• Title/Summary/Keyword: Recognition of Named Entity

Korean Named Entity Recognition using BERT (BERT 를 활용한 한국어 개체명 인식기)

  • Hwang, Seokhyun;Shin, Seokhwan;Choi, Donggeun;Kim, Seonghyun;Kim, Jaieun
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.820-822 / 2019
  • A named entity is a word or phrase that carries a specific meaning in a document, such as a person, organization, location, date, or time, and named entity recognition is the task of finding these entities and determining their semantic categories. This paper proposes a Korean named entity recognizer based on BERT (Bidirectional Encoder Representations from Transformers). By leveraging a pre-trained BERT model, the proposed model maximizes performance, achieving a final F1-score of 90.62 and clearly outperforming a Bi-LSTM-Attention-CRF model.
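
A minimal sketch of token classification with a pre-trained BERT, in the spirit of the recognizer described above; the checkpoint name and BIO label set below are illustrative assumptions, not the paper's exact configuration.

```python
# Token classification with a pre-trained BERT via Hugging Face Transformers.
# Predictions are random until the classification head is fine-tuned on labeled NER data.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DAT", "I-DAT"]
model_name = "bert-base-multilingual-cased"  # stand-in for the paper's pre-trained BERT

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))

sentence = "삼성전자는 1969년 수원에서 설립되었다."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, pred in zip(tokens, pred_ids):
    print(token, labels[pred])
```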

Korean Named Entity Recognition using D-Tag (D-Tag를 이용한 한국어 개체명 인식)

  • Eunsu Kim;Sujong Do;Cheoneum Park
    • Annual Conference on Human and Language Technology / 2022.10a / pp.35-40 / 2022
  • This paper introduces the Delimiter tag (D-tag), a new tagging format for named entity recognition, a sequence labeling problem. The BIO-tag format used in sequence labeling expands each entity label into B (beginning) and I (inside) variants, doubling the number of target classes. The BIO-tag format also causes the model to misclassify B and I, and for fine-grained entity sets with many labels it leads to label confusion. The D-tag format proposed in this paper does not increase the number of target classes and therefore avoids these problems. Experimental results show that a model trained with D-tag outperforms one trained with BIO-tag, confirming the format's promise.
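
A small sketch of the class-count arithmetic mentioned above: BIO tagging doubles the label inventory, while a scheme that keeps one class per entity type does not. The entity types below are illustrative, and the actual D-tag encoding is not reproduced here.

```python
# BIO tagging adds a B- and an I- class for every entity type, plus O.
entity_types = ["PER", "ORG", "LOC", "DAT", "TIM"]

bio_labels = ["O"] + [f"{prefix}-{t}" for t in entity_types for prefix in ("B", "I")]
plain_labels = ["O"] + entity_types  # one class per type, which a scheme like D-tag aims to keep

print(len(bio_labels))    # 11: the target class count roughly doubles
print(len(plain_labels))  # 6: the count no longer scales with the span encoding
```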

A Method for Extracting Equipment Specifications from Plant Documents and Cross-Validation Approach with Similar Equipment Specifications (플랜트 설비 문서로부터 설비사양 추출 및 유사설비 사양 교차 검증 접근법)

  • Jae Hyun Lee;Seungeon Choi;Hyo Won Suh
    • Journal of Korea Society of Industrial Information Systems / v.29 no.2 / pp.55-68 / 2024
  • Plant engineering companies create or refer to requirements documents for each related field, such as plant process/equipment/piping/instrumentation, in different engineering departments. A process-related requirements document includes not only a description of the process but also the requirements of the equipment or related facilities that will operate it. Because the authors and reviewers of these documents differ, inconsistencies may arise between the equipment or part design specifications described in different requirements documents. Ensuring consistency in these matters can increase the reliability of the overall plant design information. However, the volume of documents and the fact that requirements for the same equipment and parts are scattered across different documents make it challenging for engineers to trace and manage requirements. This paper proposes a method that analyzes requirement sentences and calculates their similarity in order to identify semantically identical sentences. To calculate the similarity of requirement sentences, a named entity recognition method is proposed to identify the compound words for parts and properties that are semantically central to the requirements, along with a method to calculate the similarity of the identified compound words. The proposed method is explained using sentences from practical documents, and experimental results are described.
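
A hedged sketch of the comparison step this abstract describes: once an NER step has identified the part/property compound words in two requirement sentences, their overlap can serve as a similarity signal. The extraction itself is stubbed out, and the Jaccard-style overlap below is an illustrative stand-in, not the paper's exact measure.

```python
# Compare two requirement sentences by the part/property terms an NER step would extract.
def compound_word_similarity(terms_a: set[str], terms_b: set[str]) -> float:
    """Jaccard overlap of recognized terms: 0.0 = disjoint, 1.0 = identical."""
    if not terms_a or not terms_b:
        return 0.0
    return len(terms_a & terms_b) / len(terms_a | terms_b)

# Hypothetical terms recognized in two sentences about the same pump.
sent1_terms = {"circulation pump", "design pressure", "casing material"}
sent2_terms = {"circulation pump", "design pressure", "impeller diameter"}
print(compound_word_similarity(sent1_terms, sent2_terms))  # 0.5
```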

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the flood of content is becoming ever more important. Rather than treating an information request as a simple string, efforts are being made to better reflect user intent in search results, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the fresher the information, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the flow of information is vast and new information keeps emerging. However, it faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, manually labeling text data becomes harder as the scope of knowledge grows and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because the concept of knowledge itself is ambiguous. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. The study has three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it shows that performance can be evaluated through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data at the sentence level without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. To confirm the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the test set. Before constructing the model, all training-set reports are grouped by stock, and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by frequency of appearance are selected and vectorized using one-hot encoding. A neural tensor network is then used to train one score function per stock. Thus, when a new entity appears in the test set, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, prediction power and the soundness of the score functions are assessed by calculating the hit ratio over all test-set reports. The model shows a 69.3% hit ratio on the test set of 2,526 reports, which is meaningfully high despite some constraints on the research. Looking at per-stock prediction performance, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average, which may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information according to the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above, although some limits remain to be addressed; in particular, the especially poor performance on only some stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
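
A hedged sketch of a neural tensor network scoring layer in PyTorch, in the spirit of the per-stock score functions described above. Pairing a stock representation with a one-hot entity vector, and all of the dimensions below, are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    """Neural tensor network score: u^T tanh(e1^T W[1:k] e2 + V[e1; e2] + b)."""
    def __init__(self, dim: int, slices: int = 4):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, slices)   # tensor (bilinear) term
        self.linear = nn.Linear(2 * dim, slices)        # standard feed-forward term
        self.u = nn.Linear(slices, 1, bias=False)       # projection to a scalar score

    def forward(self, stock_vec: torch.Tensor, entity_vec: torch.Tensor) -> torch.Tensor:
        hidden = torch.tanh(
            self.bilinear(stock_vec, entity_vec)
            + self.linear(torch.cat([stock_vec, entity_vec], dim=-1))
        )
        return self.u(hidden).squeeze(-1)

# One-hot entity over a 100-term vocabulary, echoing the per-stock top-100 entities above.
score_fn = NTNScore(dim=100)
stock = torch.randn(1, 100)                        # illustrative stock representation
entity = torch.zeros(1, 100); entity[0, 7] = 1.0   # one-hot encoded entity
print(score_fn(stock, entity))                     # higher score = entity judged more related
```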

Considerations for Applying Korean Natural Language Processing Technology in Records Management (기록관리 분야에서 한국어 자연어 처리 기술을 적용하기 위한 고려사항)

  • Haklae, Kim
    • Journal of Korean Society of Archives and Records Management / v.22 no.4 / pp.129-149 / 2022
  • Records have temporal characteristics, spanning past and present; linguistic characteristics not limited to a specific language; and various types categorized in complex ways. Processing records such as text, video, and audio across the life cycle of creation, preservation, and utilization entails considerable effort and cost. Primary natural language processing (NLP) technologies, such as machine translation, document summarization, named-entity recognition, and image recognition, can be widely applied to electronic records and to the digitization of analog records. In particular, Korean deep learning-based NLP technologies effectively recognize various record types and generate records management metadata. This paper provides an overview of Korean NLP technologies and discusses considerations for applying them in records management. The process of using NLP technologies such as machine translation and optical character recognition for the digital conversion of records is introduced through an example implemented in the Python environment. In addition, a plan to improve environmental factors and records digitization guidelines is proposed for applying NLP technology in the records management field.
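
A minimal sketch of the kind of Python pipeline the paper discusses, combining OCR of a scanned record with machine translation. The specific libraries and checkpoint used here (pytesseract and the Helsinki-NLP/opus-mt-ko-en model) are illustrative choices, not the paper's own implementation.

```python
# OCR a scanned record and translate the recognized Korean text into English.
from PIL import Image
import pytesseract
from transformers import pipeline

def digitize_and_translate(image_path: str) -> str:
    # Requires Tesseract with the Korean ("kor") language data installed.
    korean_text = pytesseract.image_to_string(Image.open(image_path), lang="kor")
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ko-en")
    return translator(korean_text)[0]["translation_text"]

print(digitize_and_translate("scanned_record.png"))  # hypothetical scanned record image
```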

Natural language processing techniques for bioinformatics

  • Tsujii, Jun-ichi
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.3-3 / 2003
  • With biomedical literature expanding so rapidly, there is an urgent need to discover and organize knowledge extracted from texts. Although factual databases contain crucial information, the overwhelming amount of new knowledge remains in textual form (e.g., MEDLINE). In addition, new terms are constantly coined, as are the relationships linking new genes, drugs, proteins, etc. As the biomedical literature grows, more systems are applying a variety of methods to automate the process of knowledge acquisition and management. In my talk, I focus on our group's GENIA project at the University of Tokyo, whose objective is to construct an information extraction system for protein-protein interactions from MEDLINE abstracts. The talk covers (1) techniques we use for named entity recognition: (1-a) SOHMM (Self-organized HMM), (1-b) Maximum Entropy Model, (1-c) Lexicon-based Recognizer; (2) treatment of term variants and acronym finders; (3) event extraction using a full parser; and (4) linguistic resources for text mining (GENIA corpus): (4-a) semantic tags, (4-b) structural annotations, (4-c) co-reference tags, (4-d) GENIA ontology. I will also talk about possible extensions of our work that link the findings of molecular biology with clinical findings, and claim that text-based or concept-based biology would be a viable alternative to systems biology, which tends to emphasize the role of simulation models in bioinformatics.
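
As a toy illustration of the lexicon-based recognition idea listed in the talk outline, the sketch below looks up known biomedical terms in a sentence by simple string matching; the lexicon entries and matching strategy are illustrative and not components of the GENIA system.

```python
# Naive lexicon-based entity recognition: report known terms that occur in the text.
lexicon = {"interleukin-2", "NF-kappa B", "T cell", "tyrosine kinase"}  # hypothetical entries

def lexicon_recognize(text: str) -> list[str]:
    lowered = text.lower()
    # Prefer longer terms first so multi-word matches are reported before their substrings.
    return [term for term in sorted(lexicon, key=len, reverse=True) if term.lower() in lowered]

print(lexicon_recognize("Activation of NF-kappa B requires a tyrosine kinase in T cell lines."))
```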

Ontology Knowledge based Information Retrieval for User Query Interpretation (사용자 질의 의미 해석을 위한 온톨로지 지식 기반 검색)

  • Kim, Nanju;Pyo, Hyejin;Jeong, Hoon;Choi, Euiin
    • Journal of Digital Convergence / v.12 no.6 / pp.245-252 / 2014
  • Semantic search promises to provide more accurate results than present-day keyword-matching search by using a logically represented knowledge base. However, ordinary users are not familiar with complex formal query languages or the schema of the knowledge base, so the system has to interpret the meaning of the user's keywords. In this paper, we describe a user query interpretation system for the semantic retrieval of multimedia contents. Our system is driven by an ontological knowledge base in the sense that the interpretation process is integrated into a unified structure around a knowledge base built on domain ontologies.
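
A hedged sketch of one way the keyword-to-concept step could look: matching a user's keyword against rdfs:label values in a domain ontology with rdflib. The ontology file and query are assumptions for illustration; the paper's own interpretation pipeline is not reproduced here.

```python
# Look up ontology concepts whose labels contain the user's keyword.
from rdflib import Graph, Literal

def interpret_keyword(keyword: str, ontology_path: str = "domain.owl"):
    g = Graph()
    g.parse(ontology_path)  # hypothetical domain ontology describing multimedia contents
    query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?concept ?label WHERE {
            ?concept rdfs:label ?label .
            FILTER(CONTAINS(LCASE(STR(?label)), LCASE(STR(?kw))))
        }
    """
    rows = g.query(query, initBindings={"kw": Literal(keyword)})
    return [(str(concept), str(label)) for concept, label in rows]

# interpret_keyword("documentary")  # would list matching concepts from the ontology
```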

Personal Smart Travel Planner Service

  • Ki-Beom Kang;Myeong Gyun Kang;Seong-Hyuk Jo;Jeong-Woo Jwa
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.385-392 / 2023
  • The smart tourism service provides tourists with a personal travel planner service and context-awareness-based tour guide services. In this paper, we propose a personal travel planner service that creates a user's travel itinerary using the smart tourism app and the travel planner system. The smart tourism app provides recommended travel products and POI tourist information used to create the itinerary, as well as a smart tourism chatbot service that allows users to select POI tourist information easily and conveniently. The travel planner system consists of the smart tourism information system and the smart tourism chatbot system. The smart tourism information system provides users with travel planner services, recommended travel products, and POI tourism information through the smart tourism app. The smart tourism chatbot system consists of named entity recognition (NER), dialogue state tracking (DST), and Neo4J servers, and provides chatbot services to the smart tourism app. Users can create their own travel itinerary, modify it while traveling, and then register it as a recommended travel product for other users, including acquaintances.
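
A hedged sketch of how the chatbot side might turn NER/DST output (say, a region and a POI category extracted from the user's utterance) into a Neo4J lookup. The graph schema, connection details, and Cypher query below are assumptions for illustration only.

```python
# Query a Neo4j graph for POIs matching slots filled by the NER/DST servers.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def find_pois(region: str, category: str) -> list[str]:
    cypher = (
        "MATCH (p:POI)-[:LOCATED_IN]->(r:Region {name: $region}) "
        "WHERE p.category = $category RETURN p.name AS name LIMIT 5"
    )
    with driver.session() as session:
        return [record["name"] for record in session.run(cypher, region=region, category=category)]

print(find_pois(region="Jeju", category="cafe"))  # slots a user utterance might have filled
```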

Improving Quality of Training Corpus for Named Entity Recognition Using Heuristic Rules (휴리스틱을 이용한 개체명 인식 학습 말뭉치 품질 향상)

  • Lee, Seong-Hee;Song, Yeong-Kil;Kim, Hark-Soo
    • Annual Conference on Human and Language Technology / 2015.10a / pp.202-205 / 2015
  • Named entity recognition is the task of extracting named entities from documents and determining the categories of the extracted entities. Conventional supervised approaches require a large training corpus manually annotated with entity categories, and building such a corpus takes considerable labor and time. This paper proposes a method that minimizes the cost of corpus construction and improves corpus quality by removing noise from the initial training corpus. The proposed method first builds an initial entity-annotated corpus by distant supervision, using an entity dictionary constructed with a semi-automatic dictionary-building method. Heuristic rules are then applied to remove noise from this initial corpus, improving both the quality of the training corpus and NER performance. In experiments, applying the heuristics improved the NER F1-score from 67.36% to 73.17%.
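
A hedged sketch of the labeling-and-filtering idea described above: annotate tokens by dictionary lookup (distant supervision), then discard likely-noisy annotations with a simple rule. The dictionary entries and the specific heuristic are illustrative, not the paper's actual rules.

```python
# Distant supervision by dictionary lookup, followed by a heuristic noise filter.
entity_dict = {"서울": "LOC", "삼성전자": "ORG", "이순신": "PER"}  # semi-automatically built lexicon

def distant_label(tokens: list[str]) -> list[tuple[str, str]]:
    return [(tok, entity_dict.get(tok, "O")) for tok in tokens]

def heuristic_filter(labeled: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # Example heuristic: distrust one-character dictionary hits, which are often spurious.
    return [(tok, "O" if lab != "O" and len(tok) < 2 else lab) for tok, lab in labeled]

tokens = ["이순신", "장군", "은", "서울", "에서", "태어났다", "."]
print(heuristic_filter(distant_label(tokens)))
```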

An Analysis of Named Entity Recognition System using MLM-based Language Transfer Learning (MLM 기반 언어 간 전이학습을 이용한 개체명 인식 방법론 분석)

  • Junyoung Son;Gyeongmin Kim;Jinsung Kim;Yuna Hur;Heuiseok Lim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.284-288 / 2022
  • With the recent construction and advancement of various language models, the performance of named entity recognition systems has reached state-of-the-art levels. However, most related research deals only with languages for which data are plentiful and therefore assumes the existence of high-quality supervised training data. Most languages lack supervised data from which the latent characteristics of entity types can be sufficiently learned, so they often face resource scarcity. This paper analyzes a named entity recognition methodology based on cross-lingual transfer learning with masked language modeling. The source language performing the transfer is assumed to be high-resource, and the target language receiving the transfer is assumed to be low-resource. We add prompt tokens for entity types, which act as language-independent virtual features, to the language model's token vocabulary, train them on the source language, and evaluate the proposed methodology when transferring to the target language. Experimental results show that the proposed methodology outperforms ordinary fine-tuning; the target language most affected by transfer from Korean was Dutch, and the source language with the greatest effect when transferring into Korean was Chinese.
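
A hedged sketch of the vocabulary step described above: adding language-independent prompt tokens for entity types to an MLM's tokenizer and resizing its embedding matrix. The token names and checkpoint are illustrative; the paper's training and transfer procedure is not shown.

```python
# Add entity-type prompt tokens to a masked language model's vocabulary.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-multilingual-cased"  # stand-in for a multilingual MLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

prompt_tokens = ["[ENT-PER]", "[ENT-ORG]", "[ENT-LOC]", "[ENT-MISC]"]  # hypothetical prompts
tokenizer.add_special_tokens({"additional_special_tokens": prompt_tokens})
model.resize_token_embeddings(len(tokenizer))  # give the new prompt tokens trainable embeddings

print(tokenizer.tokenize("[ENT-PER] 김구는 [ENT-LOC] 상하이에서 활동했다."))
```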
