• Title/Summary/Keyword: Bidirectional encoder

Building Specialized Language Model for National R&D through Knowledge Transfer Based on Further Pre-training (추가 사전학습 기반 지식 전이를 통한 국가 R&D 전문 언어모델 구축)

  • Yu, Eunji;Seo, Sumin;Kim, Namgyu
    • Knowledge Management Research / v.22 no.3 / pp.91-106 / 2021
  • With the recent rapid development of deep learning technology, the demand for analyzing huge text documents in the national R&D field from various perspectives is rapidly increasing. In particular, interest in applying a BERT (Bidirectional Encoder Representations from Transformers) language model pre-trained on a large corpus is growing. However, the terminology frequently used in highly specialized fields such as national R&D is often not sufficiently learned by basic BERT, which has been pointed out as a limitation of BERT for understanding documents in specialized fields. Therefore, this study proposes a method to build an R&D KoBERT language model that transfers national R&D field knowledge to basic BERT through further pre-training. In addition, to evaluate the performance of the proposed model, we performed classification analysis on about 116,000 R&D reports in the health care and information and communication fields. Experimental results showed that the proposed model achieved higher accuracy than the pure KoBERT model.
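
As a rough illustration of the further pre-training step this abstract describes, the sketch below runs domain-adaptive masked-language-model training with Hugging Face Transformers. The checkpoint name, corpus path, and hyperparameters are placeholder assumptions, not the paper's setup; klue/bert-base stands in for KoBERT.

```python
# Hedged sketch of further pre-training (domain-adaptive MLM) on an R&D corpus.
# "klue/bert-base" stands in for KoBERT; "rnd_corpus.txt" is a placeholder path.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForMaskedLM.from_pretrained("klue/bert-base")

# One report (or paragraph) of plain text per line.
dataset = load_dataset("text", data_files={"train": "rnd_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rnd-kobert", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15))
trainer.train()  # the adapted encoder is then fine-tuned for report classification
```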

Analysis of trends in deep learning and reinforcement learning

  • Dong-In Choi;Chungsoo Lim
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.55-65 / 2023
  • In this paper, we apply KeyBERT (keyword extraction with Bidirectional Encoder Representations from Transformers)-driven topic extraction and topic-frequency analysis to deep learning and reinforcement learning research to discover the rapidly changing trends in these fields. First, we crawled abstracts of research papers on deep learning and reinforcement learning and divided them temporally into two groups. After pre-processing the crawled data, we extracted topics using the KeyBERT algorithm and then analyzed them in terms of topic occurrence frequency. This analysis reveals distinct trends in the research on all analyzed algorithms and applications and clearly shows which topics are gaining interest. It also demonstrates the effectiveness of topic extraction and topic-frequency analysis for research trend analysis, and the scheme is expected to be usable for trend analysis in other research fields. In addition, the analysis can provide insight into how deep learning will evolve in the near future and can guide the selection of research topics and methodologies by informing researchers of those that have recently been attracting attention.
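
KeyBERT's public API makes the pipeline in this abstract easy to sketch. The two-period split, the n-gram range, and the top-n setting below are illustrative assumptions, and the tiny inline corpora are placeholders for the crawled abstracts.

```python
# Sketch: KeyBERT keyword extraction followed by topic-frequency comparison
# across two time periods. Corpora and settings are placeholders.
from collections import Counter
from keybert import KeyBERT

kw_model = KeyBERT()  # loads a default sentence-transformers backbone

def topic_frequencies(abstracts):
    counts = Counter()
    for doc in abstracts:
        for keyword, _score in kw_model.extract_keywords(
                doc, keyphrase_ngram_range=(1, 2), top_n=5):
            counts[keyword] += 1
    return counts

early = topic_frequencies(["Deep Q-networks for Atari game playing..."])
late = topic_frequencies(["Offline reinforcement learning with transformers..."])
# Topics whose frequency rises between the periods indicate growing interest.
rising = sorted(late, key=lambda k: late[k] - early.get(k, 0), reverse=True)
print(rising[:10])
```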

Korean Named Entity Recognition using BERT (BERT 를 활용한 한국어 개체명 인식기)

  • Hwang, Seokhyun;Shin, Seokhwan;Choi, Donggeun;Kim, Seonghyun;Kim, Jaieun
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.820-822 / 2019
  • A named entity is a word or phrase that carries a specific meaning in a document, such as a person, organization, location, date, or time, and named entity recognition is the task of finding such entities and determining their semantic categories. This paper proposes a Korean named entity recognizer based on BERT (Bidirectional Encoder Representations from Transformers). By exploiting a pre-trained BERT model, the proposed recognizer maximizes performance, achieving a final F1 score of 90.62 and clearly outperforming a Bi-LSTM-Attention-CRF model.
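
The fine-tuning setup behind this kind of recognizer can be sketched as standard token classification. The checkpoint and the KLUE-style PS/OG/LC/DT/TI tag set below are stand-ins, not the paper's exact configuration.

```python
# Sketch of BERT-based Korean NER as token classification. The checkpoint and
# label set are assumptions standing in for the paper's setup.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

labels = ["O", "B-PS", "I-PS", "B-OG", "I-OG",
          "B-LC", "I-LC", "B-DT", "I-DT", "B-TI", "I-TI"]
model = AutoModelForTokenClassification.from_pretrained(
    "klue/bert-base", num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)})
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")

# After fine-tuning on a labeled NER corpus, tagging is a one-liner:
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("삼성전자는 1969년 수원에서 설립되었다."))
```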

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal / v.43 no.6 / pp.1038-1048 / 2021
  • We propose an end-to-end neural coreference resolution model for Korean that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates, based on the head-final characteristics of Korean, and to learn distributions over the referenced entity positions for each noun. Given the recent success of Bidirectional Encoder Representations from Transformers (BERT) in natural language processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicated that the proposed model achieved state-of-the-art performance in Korean coreference resolution.
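
The head-based pointer idea can be caricatured in a few lines: each noun attends over the nouns that precede it, plus a "no antecedent" slot, and the softmax over those scores is the learned distribution over referenced positions. This toy module is only a reading aid under those assumptions, not the paper's architecture.

```python
# Toy pointer head over BERT noun representations; not the paper's model.
import torch
import torch.nn as nn

class NounPointer(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.null = nn.Parameter(torch.zeros(1, hidden))  # "no antecedent" slot
        self.q = nn.Linear(hidden, hidden)
        self.k = nn.Linear(hidden, hidden)

    def forward(self, noun_vecs):                   # (num_nouns, hidden) from BERT
        n = noun_vecs.size(0)
        keys = torch.cat([self.null, noun_vecs])    # key 0 = no antecedent
        scores = self.q(noun_vecs) @ self.k(keys).T # (n, n + 1)
        # Noun i may point only to the null slot or to nouns before it.
        disallowed = torch.arange(n + 1)[None, :] > torch.arange(n)[:, None]
        scores = scores.masked_fill(disallowed, float("-inf"))
        return torch.log_softmax(scores, dim=-1)    # pointer distribution

print(NounPointer()(torch.randn(4, 768)).exp().sum(dim=-1))  # rows sum to 1
```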

Fine-tuning BERT Models for Keyphrase Extraction in Scientific Articles

  • Lim, Yeonsoo;Seo, Deokjin;Jung, Yuchul
    • Journal of Advanced Information Technology and Convergence / v.10 no.1 / pp.45-56 / 2020
  • Despite extensive research, performance enhancement of keyphrase (KP) extraction remains a challenging problem in modern informatics. Recently, deep learning-based supervised approaches have exhibited state-of-the-art accuracies with respect to this problem, and several of the previously proposed methods utilize Bidirectional Encoder Representations from Transformers (BERT)-based language models. However, few studies have investigated the effective application of BERT-based fine-tuning techniques to the problem of KP extraction. In this paper, we consider the aforementioned problem in the context of scientific articles by investigating the fine-tuning characteristics of two distinct BERT models - BERT (i.e., base BERT model by Google) and SciBERT (i.e., a BERT model trained on scientific text). Three different datasets (WWW, KDD, and Inspec) comprising data obtained from the computer science domain are used to compare the results obtained by fine-tuning BERT and SciBERT in terms of KP extraction.
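
One common way to fine-tune BERT-family models for KP extraction is B/I/O token labeling; the sketch below shows how the two checkpoints compared in this paper would be loaded under that framing. The sequence-labeling framing itself is an assumption, not quoted from the paper.

```python
# Loading the two compared checkpoints for KP extraction cast as B/I/O tagging.
from transformers import AutoTokenizer, AutoModelForTokenClassification

for checkpoint in ["bert-base-uncased", "allenai/scibert_scivocab_uncased"]:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(
        checkpoint, num_labels=3)  # B-KP, I-KP, O
    # ...fine-tune on WWW / KDD / Inspec, then compare exact-match keyphrase F1.
    print(checkpoint, model.config.num_labels)
```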

Korean CSAT Problem Solving with KoBigBird (KoBigBird를 활용한 수능 국어 문제풀이 모델)

  • Park, Nam-Jun;Kim, Jaekwang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.207-210 / 2022
  • Machine reading comprehension has recently become an active research area in natural language processing, yet examples of applying Korean machine reading comprehension to actual problem solving are hard to find. Earlier work applied artificial intelligence (AI) models to the English and mathematics sections of the Korean College Scholastic Ability Test (CSAT), but none addressed the Korean-language section. Moreover, the English and mathematics solvers scored only 12 and 16 points, respectively, which falls short of expectations given that the CSAT is multiple choice. This paper therefore trains a Transformer-based model on a Korean machine reading comprehension dataset and applies it to solving CSAT Korean-language questions. To this end, the dataset was transformed so that each answer choice of a multiple-choice item is rephrased as a question from which the model can derive an answer. In addition, to overcome the input-length limit of BERT (Bidirectional Encoder Representations from Transformers), KoBigBird, a Transformer-based model that can process longer inputs and is well suited to Korean machine reading comprehension, was adopted as the pre-trained model to improve performance.
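
A minimal sketch of the choice-as-question formulation follows, assuming the public monologg/kobigbird-bert-base checkpoint after fine-tuning on a Korean MRC dataset such as KorQuAD; the passage, the rephrased choices, and the confidence-based scoring rule are all illustrative assumptions.

```python
# Sketch: rephrase each multiple-choice option as a question, let a (fine-tuned)
# KoBigBird MRC model answer it against the long passage, and keep the most
# confident option. Checkpoint choice and scoring rule are assumptions.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

name = "monologg/kobigbird-bert-base"  # BigBird accepts inputs up to 4096 tokens
qa = pipeline("question-answering",
              model=AutoModelForQuestionAnswering.from_pretrained(name),
              tokenizer=AutoTokenizer.from_pretrained(name))

passage = "(long CSAT reading passage)"  # placeholder
choice_questions = ["(choice 1 rephrased as a question)",
                    "(choice 2 rephrased as a question)"]
scores = [qa(question=q, context=passage)["score"] for q in choice_questions]
print("predicted answer:", scores.index(max(scores)) + 1)
```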

Pilot Experiment for Named Entity Recognition of Construction-related Organizations from Unstructured Text Data

  • Baek, Seungwon;Han, Seung H.;Jung, Wooyong;Kim, Yuri
    • International conference on construction engineering and project management / 2022.06a / pp.847-854 / 2022
  • The aim of this study is to develop a Named Entity Recognition (NER) model that automatically identifies construction-related organizations in news articles. News articles were collected using a web-crawling technique, and construction-related organizations were labeled within a total of 1,000 articles. The Bidirectional Encoder Representations from Transformers (BERT) model was used to recognize clients, constructors, consultants, engineers, and others. In a pilot experiment, the best average F1 score of the NER model was 0.692. The results of this study are expected to contribute to the establishment of international business strategies by collecting timely information and analyzing it automatically.
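
For reference, the entity-level F1 reported here is conventionally computed as below; seqeval and the CLIENT/CONSTRUCTOR tag names are illustrative assumptions rather than the paper's code.

```python
# Entity-level F1 as conventionally reported for NER (seqeval's CoNLL scoring).
from seqeval.metrics import classification_report, f1_score

y_true = [["B-CLIENT", "I-CLIENT", "O", "B-CONSTRUCTOR", "O"]]
y_pred = [["B-CLIENT", "I-CLIENT", "O", "O", "O"]]
# One of two gold entities found exactly: precision 1.0, recall 0.5, F1 ~ 0.67.
print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```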

Discovering AI-enabled convergences based on BERT and topic network

  • Ji Min Kim;Seo Yeon Lee;Won Sang Lee
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.1022-1034 / 2023
  • Various aspects of artificial intelligence (AI) have recently become of significant interest to academia and industry. To satisfy these academic and industrial interests, it is necessary to comprehensively investigate AI-related trends across diverse areas. In this study, we identified and predicted emerging convergences using AI-associated research abstracts collected from the SCOPUS database. A Bidirectional Encoder Representations from Transformers (BERT)-based topic discovery technique was deployed to identify emerging topics related to AI. The discovered topics concern edge computing, biomedical algorithms, predictive defect maintenance, medical applications, fake-news detection with blockchain, explainable AI, and COVID-19 applications. Their convergences were further analyzed based on the shortest path between topics to predict emerging convergences. Our findings indicate emerging AI convergences toward healthcare, manufacturing, legal applications, and marketing. These findings are expected to have policy implications for facilitating convergences in diverse industries, and the study could contribute to the exploitation and adoption of AI-enabled convergences from a practical perspective.
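
The pipeline reads like BERT-based topic modeling plus graph analysis, so the sketch below pairs BERTopic with a similarity-thresholded topic graph in networkx. Whether the authors used these libraries, the 0.5 threshold, and the stand-in corpus (20 Newsgroups in place of the SCOPUS abstracts) are all assumptions.

```python
# Sketch: BERT-based topic discovery, then shortest paths over a topic network.
# BERTopic, networkx, the threshold, and the corpus are stand-ins.
import networkx as nx
import numpy as np
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]
topic_model = BERTopic()
topics, _ = topic_model.fit_transform(docs)

emb = np.asarray(topic_model.topic_embeddings_)        # one row per topic
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sim = emb @ emb.T                                      # cosine similarities

G = nx.Graph()
for i in range(len(sim)):
    for j in range(i + 1, len(sim)):
        if sim[i, j] > 0.5:                            # assumed threshold
            G.add_edge(i, j, weight=1.0 - sim[i, j])

# Convergence between two topics is read off the shortest path between them.
nodes = sorted(G.nodes)
if len(nodes) >= 2 and nx.has_path(G, nodes[0], nodes[1]):
    print(nx.shortest_path(G, nodes[0], nodes[1], weight="weight"))
```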

Improving Recognition of Patent's Claims with Deep Neural Networks (딥러닝 기반 특허의 종속 청구항 인식 개선)

  • Park, Ju-yeon;Shin, Yeji;Kim, Minsu;Kim, Dongho;Kim, Jihie
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.500-503 / 2020
  • As defining and protecting technology rights through patents becomes increasingly important, research on analyzing patent documents is also growing in importance. In particular, distinguishing dependent claims from independent claims and identifying the claims they cite is essential for analyzing related patents. This study uses BERT (Bidirectional Encoder Representations from Transformers), a language model that has recently driven dramatic performance improvements in text analysis, fine-tuning a neural network to classify claims as independent or dependent, and identifies the claim cited by each dependent claim through citation patterns consisting of a claim number and a citing phrase. Applying this method to US patent data in XML format from 2003 onward achieved an accuracy of 99%.
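
The citation-pattern half of the method lends itself to a compact sketch. The regular expression below is an illustrative guess at US-style claim language, and the fine-tuned BERT classifier it complements is omitted.

```python
# Sketch: detect dependent claims and the claim they cite via a citation
# pattern (claim number + citing phrase). The regex is an illustrative guess.
import re

CITATION = re.compile(r"\bclaim[s]?\s+(\d+)", re.IGNORECASE)

claims = [
    "1. A method for encoding text, comprising tokenizing an input sequence.",
    "2. The method of claim 1, wherein the tokenizing uses subword units.",
]
for text in claims:
    match = CITATION.search(text)
    if match:
        print("dependent, cites claim", match.group(1))
    else:
        print("independent")
```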

Korean Dependency Parsing Using Various Ensemble Models (다양한 앙상블 알고리즘을 이용한 한국어 의존 구문 분석)

  • Jo, Gyeong-Cheol;Kim, Ju-Wan;Kim, Gyun-Yeop;Park, Seong-Jin;Gang, Sang-U
    • Annual Conference on Human and Language Technology / 2019.10a / pp.543-545 / 2019
  • This paper combines state-of-the-art Korean dependency parsing models with various ensemble models and analyzes their performance. For word representations, pre-trained word embedding models, ELMo (Embeddings from Language Models), BERT (Bidirectional Encoder Representations from Transformers), and various additional features are used. The dependency parsing models employed are the Stack-Pointer Network model, the Deep Biaffine Attention parser, and the Left-to-Right Pointer parser. Finally, the outputs of the individual models are combined using the Bagging technique and XGBoost (Extreme Gradient Boosting) to propose an optimal ensemble.
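
A bagging-style combination of parser outputs can be as simple as a per-token vote over predicted head indices. The sketch below is a toy illustration under that assumption; the XGBoost stacking variant is omitted.

```python
# Toy per-token majority vote over head predictions from three parsers
# (stand-ins for the Stack-Pointer, Deep Biaffine, and Left-to-Right parsers).
from collections import Counter

parser_heads = [
    [2, 0, 2, 3],   # parser A: predicted head index for each token (0 = root)
    [2, 0, 2, 2],   # parser B
    [2, 0, 4, 3],   # parser C
]
voted = [Counter(column).most_common(1)[0][0] for column in zip(*parser_heads)]
print(voted)  # -> [2, 0, 2, 3]
```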
