• Title/Summary/Keyword: BERT

396 search results

Emerging Topic Detection Using Text Embedding and Anomaly Pattern Detection in Text Streaming Data (텍스트 스트리밍 데이터에서 텍스트 임베딩과 이상 패턴 탐지를 이용한 신규 주제 발생 탐지)

  • Choi, Semok;Park, Cheong Hee
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.9
    • /
    • pp.1181-1190
    • /
    • 2020
  • Detecting anomaly patterns that deviate from the normal data distribution in streaming data is an important technique in many application areas. This paper proposes a method for detecting newly emerging topics in text streaming data, an ordered sequence of texts, based on text embedding and anomaly pattern detection. The detection performance of the proposed method is compared across text embedding methods such as BOW (bag of words), Word2Vec, and BERT. Experimental results show that anomaly pattern detection with BERT embeddings achieved an average F1 of 0.85, reaching an F1 of 1.0 in three of the five test cases.
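
The detection step described above can be illustrated with a minimal sketch: embed each window of texts, then flag a window whose mean embedding drifts far from the centroid of the historical (normal) windows. The cosine-distance score and threshold here are assumptions for illustration, not the paper's actual detector:

```python
import numpy as np

def detect_emerging_topic(window_embeddings, history_embeddings, threshold=0.5):
    """Flag a text window as a newly emerging topic when its mean embedding
    lies far (in cosine distance) from the centroid of historical windows."""
    centroid = history_embeddings.mean(axis=0)
    window_mean = window_embeddings.mean(axis=0)
    cos = np.dot(centroid, window_mean) / (
        np.linalg.norm(centroid) * np.linalg.norm(window_mean))
    anomaly_score = 1.0 - cos          # 0 = identical direction, 2 = opposite
    return anomaly_score > threshold, anomaly_score
```

In practice the embeddings would come from BOW, Word2Vec, or BERT as in the paper; any sentence-embedding model with a fixed dimensionality plugs into the same interface.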

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal
    • /
    • v.43 no.6
    • /
    • pp.1038-1048
    • /
    • 2021
  • We propose an end-to-end neural coreference resolution model for the Korean language that uses an attention mechanism to point to mentions of the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to treat all nouns in the document as candidates, exploiting the head-final characteristics of Korean, and to learn distributions over the positions of the referenced entity for each noun. Given the recent success of bidirectional encoder representations from transformers (BERT) in natural language processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicate that the proposed model achieves state-of-the-art performance in Korean coreference resolution.
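
The pointing step can be sketched as dot-product attention over candidate noun heads: given a mention representation and the candidate representations, produce a distribution over antecedent positions. This omits the trained pointer-network weights and is only an illustration of the mechanism:

```python
import numpy as np

def point_to_antecedent(mention_vec, candidate_vecs):
    """Softmax attention over candidate noun-head vectors.
    Returns the probability distribution over antecedent positions
    and the index of the most likely antecedent."""
    scores = candidate_vecs @ mention_vec
    scores = scores - scores.max()               # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs, int(probs.argmax())
```

In the paper, mention and candidate vectors would be BERT-based contextual representations rather than raw feature vectors.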

A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay

  • Lee, Jung Hee;Park, Ji Su;Shon, Jin Gon
    • Journal of Information Processing Systems
    • /
    • v.18 no.2
    • /
    • pp.282-291
    • /
    • 2022
  • This research applies a pre-trained bidirectional encoder representations from transformers (BERT) handwriting recognition model to predict foreign Korean-language learners' writing scores. A corpus of 586 answers to midterm and final exams written by foreign learners at the Intermediate 1 level was acquired and used for pre-training, resulting in consistent performance, even with small datasets. The test data were pre-processed and fine-tuned, and the results were calculated in the form of a score prediction. The difference between the prediction and actual score was then calculated. An accuracy of 95.8% was demonstrated, indicating that the prediction results were strong overall; hence, the tool is suitable for the automatic scoring of Korean written test answers, including grammatical errors, written by foreigners. These results are particularly meaningful in that the data included written language text produced by foreign learners, not native speakers.
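
The abstract reports accuracy from the difference between predicted and actual scores. One plausible realization of such a criterion is accuracy within a tolerance band; the band itself is an assumption here, since the paper does not state its exact accuracy definition:

```python
def scoring_accuracy(predicted, actual, tolerance=0.5):
    """Fraction of essays whose predicted score falls within `tolerance`
    of the human-assigned score (tolerance band is illustrative)."""
    hits = sum(abs(p - a) <= tolerance for p, a in zip(predicted, actual))
    return hits / len(actual)
```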

A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.139-151
    • /
    • 2020
  • Entity name recognition is a subtask of information extraction that extracts entity names from documents and classifies the types of the extracted names. Entity name recognition technologies are widely used in natural language processing tasks such as information retrieval, machine translation, and question answering systems. Various deep learning-based models exist to improve entity name recognition performance, but studies comparing and analyzing these models on Korean data are insufficient. In this paper, we compare and analyze the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, which are actively used to identify entity names, on Korean data. We also evaluate whether the embedding models widely used in recent natural language processing tasks affect the performance of entity name recognition models. Experiments on patent data and a Korean corpus confirmed that BiLSTM-CRF with FastText embeddings showed the highest performance.
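
The CRF-based taggers compared above (CRF, LSTM-CRF, BiLSTM-CRF) all share the same decoding step: Viterbi search for the highest-scoring tag sequence given per-token emission scores and tag-to-tag transition scores. A minimal sketch with toy scores:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best tag path under a linear-chain CRF score.
    emissions[t, y]       = score of tag y at position t
    transitions[y_prev, y] = score of moving from tag y_prev to tag y"""
    T, K = emissions.shape
    dp = emissions[0].copy()                 # best score ending in each tag
    back = np.zeros((T, K), dtype=int)       # backpointers
    for t in range(1, T):
        cand = dp[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0)
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):            # follow backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In a BiLSTM-CRF, the emission scores would come from a BiLSTM over FastText (or other) embeddings; the decoding itself is unchanged.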

Process for Automatic Requirement Generation in Korean Requirements Documents using NLP Machine Learning (NLP 기계 학습을 사용한 한글 요구사항 문서에서의 요구사항 자동 생성 프로세스)

  • Young Yun Baek;Soo Jin Park;Young Bum Park
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.1
    • /
    • pp.88-93
    • /
    • 2023
  • In software engineering, requirement analysis is an important task throughout the process and accounts for a large proportion of it. However, requirement analysis fails for reasons including communication breakdowns, differing understandings of the meaning of requirements, and requirements not being carried out as specified. To address this problem, we derive actors and behaviors from Korean requirement documents using morpheme analysis and the BERT algorithm and organize them into an ontology. A chatbot system built on the ontology data derives a final system event list through Q&A with users. The chatbot system generates a requirement diagram and a requirement specification from the derived system event list and provides them to the user. Through this system, diagrams and specifications with a level of coverage compliant with the Korean requirement documents were created.
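
The actor/behavior extraction step can be sketched as collecting noun–verb pairs from morpheme-tagged requirement sentences into a simple dictionary ontology. The tagset and pairing rule here are toy assumptions standing in for the paper's morpheme analysis + BERT pipeline:

```python
def build_ontology(tagged_sentences):
    """Collect an actor -> {behaviors} mapping from morpheme-tagged
    requirement sentences: the first NOUN is taken as the actor,
    subsequent VERBs as its behaviors (illustrative rule only)."""
    ontology = {}
    for sent in tagged_sentences:
        actor = None
        for token, tag in sent:
            if tag == "NOUN" and actor is None:
                actor = token
            elif tag == "VERB" and actor is not None:
                ontology.setdefault(actor, set()).add(token)
    return ontology
```

The resulting mapping is the kind of structure a chatbot could query to confirm system events with the user.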


Machine Reading Comprehension-based Q&A System in Educational Environment (교육환경에서의 기계독해 기반 질의응답 시스템)

  • Jun-Ha Ju;Sang-Hyun Park;Seung-Wan Nam;Kyung-Tae Lim
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.541-544
    • /
    • 2022
  • Since COVID-19, education has shifted from offline to online. However, online lecture services are limited in their support for real-time communication. To address this shortcoming, this paper proposes a machine reading comprehension-based real-time lecture question-answering system. To build the system, we fine-tuned BERT on the KorQuAD 1.0 training data and used the result to construct a machine reading comprehension-based question-answering system. However, because a chatbot built this way is not optimized for questions about lecture content, we constructed a sentence-style dataset of lecture-content questions and answers and performed additional training to address the problem. The experimental results confirm that performance on sentence-style answers improved.
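
A KorQuAD-style extractive reader answers by selecting a span of the passage. The standard decoding step, given the model's start and end logits per token, is to pick the (start, end) pair maximizing the summed logits under a length constraint. A minimal sketch (the length limit is an illustrative choice):

```python
import numpy as np

def best_answer_span(start_logits, end_logits, max_len=15):
    """Pick (s, e) maximizing start_logits[s] + end_logits[e]
    subject to s <= e < s + max_len."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best
```

In the described system, the logits would come from BERT fine-tuned on KorQuAD 1.0 plus the lecture-content dataset.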


Zero-Shot Readability Assessment of Korean ESG Reports using BERT (BERT를 활용한 한국어 지속가능경영 보고서의 제로샷 가독성 평가)

  • Son, Guijin;Yoon, Naeun;Lee, Kaeun
    • Annual Conference of KIPS
    • /
    • 2022.05a
    • /
    • pp.456-459
    • /
    • 2022
  • Following recent trends in natural language AI research, this study proposes two methods for evaluating the readability of Korean reports through semantic analysis using a pre-trained language model. In the course of the study, we use a pre-trained language model to embed sentences into vectors without additional training and extract two indicators from them: (1) semantic complexity and (2) internal sentiment volatility. We further confirm that both indicators are positively correlated with the readability of Korean reports. This study is meaningful in that it departs from existing readability evaluation methods, which relied heavily on syntactic analysis and labeled data, and approximates existing readability indices without any additional training.
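
The two indicators can be sketched from sentence-level embeddings and sentiment scores. These particular formulas (consecutive-sentence cosine distance; standard deviation of sentiment) are plausible realizations, not necessarily the paper's exact definitions:

```python
import numpy as np

def semantic_complexity(sent_embeddings):
    """Mean cosine distance between consecutive sentence embeddings:
    larger values = bigger semantic jumps between sentences."""
    e = sent_embeddings / np.linalg.norm(sent_embeddings, axis=1, keepdims=True)
    sims = (e[:-1] * e[1:]).sum(axis=1)      # pairwise cosine similarities
    return float((1.0 - sims).mean())

def sentiment_volatility(sent_scores):
    """Standard deviation of per-sentence sentiment scores."""
    return float(np.std(sent_scores))
```

Both operate zero-shot: the embeddings and sentiment scores come from a frozen pre-trained model, with no task-specific training.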

Sentimental Analysis of YouTube Korean Subscripts Using KoBERT (KoBERT기반 Youtube 자막 감정 분석 연구)

  • Choi, Da-Eun;Kim, Hyo-Min;Lee, Hae-Rin;Hwang, Yu-Rim
    • Annual Conference of KIPS
    • /
    • 2022.05a
    • /
    • pp.513-516
    • /
    • 2022
  • With the rapid growth of YouTube users, many people are exposed to indiscriminate videos by the YouTube recommendation algorithm. This can negatively affect YouTube users and, furthermore, foster a socially immature media culture. This paper makes a first attempt at sentiment analysis of YouTube content. Specifically, we apply existing natural language processing-based sentiment analysis techniques to YouTube content subtitles and analyze their performance.

Discovering AI-enabled convergences based on BERT and topic network

  • Ji Min Kim;Seo Yeon Lee;Won Sang Lee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.1022-1034
    • /
    • 2023
  • Various aspects of artificial intelligence (AI) have become of significant interest to academia and industry in recent times. To satisfy these academic and industrial interests, it is necessary to comprehensively investigate trends in AI-related changes across diverse areas. In this study, we identified and predicted emerging convergences using AI-related research abstracts collected from the SCOPUS database. A bidirectional encoder representations from transformers (BERT)-based topic discovery technique was deployed to identify emerging topics related to AI. The topics discovered concern edge computing, biomedical algorithms, predictive defect maintenance, medical applications, fake news detection with blockchain, explainable AI, and COVID-19 applications. Their convergences were further analyzed based on the shortest paths between topics to predict emerging convergences. Our findings indicate emerging AI convergences toward healthcare, manufacturing, legal applications, and marketing. These findings are expected to have policy implications for facilitating convergences in diverse industries. Potentially, this study could contribute to the exploitation and adoption of AI-enabled convergences from a practical perspective.
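
The shortest-path analysis over the topic network can be sketched with a plain breadth-first search: topics that lie few hops apart in the co-occurrence network are candidates for convergence. The graph structure here is a toy stand-in for the paper's topic network:

```python
from collections import deque

def topic_distance(graph, src, dst):
    """Unweighted shortest-path length (BFS) between two topics in a
    topic network; returns -1 when no path exists. Shorter distances
    suggest likelier convergence between the topic areas."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1
```

A weighted variant (e.g. Dijkstra over edge weights derived from topic similarity) would follow the same pattern.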