• Title/Summary/Keyword: BERT

Search Results: 396

Korean Head-Tail Tokenization and Part-of-Speech Tagging by using Deep Learning (딥러닝을 이용한 한국어 Head-Tail 토큰화 기법과 품사 태깅)

  • Kim, Jungmin;Kang, Seungshik;Kim, Hyeokman
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.4 / pp.199-208 / 2022
  • Korean is an agglutinative language in which one or more morphemes combine to form a single word. A part-of-speech tagging method separates each morpheme in a word and attaches a part-of-speech tag. In this study, we propose a new Korean part-of-speech tagging method based on the Head-Tail tokenization technique, which divides a word into a lexical morpheme part and a grammatical morpheme part without decomposing compound words. In this method, the Head and Tail are divided at a syllable boundary without restoring irregularly inflected or abbreviated syllables. A Korean part-of-speech tagger was implemented using Head-Tail tokenization and deep learning. To address the problem that segmentation produces a large number of complex tags and lowers tagging accuracy, we reduced the tag set to complex tags composed of coarse classification tags, which improved tagging accuracy. The performance of the Head-Tail part-of-speech tagger was evaluated using BERT, syllable bigram, and subword bigram embeddings; both syllable bigram and subword bigram embeddings improved performance compared to plain BERT. Part-of-speech tagging performed by integrating the Head-Tail tokenization model and the simplified part-of-speech tagging model achieved 98.99% word-unit accuracy and 99.08% token-unit accuracy. The experiments also showed that tagging performance improved when the maximum token length was limited to twice the number of words.
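
  • Illustration: a minimal sketch of the Head-Tail split described above. The eojeol examples, boundary positions, and coarse tags are hypothetical; in the paper the boundary and the complex tags are predicted by the deep learning tagger.

```python
# Head-Tail idea: each eojeol (space-separated word) is split at a syllable
# boundary into a lexical Head and a grammatical Tail, without decomposing
# compound words or restoring irregular/contracted syllables.
# Boundaries and tags below are illustrative assumptions, not the authors' data.

def head_tail_split(eojeol: str, boundary: int):
    """Split an eojeol at a syllable (character) boundary; the Tail may be empty."""
    return eojeol[:boundary], eojeol[boundary:]

# hypothetical examples: (word, boundary index, (head tag, tail tag))
examples = [("학교에", 2, ("NN", "JC")), ("갔다", 1, ("VV", "EM"))]

for word, boundary, (head_tag, tail_tag) in examples:
    head, tail = head_tail_split(word, boundary)
    tokens = [(head, head_tag)] + ([(tail, tail_tag)] if tail else [])
    print(word, "->", tokens)   # e.g. 학교에 -> [('학교', 'NN'), ('에', 'JC')]
```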

Implementation of Git's Commit Message Complex Classification Model for Software Maintenance

  • Choi, Ji-Hoon;Kim, Joon-Yong;Park, Seong-Hyun
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.131-138 / 2022
  • Git commit messages are closely related to the project life cycle, and because of this they can contribute greatly to cost reduction and improved work efficiency by identifying risk factors and the status of project operation activities. Many studies in this area classify commit messages by type of software maintenance, and the highest accuracy reported among them is 87%. In this paper, we design and implement a composite classification model that combines several models in order to raise the accuracy of previously published models and increase the model's reliability. A dataset was constructed through automated labeling and extraction of source changes, and the model was trained using DistilBERT. Verification yielded an F1 score of 95%, 8 percentage points higher than the maximum of 87% reported in previous studies, thereby securing reliability. We expect these results to increase the reliability of the model and to be applicable to solutions such as software and project management.
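
  • As a rough illustration of the classification setup described above, the sketch below fine-tunes DistilBERT on labeled commit messages with the Hugging Face Trainer. The CSV file name, column layout, and maintenance label set are assumptions for the example, not the paper's actual data or its composite architecture.

```python
# Hedged sketch: fine-tune DistilBERT to classify commit messages by maintenance type.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

labels = ["corrective", "adaptive", "perfective"]          # assumed label set
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(labels))

# assumed file with columns "text" (commit message) and "label" (integer class id)
ds = load_dataset("csv", data_files="commits_labeled.csv")["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True,
                          padding="max_length", max_length=128), batched=True)
ds = ds.train_test_split(test_size=0.1)

args = TrainingArguments(output_dir="commit-clf", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=ds["train"],
        eval_dataset=ds["test"]).train()
```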

Analysis of interest in non-face-to-face medical counseling of modern people in the medical industry (의료 산업에 있어 현대인의 비대면 의학 상담에 대한 관심도 분석 기법)

  • Kang, Yooseong;Park, Jong Hoon;Oh, Hayoung;Lee, Se Uk
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.11 / pp.1571-1576 / 2022
  • This study analyzes the interest of modern people in non-face-to-face medical counseling in the medical industry. Big data was collected from two social platforms: 지식인, a platform where users can receive medical counseling from experts, and YouTube. A dataset was built from each platform using a total of eight search terms: the top five telephone-counseling keywords ("internal medicine", "general medicine", "department of neurology", "department of mental health", and "pediatrics") plus "specialist", "medical counseling", and "health information". Pre-processing steps such as morpheme analysis, disease extraction, and normalization were then applied to the crawled data. The data was visualized with word clouds, line graphs, quarterly graphs, and bar graphs of disease frequency based on word frequency. An emotion classification model was built for the YouTube data only, and the performance of GRU-based and BERT-based models was compared.
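
  • For reference, a minimal sketch of the GRU side of such a comparison is shown below (PyTorch). The vocabulary size, dimensions, and binary emotion classes are assumptions; the paper's actual preprocessing and its BERT counterpart are not reproduced here.

```python
# Hedged sketch of a bidirectional GRU emotion classifier over token ids.
import torch
import torch.nn as nn

class GRUSentiment(nn.Module):
    def __init__(self, vocab_size=32000, embed_dim=128, hidden=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden * 2, num_classes)

    def forward(self, token_ids):                 # (batch, seq_len) of token ids
        x = self.embed(token_ids)
        _, h = self.gru(x)                        # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)       # concatenate both directions
        return self.fc(h)

logits = GRUSentiment()(torch.randint(1, 32000, (4, 50)))  # dummy batch
print(logits.shape)  # torch.Size([4, 2])
```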

A Named Entity Recognition Platform Based on Semi-Automatically Built NE-annotated Corpora and KoBERT (반자동구축된 개체명 주석코퍼스 DecoNAC과 KoBERT를 이용한 개체명인식 플랫폼 DecoNERO)

  • Kim, Shin-Woo;Hwang, Chang-Hoe;Yoon, Jeong-Woo;Lee, Seong-Hyeon;Choi, Soo-Won;Nam, Jee-Sun
    • Annual Conference on Human and Language Technology / 2020.10a / pp.304-309 / 2020
  • In this study, the NE-annotated corpus DecoNAC was built semi-automatically on the basis of the Korean electronic dictionary DECO (Dictionnaire Electronique du COreen) and a Local-Grammar Graph (LGG) framework that describes multi-word expression (MWE) named entities as partial patterns. We introduce DecoNERO, a named entity recognition platform that uses this corpus both for named entity analysis and as domain-specific training data for machine learning. Machine learning methods recently reported to perform well require large-scale training data covering diverse domains. This study proposes a methodology for semi-automatically generating such training data from a carefully designed named entity dictionary and language resources for multi-word named entity sequences. To evaluate the proposed DecoNAC-based approach, experiments were conducted on online news article text. The experiments confirmed that applying DecoNAC can be expected to yield a performance improvement of about 7.49% compared with recognizing named entities using the KoBERT model alone.
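
  • The sketch below illustrates the general idea of semi-automatic pre-annotation: a curated entity lexicon is projected onto raw text to produce BIO tags that can seed training data. The tiny dictionary, labels, and sentence are placeholders standing in for the much richer DECO/LGG resources used for DecoNAC.

```python
# Hedged sketch: longest-match dictionary projection producing character-level BIO tags.
lexicon = {"서울대학교병원": "ORG", "코로나19": "DIS"}   # hypothetical lexicon entries

def pre_annotate(sentence: str):
    """Assign B-/I- tags wherever a lexicon entry matches unannotated text."""
    tags = ["O"] * len(sentence)
    for entity, label in sorted(lexicon.items(), key=lambda kv: -len(kv[0])):
        start = sentence.find(entity)
        while start != -1:
            if all(t == "O" for t in tags[start:start + len(entity)]):
                tags[start] = f"B-{label}"
                for i in range(start + 1, start + len(entity)):
                    tags[i] = f"I-{label}"
            start = sentence.find(entity, start + 1)
    return list(zip(sentence, tags))

print(pre_annotate("서울대학교병원은 코로나19 대응을 발표했다"))
```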

Evaluating Korean Machine Reading Comprehension Generalization Performance using Cross and Blind Dataset Assessment (기계독해 데이터셋의 교차 평가 및 블라인드 평가를 통한 한국어 기계독해의 일반화 성능 평가)

  • Lim, Joon-Ho;Kim, Hyunki
    • Annual Conference on Human and Language Technology / 2019.10a / pp.213-218 / 2019
  • Machine reading comprehension (MRC) is the task of finding, given a question and a passage expressed in natural language, the answer expressed within that passage. Recently, as in other natural language processing tasks, approaches that use pre-trained language models such as BERT, XLNet, and RoBERTa and fine-tune them to predict answer boundaries for a given question and passage have shown excellent performance, reaching over 94% F1 when trained and evaluated on the KorQuAD v1.0 dataset. This paper evaluates how well current state-of-the-art MRC technology generalizes to general question-passage pairs rather than to evaluation sets that resemble the training set. To this end, we first performed cross-dataset evaluation using the publicly available Korean KorQuAD v1.0 dataset, the NIA v2017 dataset, and the Exobrain v2018 dataset built in the Exobrain project. The cross evaluation showed that generalization performance is related to dataset statistics such as answer length and the lexical overlap ratio between question and passage. Next, we performed blind evaluation of an MRC model trained with the KorBERT pre-trained language model and all 210,000 available MRC training examples. The blind evaluation measured the generalization of the general-domain model on a legal-domain evaluation set, and its performance on an evaluation set in which questions were written first and the answer passages were then retrieved, rather than writing questions after reading the answer passage. The results showed that models using a pre-trained language model generalize far better than models without one, but that performance remains below 80% on evaluation sets with long answers and low lexical overlap between question and passage. Overall, the experiments confirm that, by the nature of the MRC task, difficulty and generalization performance vary with the lexical overlap between question and answer and with answer length, and that developing MRC models for general questions and passages requires generalization evaluation on diverse types of evaluation sets.
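
  • As a small worked example of one of the dataset statistics mentioned above, the sketch below computes a naive lexical overlap ratio between a question and a passage using whitespace tokens; the paper's exact tokenization and measure may differ.

```python
# Naive question-passage lexical overlap: shared tokens over question tokens.
def overlap_ratio(question: str, passage: str) -> float:
    q_tokens, p_tokens = set(question.split()), set(passage.split())
    return len(q_tokens & p_tokens) / max(len(q_tokens), 1)

question = "대한민국의 수도는 어디인가"
passage = "대한민국의 수도는 서울이며, 한강을 끼고 있다"
print(f"overlap = {overlap_ratio(question, passage):.2f}")   # 0.67 for this toy pair
```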

Improving Accuracy of Noise Review Filtering for Places with Insufficient Training Data

  • Hyeon Gyu Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.7 / pp.19-27 / 2023
  • In the process of collecting social reviews, a number of noise reviews irrelevant to a given search keyword can be included in the search results. Machine learning can be used to filter out such reviews, but if the number of reviews for a target place is insufficient, filtering accuracy can degrade due to the lack of training data. To resolve this issue, we propose a supervised learning method that improves the accuracy of noise review filtering for places with insufficient reviews. In the proposed method, training is not performed per individual place but per group of places with similar characteristics. The classifier obtained from this training can be used for noise review filtering of any place belonging to the group, so the problem of insufficient training data is resolved. To verify the proposed method, noise review filtering models were implemented using LSTM and BERT, and filtering accuracy was checked through experiments on real data collected online. The experimental results show that the proposed method achieved 92.4% accuracy on average and 87.5% accuracy for places with fewer than 100 reviews.
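
  • The sketch below illustrates the group-level training idea with a deliberately simple stand-in model (TF-IDF plus logistic regression instead of the paper's LSTM/BERT models); the grouping key, the tiny review list, and the noise labels are assumptions for the example.

```python
# Hedged sketch: one noise filter per group of similar places, shared by all
# places in the group, so places with few reviews still get a trained classifier.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# each review: (place_group, text, is_noise)
reviews = [
    ("cafe", "커피가 맛있고 분위기가 좋아요", 0),
    ("cafe", "중고 노트북 팝니다 연락주세요", 1),
    ("cafe", "디저트 종류가 다양합니다", 0),
    ("cafe", "구독과 좋아요 부탁드립니다", 1),
]

by_group = defaultdict(lambda: ([], []))
for group, text, label in reviews:
    by_group[group][0].append(text)
    by_group[group][1].append(label)

filters = {}
for group, (texts, labels) in by_group.items():
    filters[group] = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

# any place belonging to the "cafe" group reuses the shared filter
print(filters["cafe"].predict(["신메뉴 라떼 후기입니다"]))
```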

Design and implementation of trend analysis system through deep learning transfer learning (딥러닝 전이학습을 이용한 경량 트렌드 분석 시스템 설계 및 구현)

  • Shin, Jongho;An, Suvin;Park, Taeyoung;Bang, Seungcheol;Noh, Giseop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.87-89 / 2022
  • Recently, as consumers spend more time at home due to COVID-19, time spent on digital consumption such as SNS and OTT services, which can easily be used non-face-to-face, has naturally increased. Since the outbreak of COVID-19 in 2019, digital consumption has nearly doubled, from 44% to 82%, and it is important to quickly and accurately identify and apply trends by analyzing consumer sentiment in this rapidly changing digital environment. However, there are practical limitations to implementing sentiment analysis services in small systems rather than large-scale systems, and few such services have actually been deployed. If even a small system could easily analyze consumer trends, it would be valuable in a rapidly changing society. In this paper, we propose a lightweight trend analysis system that builds its learning network through transfer learning (fine-tuning) of a BERT model and integrates a crawler for real-time data collection.
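
  • A rough sketch of what the serving loop of such a lightweight system might look like is shown below: a simple crawler hands newly collected text to a fine-tuned BERT classifier. The URL, the local checkpoint path, and the crawler logic are placeholders; the paper describes the crawler and fine-tuning only at a system level.

```python
# Hedged sketch: crawl, then classify with a fine-tuned BERT checkpoint.
import requests
from transformers import pipeline

classifier = pipeline("text-classification", model="./bert-trend-finetuned")  # hypothetical checkpoint

def crawl_posts(url: str) -> list[str]:
    """Placeholder crawler: in practice this would parse the page for post texts."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return [resp.text[:200]]          # stand-in for extracted post texts

for text in crawl_posts("https://example.com/posts"):
    print(classifier(text[:512])[0])  # e.g. {'label': 'POSITIVE', 'score': ...}
```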

A Study on Auto-Classification of Aviation Safety Data using NLP Algorithm (자연어처리 알고리즘을 이용한 위험기반 항공안전데이터 자동분류 방안 연구)

  • Sung-Hoon Yang;Young Choi;So-young Jung;Joo-hyun Ahn
    • Journal of Advanced Navigation Technology / v.26 no.6 / pp.528-535 / 2022
  • Although the domestic aviation industry has made rapid progress with the development of aircraft manufacturing and transportation technologies, aviation safety accidents continue to occur. The supervisory agency classifies hazards and risks based on risk-based aviation safety data, identifies safety trends for each air transportation operator, and conducts pre-inspections to prevent events and accidents. However, manual classification of data written in natural language produces different results depending on the classifier's knowledge, experience, and disposition, and it takes considerable time to understand and classify the meaning of the content. Therefore, in this paper, a KoBERT model was fine-tuned on more than 5,000 records to predict the classification of new data, achieving 79.2% accuracy. In addition, some of the identical predictions and misclassifications for similar events were found to be caused by human error.
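
  • The sketch below shows how such a fine-tuned classifier might be applied to new safety reports and checked against manual labels. The checkpoint path, example reports, and numeric risk categories are placeholders; the paper fine-tunes KoBERT on more than 5,000 records.

```python
# Hedged sketch: classify new aviation safety reports with a fine-tuned checkpoint
# and compare against manually assigned categories.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "./kobert-aviation-finetuned"            # hypothetical fine-tuned checkpoint
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()

reports = ["활주로 침범이 발생하였으나 관제 지시로 복행함", "기내에서 경미한 난기류 부상 보고"]
manual_labels = [2, 0]                          # categories assigned by inspectors (illustrative)

enc = tok(reports, padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(dim=-1)

accuracy = (pred == torch.tensor(manual_labels)).float().mean().item()
print(pred.tolist(), f"accuracy={accuracy:.1%}")
```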

A Study on Book Recovery Method Depending on Book Damage Levels Using Book Scan (북스캔을 이용한 도서 손상 단계에 따른 딥 러닝 기반 도서 복구 방법에 관한 연구)

  • Kyungho Seok;Johui Lee;Byeongchan Park;Seok-Yoon Kim;Youngmo Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.4 / pp.154-160 / 2023
  • Recently, with the growth of eBook services, books are being published simultaneously as physical books and digitized eBooks. Paper books are more expensive than e-books because of printing and distribution costs, so demand for relatively inexpensive e-books is increasing. In some cases, previously published physical books cannot be digitized due to the circumstances of the publisher or author, so individual users have begun digitizing books that were published long ago. However, existing research has focused only on improving the pre-processing that raises text recognition quality before applying OCR, and digitization remains limited by the condition of the book. Support for book digitization services that accounts for the condition of the physical book is therefore needed. In this paper, we propose a method to support digitization services according to the condition of the physical books held by their owners: books are scanned to create images, text is extracted from the images through OCR, and text that cannot be extracted because of the book's condition is recovered using BERT, a deep learning model for natural language processing. The results confirm that the BERT-based recovery method is superior to an RNN-based approach, which is widely used in recommendation technology.
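
  • The recovery step can be illustrated with a masked-language-model fill-in, as sketched below. Here "klue/bert-base" is a stand-in Korean BERT checkpoint and the damaged sentence is invented; the paper's model and OCR pipeline may differ.

```python
# Hedged sketch: a span that OCR could not read is replaced with [MASK]
# and restored from context by a masked language model.
from transformers import pipeline

fill = pipeline("fill-mask", model="klue/bert-base")

damaged = "이 책은 한국 [MASK]의 역사를 다루고 있다"   # [MASK] marks the unreadable span
for cand in fill(damaged)[:3]:
    print(cand["token_str"], round(cand["score"], 3))
```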

Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data (적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법)

  • Mirr Shin;Youhyun Shin
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.174-180 / 2024
  • In this paper, we explore an emotion classification method based on multimodal learning with wav2vec 2.0 and KcELECTRA models. Multimodal learning that leverages both speech and text data is known to enhance emotion classification performance significantly compared with methods that rely on speech data alone. To select the text processing model, our study conducts a comparative analysis of BERT and its derivative models, which are known for their strong performance in natural language processing, for effective feature extraction from text data. The results confirm that the KcELECTRA model performs best on the emotion classification task. Furthermore, experiments using datasets made available by AI-Hub demonstrate that including text data achieves better performance with less data than using speech data alone. The experiments show that the KcELECTRA model achieved the highest accuracy of 96.57%. This indicates that multimodal learning can offer meaningful performance improvements in complex natural language processing tasks such as emotion classification.
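
  • A minimal sketch of the fusion idea is given below: mean-pooled wav2vec 2.0 and KcELECTRA embeddings are concatenated and passed to a small classification head. The checkpoint names, the dummy one-second waveform, and the 7-class output are assumptions; the paper's actual architecture and training procedure are not reproduced.

```python
# Hedged sketch: concatenate speech and text embeddings for emotion classification.
import torch
import torch.nn as nn
from transformers import (AutoTokenizer, AutoModel,
                          Wav2Vec2FeatureExtractor, Wav2Vec2Model)

text_ckpt, speech_ckpt = "beomi/KcELECTRA-base", "facebook/wav2vec2-base"
tok = AutoTokenizer.from_pretrained(text_ckpt)
text_enc = AutoModel.from_pretrained(text_ckpt).eval()
fe = Wav2Vec2FeatureExtractor.from_pretrained(speech_ckpt)
speech_enc = Wav2Vec2Model.from_pretrained(speech_ckpt).eval()

head = nn.Linear(text_enc.config.hidden_size + speech_enc.config.hidden_size, 7)

text = "오늘 정말 기분이 좋아요"
waveform = torch.zeros(16000)                       # 1 s of silence as a dummy signal

with torch.no_grad():
    t = text_enc(**tok(text, return_tensors="pt")).last_hidden_state.mean(dim=1)
    a_in = fe(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
    a = speech_enc(**a_in).last_hidden_state.mean(dim=1)
    logits = head(torch.cat([t, a], dim=-1))

print(logits.shape)   # torch.Size([1, 7])
```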