• Title/Summary/Keyword: BERT


Phases of Alienation in Le Torrent by Anne Hébert (안느 에베르의 중·단편집 『격류』에 드러나는 소외의 시대상)

  • Kang, Choung-Kwon
    • Cross-Cultural Studies
    • /
    • v.39
    • /
    • pp.7-32
    • /
    • 2015
  • In 1950, Anne Hébert published Le Torrent, a collection of seven short stories. These stories, containing many shocking themes and expressions, have placed her among the pioneers of the modern novel in Quebec. This paper analyzes several phases of alienation depicted in the stories and the reactions of the alienated characters to their situation. Examples of alienated and mentally or physically deformed characters in Le Torrent include François, Stéphanie, and Stella. Although the author invited readers to interpret these characters on an individual level, this paper interprets them differently. The results of this study are as follows. Alienation comes not from one's interior but from one's exterior; society and history are its major agents. The injustice of life imposed on the characters results from the political and religious underdevelopment, cultural backwardness, and absence of a social security system and universal education at that time. The conquest of Quebec by England left a deep historical wound on French Canadians; this fact is, in my opinion, one of the essential themes of Anne Hébert's novels. In spite of all these alienating situations, the reactions shown by the characters are limited to escapist illusion, self-destruction, mistaken revenge, eternal submission, etc. In conclusion, Le Torrent by Anne Hébert, which deeply explores themes of violence and alienation, can be called an authentic landscape of the inner world of the Québécois before the Révolution tranquille.

Sentiment Analysis and Data Visualization of U.S. Public Companies' Disclosures using BERT (BERT를 활용한 미국 기업 공시에 대한 감성 분석 및 시각화)

  • Kim, Hyo Gon;Yoo, Dong Hee
    • The Journal of Information Systems
    • /
    • v.31 no.3
    • /
    • pp.67-87
    • /
    • 2022
  • Purpose This study quantified companies' views on the COVID-19 pandemic through sentiment analysis of U.S. public companies' disclosures. It aims to provide timely insights to shareholders, investors, and consumers by analyzing and visualizing sentiment changes over time as well as similarities and differences by industry. Design/methodology/approach From more than fifty thousand Form 10-K and Form 10-Q filings published between 2020 and 2021, we extracted over one million texts related to the COVID-19 pandemic. Using the FinBERT language model fine-tuned on the finance domain, we conducted sentiment analysis of the texts, quantifying and classifying each as positive, negative, or neutral. In addition, we illustrated the analysis results using various visualization techniques for easy understanding. Findings The analysis indicated that U.S. public companies' overall sentiment changed over time as the COVID-19 pandemic progressed. Positive sentiment gradually increased and negative sentiment tended to decrease over time, but neutral sentiment showed no trend. When comparing sentiment by industry, the patterns of change in the amount of positive and negative sentiment and their time-series changes were similar across all industries, but industries differed in neutral sentiment.
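
A minimal sketch of the kind of FinBERT-based sentence classification this paper describes, using the Hugging Face `transformers` pipeline. The `ProsusAI/finbert` checkpoint and the example sentences are assumptions for illustration; the authors' exact fine-tuned model and preprocessing are not specified here.

```python
# Minimal sketch: classify disclosure sentences as positive/negative/neutral
# with a finance-domain BERT. The checkpoint name is an assumption; the
# paper's exact fine-tuned model is not given in the abstract.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ProsusAI/finbert",  # assumed finance-domain FinBERT checkpoint
)

texts = [
    "The COVID-19 pandemic materially disrupted our supply chain.",
    "Demand recovered strongly in the second half of 2021.",
]
for text, result in zip(texts, classifier(texts)):
    print(result["label"], f"{result['score']:.3f}", "-", text)
```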

Towards Improving Causality Mining using BERT with Multi-level Feature Networks

  • Ali, Wajid;Zuo, Wanli;Ali, Rahman;Rahman, Gohar;Zuo, Xianglin;Ullah, Inam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.10
    • /
    • pp.3230-3255
    • /
    • 2022
  • Causality mining in NLP is a significant area of interest that benefits many everyday applications, including decision making, business risk management, question answering, future event prediction, scenario generation, and information retrieval. Mining such causalities was a challenging, open problem for prior non-statistical and statistical techniques over web sources, which required hand-crafted linguistic patterns for feature engineering, were subject to domain knowledge, and demanded much human effort. Those studies focused on explicit causality mining and overlooked implicit, ambiguous, and heterogeneous causality. In contrast to statistical and non-statistical approaches, we present Bidirectional Encoder Representations from Transformers (BERT) integrated with Multi-level Feature Networks (MFN), called BERT+MFN, for causality recognition in noisy and informal web datasets without human-designed features. In our model, MFN consists of a three-column knowledge-oriented network (TC-KN), a bi-LSTM, and a Relation Network (RN) that mine causality information at the segment level, while BERT captures semantic features at the word level. We perform experiments on Alternative Lexicalization (AltLexes) datasets. The experimental outcomes show that our model outperforms baseline causality and text mining techniques.
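
The full BERT+MFN architecture (TC-KN, bi-LSTM, and Relation Network) is not reproduced here, but the core idea of pairing BERT's word-level features with a recurrent segment-level encoder can be sketched roughly as below. The hidden sizes, the single-bi-LSTM simplification, and the binary causal/non-causal head are all assumptions.

```python
# Hedged sketch: BERT token features feed a bi-LSTM segment encoder and a
# binary causal/non-causal classifier. The paper's full MFN is richer.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMClassifier(nn.Module):
    def __init__(self, encoder="bert-base-uncased", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # causal vs. non-causal

    def forward(self, input_ids, attention_mask):
        # Word-level semantic features from BERT
        token_feats = self.bert(input_ids,
                                attention_mask=attention_mask).last_hidden_state
        # Segment-level features from the bi-LSTM
        seq_feats, _ = self.lstm(token_feats)
        return self.head(seq_feats[:, 0])  # classify from the first position

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertBiLSTMClassifier()
batch = tokenizer(["The flood caused widespread damage."],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 2])
```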

Comparative Study of Keyword Extraction Models in Biomedical Domain (생의학 분야 키워드 추출 모델에 대한 비교 연구)

  • Donghee Lee;Soonchan Kwon;Beakcheol Jang
    • Journal of Internet Computing and Services
    • /
    • v.24 no.4
    • /
    • pp.77-84
    • /
    • 2023
  • Given the growing volume of biomedical papers, the ability to efficiently extract keywords has become crucial for accessing and responding to important information in the literature. In this study, we conduct a comprehensive evaluation of different unsupervised learning-based models and BERT-based models for keyword extraction in the biomedical field. Our experimental findings reveal that the BioBERT model, trained on biomedical-specific data, achieves the highest performance. This study offers precise and dependable insights to guide forthcoming research in biomedical keyword extraction. By establishing a well-suited experimental framework and conducting thorough comparisons and analyses of diverse models, we have furnished essential information. Furthermore, we anticipate extending our contributions to other domains by providing comparative experiments and practical guidelines for effective keyword extraction.
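
One common embedding-based recipe among the BERT-family keyword extractors such a comparison covers: score candidate phrases by cosine similarity between their BioBERT embeddings and the document embedding. The `dmis-lab/biobert-v1.1` checkpoint, mean pooling, and the toy candidate list below are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch of embedding-similarity keyword extraction with BioBERT.
import torch
from transformers import AutoModel, AutoTokenizer

name = "dmis-lab/biobert-v1.1"  # assumed public BioBERT checkpoint
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name)

def embed(text):
    # Mean-pooled token embeddings as a simple sentence/phrase vector
    with torch.no_grad():
        out = bert(**tok(text, return_tensors="pt", truncation=True))
    return out.last_hidden_state.mean(dim=1).squeeze(0)

doc = "Aspirin irreversibly inhibits cyclooxygenase, reducing platelet aggregation."
candidates = ["aspirin", "cyclooxygenase", "platelet aggregation", "reducing"]

doc_vec = embed(doc)
scores = {c: torch.cosine_similarity(doc_vec, embed(c), dim=0).item()
          for c in candidates}
for phrase, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {phrase}")
```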

Layerwise Semantic Role Labeling in KRBERT (KRBERT 임베딩 층에 따른 의미역 결정)

  • Seo, Hye-Jin;Park, Myung-Kwan;Kim, Euhee
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.617-621
    • /
    • 2021
  • Semantic role labeling (SRL) is an NLP technique that identifies the relation between a predicate and its arguments in a sentence, finding semantic roles such as 'who, what, how, and why.' Recent SRL research has mainly taken the approach of deep learning on annotated corpora. The pretrained Bidirectional Encoder Representations from Transformers (BERT) model developed by Google has shown remarkably high performance across many NLP tasks. In this paper, to improve Korean SRL performance, we examine an SRL model built on SNU KR-BERT, which is pretrained with the linguistic characteristics of Korean in mind. We also investigate which hidden layer of the BERT model performs Korean SRL best. In our experiments, the model using the last hidden layer embedding achieved 66.4%. Comparing performance across hidden layers, concatenating the last four hidden layers gave 67.9%, and using the 11th hidden layer gave 68.1%, both better than using the last hidden layer alone. However, when we drew heatmaps for each model, the model using the last hidden layer embedding made more accurate semantic role judgments.
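
The layerwise comparison can be reproduced in spirit with `output_hidden_states=True` in `transformers`: take the last hidden layer, the 11th layer, or the concatenation of the last four layers as token embeddings. The `snunlp/KR-BERT-char16424` checkpoint is an assumed public KR-BERT variant; the paper's exact model may differ.

```python
# Hedged sketch of extracting the layerwise embeddings compared in the paper.
import torch
from transformers import AutoModel, AutoTokenizer

name = "snunlp/KR-BERT-char16424"  # assumed KR-BERT checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

batch = tok("철수가 밥을 먹었다", return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).hidden_states  # tuple: embeddings + 12 layer outputs

last = hidden[-1]                       # last hidden layer
eleventh = hidden[11]                   # output of the 11th transformer layer
last4 = torch.cat(hidden[-4:], dim=-1)  # concatenation of the last four layers
print(last.shape, eleventh.shape, last4.shape)
```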

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho;Jin-Xia Huang;Oh-Woog Kwon
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.82-95
    • /
    • 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, specific trait assessment remains a challenge because earlier methods were limited in modeling dual assessments for holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, our DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in holistic score. Additionally, compared with single-task learning (STL) models, ours demonstrated a 3.6% improvement in average multi-trait performance on the ASAP++ dataset.
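
A hedged sketch of the multi-task idea only: a shared BERT encoder feeding one holistic-score head and one head per trait. The paper's DualBERT-Trans-CNN adds dual-scale document encoding and transformer/CNN layers over trait representations; the encoder name and trait count below are assumptions.

```python
# Sketch of joint holistic + multi-trait essay scoring with a shared encoder.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTraitScorer(nn.Module):
    def __init__(self, encoder="bert-base-uncased", n_traits=4):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder)
        dim = self.bert.config.hidden_size
        self.holistic = nn.Linear(dim, 1)  # one holistic-score head
        self.traits = nn.ModuleList(nn.Linear(dim, 1) for _ in range(n_traits))

    def forward(self, input_ids, attention_mask):
        # [CLS] representation shared by all scoring heads
        cls = self.bert(input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.holistic(cls), torch.cat([t(cls) for t in self.traits], dim=-1)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTraitScorer()
batch = tok(["The essay argues its thesis clearly ..."], return_tensors="pt")
holistic, traits = model(batch["input_ids"], batch["attention_mask"])
print(holistic.shape, traits.shape)  # torch.Size([1, 1]) torch.Size([1, 4])
```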

E-commerce data based Sentiment Analysis Model Implementation using Natural Language Processing Model (자연어처리 모델을 이용한 이커머스 데이터 기반 감성 분석 모델 구축)

  • Choi, Jun-Young;Lim, Heui-Seok
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.11
    • /
    • pp.33-39
    • /
    • 2020
  • In the field of natural language processing, research on tasks such as translation, POS tagging, question answering, and sentiment analysis is being carried out globally. Pretrained sentence-embedding models show high classification performance for sentiment analysis on English single-domain datasets. In this thesis, classification performance is compared on a Korean e-commerce dataset with diverse domain attributes, and six neural-net models are built: BOW (Bag of Words), LSTM[1], Attention, CNN[2], ELMo[3], and BERT (KoBERT)[4]. We confirm that pretrained sentence-embedding models outperform word-embedding models. In addition, a practical neural-net model composition is proposed after comparing classification performance on a dataset with 17 categories. As future work, we discuss compressing the sentence-embedding model, weighing inference time against model capacity for real-time service.
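
For context, the weakest baseline in such comparisons, a bag-of-words classifier, can be built in a few lines; the finding above is that pretrained sentence-embedding models such as KoBERT outperform this kind of word-count model. The toy reviews and labels below are invented for illustration.

```python
# Hedged sketch of a BOW sentiment baseline over Korean e-commerce reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["배송이 빠르고 좋아요", "품질이 너무 나빠요",
           "가격 대비 만족합니다", "최악이에요"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

bow_model = make_pipeline(CountVectorizer(), LogisticRegression())
bow_model.fit(reviews, labels)
print(bow_model.predict(["정말 좋아요"]))  # expected: [1]
```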

Deep Learning-based Target Masking Scheme for Understanding Meaning of Newly Coined Words

  • Nam, Gun-Min;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.10
    • /
    • pp.157-165
    • /
    • 2021
  • Recently, studies using deep learning to analyze large amounts of text have been actively conducted. In particular, pretrained language models, which apply the results of learning from a large corpus to the analysis of domain-specific text, are attracting attention. Among various pretrained language models, BERT (Bidirectional Encoder Representations from Transformers)-based models are the most widely used. Research is being conducted to improve analysis performance through further pre-training using BERT's MLM (Masked Language Model). However, traditional MLM has difficulty clearly capturing the meaning of sentences containing new words such as newly coined words. Therefore, in this study, we propose NTM (Newly coined words Target Masking), which performs masking only on new words. Analyzing about 700,000 movie reviews from portal 'N' with the proposed methodology, we confirmed that NTM outperforms existing random masking in sentiment analysis accuracy.
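
A rough sketch of the proposed NTM idea: instead of BERT's random masking, mask only tokens belonging to a list of newly coined words before computing the MLM loss. The multilingual checkpoint, the neologism list, and the example sentence are placeholders, not the paper's data.

```python
# Hedged sketch: mask only neologism tokens for MLM further pre-training.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-multilingual-cased"  # placeholder; the paper targets Korean
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

new_words = {"JMT"}  # assumed neologism list (hypothetical example)
text = "The food here is JMT"

enc = tok(text, return_tensors="pt")
labels = enc["input_ids"].clone()
target_ids = {i for w in new_words
              for i in tok(w, add_special_tokens=False)["input_ids"]}

# Mask only positions whose token id belongs to a newly coined word.
mask = torch.tensor([[tid.item() in target_ids for tid in enc["input_ids"][0]]])
enc["input_ids"][mask] = tok.mask_token_id
labels[~mask] = -100  # compute loss only on the masked neologism tokens

loss = model(**enc, labels=labels).loss
print(float(loss))
```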

A Deep Learning Model for Disaster Alerts Classification

  • Park, Soonwook;Jun, Hyeyoon;Kim, Yoonsoo;Lee, Soowon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.12
    • /
    • pp.1-9
    • /
    • 2021
  • Disaster alerts are text messages sent by the government to people in an affected area when a disaster occurs. As the number of disaster alerts has grown, more people are blocking them because many of the alerts they receive are unnecessary. To solve this problem, this study proposes a deep learning model that automatically classifies disaster alerts by disaster type so that recipients receive only the alerts they need. The proposed model embeds disaster alerts via KoBERT and classifies them by disaster type with an LSTM. Classifying disaster alerts using three part-of-speech combinations ([Noun], [Noun + Adjective + Verb], and [All parts]) and four classification models (the proposed model, keyword classification, Word2Vec + 1D-CNN, and KoBERT + FFNN), the proposed model achieved the highest performance with 0.988954 accuracy.
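
The part-of-speech filtering step compared above can be illustrated with a Korean morphological analyzer; konlpy's Okt tagger is an assumed stand-in, since the abstract does not name the tagger used.

```python
# Hedged sketch of the [Noun] vs. [Noun + Adjective + Verb] preprocessing
# variants compared in the paper, using konlpy's Okt tagger as a stand-in.
from konlpy.tag import Okt

okt = Okt()
alert = "오늘 14시 00분 호우경보 발령, 하천 주변 접근을 자제하세요"
print(okt.nouns(alert))                          # [Noun] variant
print([w for w, p in okt.pos(alert)
       if p in ("Noun", "Adjective", "Verb")])   # [Noun + Adjective + Verb]
```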

An Intelligent Chatbot Utilizing BERT Model and Knowledge Graph (BERT 모델과 지식 그래프를 활용한 지능형 챗봇)

  • Yoo, SoYeop;Jeong, OkRan
    • The Journal of Society for e-Business Studies
    • /
    • v.24 no.3
    • /
    • pp.87-98
    • /
    • 2019
  • As artificial intelligence is actively studied, it is being applied to various fields such as image, video, and natural language processing. Natural language processing in particular, which enables computers to understand languages spoken and written by people, is considered one of the most important areas of artificial intelligence. In natural language processing, it is complex but important to make computers learn a person's common sense and generate results based on it. Knowledge graphs, which link words through their relations, have the advantage that computers can easily learn common sense from them. However, existing knowledge graphs are built only around specific languages and fields and cannot respond to neologisms. In this paper, we propose an intelligent chatbot system that collects and analyzes data in real time to build an automatically growing knowledge graph and uses it as base data. In particular, a fine-tuned BERT-based model for relation extraction is applied to the auto-growing graph to improve performance. We developed a chatbot that can learn human common sense using the auto-growing knowledge graph, and we verified the availability and performance of the graph.
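
A hedged sketch of the pipeline idea: a BERT sequence classifier predicts the relation between two marked entities, and accepted triples grow a knowledge graph. The relation label set, entity markers, and the untrained classifier head are illustrative only; the paper fine-tunes BERT for relation extraction on real data.

```python
# Sketch: BERT relation classification feeding an auto-growing graph.
import networkx as nx
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

labels = ["capital_of", "founded_by", "no_relation"]  # assumed label set
name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
# NOTE: the classification head here is freshly initialized (untrained);
# in practice it would be fine-tuned on labeled relation data.
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(labels))

sentence = "[E1] Seoul [/E1] is the capital of [E2] South Korea [/E2]."
with torch.no_grad():
    pred = model(**tok(sentence, return_tensors="pt")).logits.argmax(-1).item()

graph = nx.DiGraph()
graph.add_edge("Seoul", "South Korea", relation=labels[pred])  # grow the graph
print(graph.edges(data=True))
```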