• Title/Summary/Keyword: Natural Language Inference

E-commerce data based Sentiment Analysis Model Implementation using Natural Language Processing Model (자연어처리 모델을 이용한 이커머스 데이터 기반 감성 분석 모델 구축)

  • Choi, Jun-Young;Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.11 no.11 / pp.33-39 / 2020
  • In the field of Natural Language Processing, research on tasks such as translation, POS tagging, question answering, and sentiment analysis is being carried out worldwide. With pretrained sentence embedding models, sentiment analysis shows high classification performance on English single-domain datasets. In this paper, classification performance is compared on a Korean e-commerce dataset with diverse domain attributes, and six neural network models are built: BOW (Bag of Words), LSTM[1], Attention, CNN[2], ELMo[3], and BERT (KoBERT)[4]. The results confirm that pretrained sentence embedding models outperform word embedding models. In addition, a practical neural network model composition is proposed after comparing classification performance on a dataset with 17 categories. Compressing the sentence embedding model, weighing inference time against model capacity for real-time service, is noted as future work.
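Below is a minimal sketch (ours, not the paper's) of the simplest baseline in the comparison above, a bag-of-words sentiment classifier. The placeholder reviews, labels, and the character n-gram choice are assumptions for illustration only.

```python
# Hedged sketch: a BOW baseline for e-commerce review sentiment, of the
# kind the paper compares against pretrained sentence-embedding models.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder Korean e-commerce reviews; 1 = positive, 0 = negative.
reviews = ["배송이 빨라요 아주 만족합니다", "품질이 너무 안 좋아요 실망입니다",
           "가격 대비 정말 만족스러운 상품이에요", "다시는 안 삽니다 최악이에요"]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.5, stratify=labels, random_state=42)

# Character n-grams sidestep Korean morphology for a quick BOW baseline.
vec = CountVectorizer(analyzer="char", ngram_range=(1, 3))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
pred = clf.predict(vec.transform(X_test))
print("BOW baseline accuracy:", accuracy_score(y_test, pred))
```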

Bilinear Graph Neural Network-Based Reasoning for Multi-Hop Question Answering (다중 홉 질문 응답을 위한 쌍 선형 그래프 신경망 기반 추론)

  • Lee, Sangui;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.8 / pp.243-250 / 2020
  • Knowledge graph-based question answering not only requires a deep understanding of the given natural language questions, but also effective reasoning to find the correct answers on a large knowledge graph. In this paper, we propose a deep neural network model for effective reasoning on a knowledge graph, which can find correct answers to complex questions requiring multi-hop inference. The proposed model makes use of a highly expressive bilinear graph neural network (BGNN), which can utilize context information between a pair of neighboring nodes and allows bidirectional feature propagation between each entity node and its neighboring nodes on a knowledge graph. Performing experiments with an open-domain knowledge base (Freebase) and two natural language question answering benchmark datasets (WebQuestionsSP and MetaQA), we demonstrate the effectiveness and performance of the proposed model.
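As a minimal sketch (not the paper's implementation) of the core interaction a bilinear GNN layer builds on, the snippet below computes a bilinear message between a node and one neighbor. The dimensions and the mean aggregation are illustrative assumptions.

```python
# Hedged sketch of a bilinear message between an entity node and a neighbor;
# the elementwise product of two transformed states captures pairwise
# (context) interaction rather than a simple additive message.
import torch
import torch.nn as nn

class BilinearMessage(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)  # transforms the neighbor
        self.U = nn.Linear(dim, dim, bias=False)  # transforms the node

    def forward(self, h_node: torch.Tensor, h_neighbor: torch.Tensor) -> torch.Tensor:
        return self.U(h_node) * self.W(h_neighbor)

# Usage: aggregate bilinear messages over a node's neighbors.
dim = 64
msg = BilinearMessage(dim)
h = torch.randn(dim)             # entity node state
neighbors = torch.randn(5, dim)  # 5 neighboring node states
out = torch.stack([msg(h, n) for n in neighbors]).mean(dim=0)
print(out.shape)                 # torch.Size([64])
```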

Comparison of the Power of Bootstrap Two-Sample Test and Wilcoxon Rank Sum Test for Positively Skewed Population

  • Heo, Sunyeong
    • Journal of Integrative Natural Science / v.15 no.1 / pp.9-18 / 2022
  • This research examines the power of the bootstrap two-sample test and compares it, through simulation, with the powers of the two-sample t-test and the Wilcoxon rank sum test. For the simulation, a positively skewed and heavy-tailed distribution was selected as the population distribution: the chi-square distribution with three degrees of freedom, χ²₃. For the two independent samples, the first sample was drawn from χ²₃. The second sample was drawn independently from the same χ²₃, and each sampled value x was transformed to d + ax. The shift d ranges from 0 to 5 in steps of 0.5, and the scale a from 1.0 to 1.5 in steps of 0.1. The powers of the three methods were evaluated for sample sizes 10, 20, 30, 40, and 50. The null hypothesis was equality of the two population medians for the bootstrap two-sample test and the Wilcoxon rank sum test, and equality of the two population means for the two-sample t-test. The powers were obtained using the R language: wilcox.test() in the R base package for the Wilcoxon rank sum test, t.test() in the R base package for the two-sample t-test, and boot.two.bca() in the R wBoot package for the bootstrap two-sample test. The simulation results show that the Wilcoxon rank sum test has the best power for all 330 (n, a, d) combinations, the two-sample t-test comes next, and the bootstrap two-sample test comes last. Based on these results, classical inference methods can be recommended whenever widely accepted and used methods exist, in terms of time, cost, and sometimes power.
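The paper's simulation used the R functions named above; the following is a hedged Python analogue of the same design. The replication count is an assumption, and the bootstrap branch is a plain percentile bootstrap on the median difference, a simplification of the BCa intervals that boot.two.bca() computes.

```python
# Hedged sketch: power comparison for samples x ~ chi^2_3 and y = d + a*x',
# x' ~ chi^2_3, following the (n, a, d) design described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def boot_median_test(x, y, B=500, alpha=0.05):
    """Simplified percentile bootstrap test for equal medians (not BCa)."""
    diffs = [np.median(rng.choice(x, x.size)) - np.median(rng.choice(y, y.size))
             for _ in range(B)]
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return not (lo <= 0.0 <= hi)  # reject if 0 lies outside the interval

def power(n, a, d, reps=1000, alpha=0.05):
    rej = np.zeros(3)  # Wilcoxon, Welch t-test, bootstrap
    for _ in range(reps):
        x = rng.chisquare(3, n)
        y = d + a * rng.chisquare(3, n)
        rej[0] += stats.mannwhitneyu(x, y).pvalue < alpha
        rej[1] += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha
        rej[2] += boot_median_test(x, y)
    return rej / reps

print(power(n=30, a=1.2, d=1.0))  # estimated powers for one (n, a, d) cell
```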

Modern Methods of Text Analysis as an Effective Way to Combat Plagiarism

  • Myronenko, Serhii;Myronenko, Yelyzaveta
    • International Journal of Computer Science & Network Security / v.22 no.8 / pp.242-248 / 2022
  • The article presents an analysis of modern methods for automatically comparing original and unoriginal text to detect textual plagiarism. The study covers two types of plagiarism: literal, in which the plagiarist copies the text exactly without changing anything, and intelligent, which uses more sophisticated techniques that are harder to detect because of text manipulation such as word and sign replacement. Standard techniques for extrinsic detection are string-based, vector space, and semantic-based. The most common and most successful models for detecting literal plagiarism, N-gram and Vector Space, are analyzed first, and their advantages and disadvantages are evaluated. The most effective models for detecting intelligent plagiarism, particularly for identifying paraphrases by measuring the semantic similarity of short components of the text, are then investigated. Models using neural network architectures and based on natural language sentence matching, such as the Densely Interactive Inference Network (DIIN), Bilateral Multi-Perspective Matching (BiMPM), and Bidirectional Encoder Representations from Transformers (BERT) and its family of models, are considered. The progress in improving plagiarism detection systems, techniques, and related models is summarized. Relevant and urgent problems that remain unresolved in detecting intelligent plagiarism, namely the effective recognition of unoriginal ideas and well-paraphrased text, are outlined.
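A minimal sketch of the two classic extrinsic techniques the article reviews: word n-gram overlap (string-based) for literal copying, and TF-IDF cosine similarity (vector space). The example texts and the choice of n are illustrative assumptions.

```python
# Hedged sketch: n-gram overlap and vector-space similarity, the two
# baseline families the article evaluates for literal plagiarism.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def ngram_overlap(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of word n-grams; high values suggest literal copying."""
    def grams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / max(len(ga | gb), 1)

original = "the quick brown fox jumps over the lazy dog"
suspect = "the quick brown fox leaps over the lazy dog"

print("n-gram overlap:", round(ngram_overlap(original, suspect), 3))

# Vector space model: TF-IDF vectors compared by cosine similarity.
tfidf = TfidfVectorizer().fit_transform([original, suspect])
print("cosine similarity:", round(float(cosine_similarity(tfidf[0], tfidf[1])[0, 0]), 3))
```

Paraphrase-level (intelligent) plagiarism defeats both of these, which is why the article turns to sentence-matching models such as DIIN, BiMPM, and BERT.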

Inference of Korean Public Sentiment from Online News (온라인 뉴스에 대한 한국 대중의 감정 예측)

  • Matteson, Andrew Stuart;Choi, Soon-Young;Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.9 no.7 / pp.25-31 / 2018
  • Online news has replaced the traditional newspaper and has brought about a profound transformation in the way we access and share information. News websites have long allowed users to post comments, and some have also begun to crowdsource reactions to news articles. The field of sentiment analysis seeks to computationally model the emotions and reactions experienced when reading text. In this work, we analyze more than 100,000 news articles across ten categories, each with five user-generated emotional annotations, to determine whether these reactions correlate mathematically with the news body text, and we propose a simple sentiment analysis algorithm that requires minimal preprocessing and no machine learning. We show that it is effective even for a morphologically complex language like Korean.
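In the spirit of the minimal-preprocessing, no-machine-learning approach described above, here is a hedged sketch of a lexicon-count scorer over the paper's five-way reaction setting. The tiny stem lexicon and the example text are placeholders, not the paper's resources.

```python
# Hedged sketch: count emotion-lexicon stem hits in the article body and
# normalize, yielding a reaction distribution to correlate with the
# crowdsourced annotations. Lexicon and text are illustrative placeholders.
import re

LEXICON = {"기쁘": "joy", "슬프": "sadness", "화나": "anger",
           "놀라": "surprise", "무섭": "fear"}

def score(text: str) -> dict:
    """Per-emotion stem counts, normalized by the total number of hits."""
    counts = {emo: 0 for emo in LEXICON.values()}
    for stem, emo in LEXICON.items():
        counts[emo] += len(re.findall(stem, text))
    total = sum(counts.values()) or 1
    return {emo: c / total for emo, c in counts.items()}

print(score("독자들은 기쁘다는 반응도 보였지만 놀라다는 반응이 더 많았다"))
```

Stem matching rather than full morphological analysis is what keeps preprocessing minimal for Korean, at the cost of some false matches.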

Fake News Detection Using CNN-based Sentiment Change Patterns (CNN 기반 감성 변화 패턴을 이용한 가짜뉴스 탐지)

  • Tae Won Lee;Ji Su Park;Jin Gon Shon
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.179-188 / 2023
  • Recently, fake news disguised as legitimate news content has appeared whenever important events occur, causing social confusion. Accordingly, artificial intelligence technology is being used in research to detect fake news. Approaches such as automatically recognizing and blocking fake news through natural language processing, or detecting social media influencer accounts that spread false information by combining network causal inference, can be implemented with deep learning. However, fake news detection is considered one of the harder problems in natural language processing. Because fake news varies widely in form and expression, feature extraction is difficult, and there are further limitations, such as a single feature carrying different meanings depending on the category the news belongs to. In this paper, sentiment change patterns are presented as an additional criterion for detecting fake news. We propose a model with improved performance by applying a convolutional neural network to a fake news dataset to analyze content characteristics, while additionally analyzing sentiment change patterns. Sentiment polarity is calculated for the sentences constituting the news, and a result dependent on sentence order is obtained by applying long short-term memory (LSTM). This is defined as the sentiment change pattern and combined with the content characteristics of the news as an independent variable in the proposed fake news detection model. We train the proposed and comparison models with deep learning and conduct experiments on a fake news dataset, confirming that sentiment change patterns can improve fake news detection performance.
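A hedged sketch of the two-branch idea described above: a CNN over token embeddings for content features, plus an LSTM over the per-sentence sentiment polarity sequence, concatenated for the fake/real decision. All dimensions and layer choices are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: content-feature CNN + sentiment-change-pattern LSTM.
import torch
import torch.nn as nn

class FakeNewsNet(nn.Module):
    def __init__(self, vocab=10000, emb=128, conv=64, lstm=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(1, lstm, batch_first=True)  # input: polarity scores
        self.head = nn.Linear(conv + lstm, 2)           # fake vs. real

    def forward(self, tokens, polarities):
        # Content branch: CNN over embedded tokens, max-pooled over time.
        c = self.conv(self.embed(tokens).transpose(1, 2)).amax(dim=2)
        # Sentiment-change branch: LSTM over the sentence polarity sequence.
        _, (h, _) = self.lstm(polarities.unsqueeze(-1))
        return self.head(torch.cat([c, h[-1]], dim=1))

model = FakeNewsNet()
tokens = torch.randint(0, 10000, (4, 200))  # 4 articles, 200 tokens each
polarities = torch.rand(4, 12)              # 12 sentence polarities per article
print(model(tokens, polarities).shape)      # torch.Size([4, 2])
```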

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Furthermore, Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value for business, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify which aspect terms or aspect categories are included in the text and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly inferred from an emotional word like 'expensive', is called an implicit aspect. So far, 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and mostly use 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, while ACSC treats both explicit and implicit aspects. This study seeks answers to the following issues, ignored in previous studies applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair input? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived.
In addition, it was found to be more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI-type input, and that in the QA type the order of the sentence containing the aspect category is irrelevant to performance. There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology used in this study for designing the ACSC models could be similarly applied to other tasks such as ATSC.
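The sketch below illustrates, under stated assumptions, the two design choices the study investigates: an NLI-style (review, aspect) sentence-pair input, and pooling the aspect tokens' output vectors instead of using only [CLS]. The checkpoint name is an assumption; a linear head over either vector would produce the sentiment label.

```python
# Hedged sketch: NLI-type sentence-pair input for ACSC with [CLS]-only
# pooling versus aspect-token pooling of the BERT outputs.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

review = "The restaurant is expensive but the food is really fantastic"
aspect = "price"  # the aspect category under judgment

# NLI-type pair: review as segment A, aspect category as segment B.
enc = tok(review, aspect, return_tensors="pt")
out = bert(**enc).last_hidden_state  # (1, seq_len, hidden)

cls_vec = out[:, 0]  # [CLS]-only classification vector
aspect_ids = tok(aspect, add_special_tokens=False)["input_ids"]
mask = torch.isin(enc["input_ids"][0], torch.tensor(aspect_ids))
aspect_vec = out[0, mask].mean(dim=0, keepdim=True)  # aspect-token pooling
print(cls_vec.shape, aspect_vec.shape)
```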

Plan-based Ellipsis Resolution for Utterances in Noun-Phrase-Form in Restricted Domain Dialogues (제한된 영역의 대화에서 체언구 형태의 발화 이해를 위한 계획기반 생략 처리)

  • 윤철진;서정연
    • Korean Journal of Cognitive Science / v.11 no.1 / pp.81-92 / 2000
  • Elliptical fragments are common in natural language dialogues between humans. Since most elliptical fragments must be interpreted within context, it is not easy for computers to recognize the speaker's intention from them. In this paper, we propose a model that recognizes the speaker's intention from elliptical fragments in Korean by expanding the tripartite plan-based model proposed by Lambert. We add new discourse recipes to define the user's discourse actions through elliptical fragments. To use the plan inference process, utterances must be represented as actions; for example, elliptical fragments are represented as surface speech acts. The surface speech act representation includes 'Josa' (case marker in Korean) information, because Josa plays a very important role in analyzing speakers' intention in Korean. Finally, by using an object and discourse focus theory, the system can recognize that a user is trying to compare two plans by uttering elliptical fragments.
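A minimal sketch (ours, not the paper's model) of representing a noun-phrase fragment as a surface speech act that preserves its Josa, which a plan-based interpreter could use to constrain which discourse action the fragment fills. The dataclass fields and the tiny Josa-to-role table are illustrative assumptions.

```python
# Hedged sketch: a noun-phrase utterance as a surface speech act keeping
# its Korean case marker (Josa) as an intention cue.
from dataclasses import dataclass

JOSA_ROLES = {"이": "subject", "가": "subject", "을": "object", "를": "object",
              "으로": "instrument/direction", "에서": "location/source"}

@dataclass
class SurfaceSpeechAct:
    fragment: str  # the noun phrase as uttered
    josa: str      # trailing case marker, if any
    role: str      # grammatical role suggested by the Josa

def parse_fragment(utterance: str) -> SurfaceSpeechAct:
    # Try longer markers first so "에서" is not split into smaller pieces.
    for josa, role in sorted(JOSA_ROLES.items(), key=lambda kv: -len(kv[0])):
        if utterance.endswith(josa):
            return SurfaceSpeechAct(utterance[: -len(josa)], josa, role)
    return SurfaceSpeechAct(utterance, "", "unknown")

print(parse_fragment("서울에서"))  # the Josa hints at a location/source argument
```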

Hyperparameter Search for Facies Classification with Bayesian Optimization (베이지안 최적화를 이용한 암상 분류 모델의 하이퍼 파라미터 탐색)

  • Choi, Yonguk;Yoon, Daeung;Choi, Junhwan;Byun, Joongmoo
    • Geophysics and Geophysical Exploration / v.23 no.3 / pp.157-167 / 2020
  • With the recent advancement of computer hardware and the contribution of open source libraries that facilitate access to artificial intelligence technology, the use of machine learning (ML) and deep learning (DL) in various fields of exploration geophysics has increased. In addition, ML researchers have developed complex algorithms to improve inference accuracy on tasks such as image, video, voice, and natural language processing, and are now expanding their interest into automatic machine learning (AutoML). AutoML can be divided into three areas: feature engineering, architecture search, and hyperparameter search. Among these, this paper focuses on hyperparameter search with Bayesian optimization and applies it to the problem of facies classification using seismic data and well logs. The effectiveness of the Bayesian optimization technique is demonstrated on Vincent field data by comparing its results with those of random search.
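A hedged sketch of Bayesian hyperparameter search for a facies classifier, using scikit-optimize's gp_minimize. The synthetic features stand in for seismic attributes and well logs, and the search space and classifier are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: Gaussian-process Bayesian optimization over two
# hyperparameters of a facies classifier, minimizing negative CV accuracy.
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder for seismic-attribute/well-log features with facies labels.
X, y = make_classification(n_samples=500, n_features=8, n_classes=4,
                           n_informative=6, random_state=0)

space = [Integer(10, 300, name="n_estimators"),
         Integer(2, 20, name="max_depth")]

def objective(params):
    n_estimators, max_depth = params
    clf = RandomForestClassifier(n_estimators=n_estimators,
                                 max_depth=max_depth, random_state=0)
    # Negative accuracy, because gp_minimize minimizes the objective.
    return -cross_val_score(clf, X, y, cv=3).mean()

res = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best (n_estimators, max_depth):", res.x, "cv accuracy:", -res.fun)
```

Relative to random search, each new trial here is proposed by a Gaussian-process surrogate fit to the trials so far, which is the advantage the paper evaluates.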

A Semantic Similarity Decision Using Ontology Model Base On New N-ary Relation Design (새로운 N-ary 관계 디자인 기반의 온톨로지 모델을 이용한 문장의미결정)

  • Kim, Su-Kyoung;Ahn, Kee-Hong;Choi, Ho-Jin
    • Journal of the Korean Society for information Management / v.25 no.4 / pp.43-66 / 2008
  • Much research is currently being conducted on 'user information demand description' for the interfaces of information retrieval systems and Web search engines, but describing user information demands in natural language form remains difficult. The reason is that existing approaches cannot provide the semantic similarity an information retrieval model needs to fully accommodate the variety of information demand expressions and their semantic relevance. Therefore, this study proposes a decision method to better support user information demand description, using description logic, the knowledge representation basis of OWL, together with vector-model-based weights between concepts, so as to satisfy the variety of information demand expressions and semantic relevance. In experiments with the proposed method, semantic similarity decisions for polysemes and synonyms showed excellent performance.
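As a minimal sketch (ours, under stated assumptions) of combining an ontology relation check with vector-based concept weights, the snippet below boosts cosine similarity when two concepts share an ancestor. The mini ontology, embeddings, and bonus weighting are illustrative placeholders, not the paper's OWL model.

```python
# Hedged sketch: ontology-aware semantic similarity between concepts.
import numpy as np

# Tiny concept "ontology": parent links standing in for OWL class relations.
PARENT = {"sedan": "car", "car": "vehicle", "truck": "vehicle"}

def ontology_related(a: str, b: str) -> bool:
    """True if a and b share an ancestor in the mini ontology."""
    def ancestors(c):
        seen = {c}
        while c in PARENT:
            c = PARENT[c]
            seen.add(c)
        return seen
    return bool(ancestors(a) & ancestors(b))

# Placeholder concept vectors standing in for vector-model-based weights.
EMBED = {"sedan": np.array([0.9, 0.1]), "truck": np.array([0.7, 0.4]),
         "banana": np.array([0.0, 1.0])}

def similarity(a: str, b: str, bonus: float = 0.2) -> float:
    """Cosine similarity, boosted when the ontology links the concepts."""
    va, vb = EMBED[a], EMBED[b]
    cos = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))
    return min(1.0, cos + (bonus if ontology_related(a, b) else 0.0))

print(similarity("sedan", "truck"), similarity("sedan", "banana"))
```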