• Title/Summary/Keyword: 문장 벡터 (sentence vector)

Search Results: 146

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes, from written texts, the sentiments consumers or the public feel about an arbitrary object. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Because it has more practical value for business, ABSA is drawing attention from both academia and industry. For example, given a review that says "The restaurant is expensive but the food is really fantastic", general SA evaluates the overall sentiment towards the restaurant as positive, while ABSA identifies the restaurant's 'price' aspect as negative and its 'food' aspect as positive. ABSA thus enables more specific and effective marketing strategies. To perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and to judge the sentiments towards them. Accordingly, there are four main subtasks in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted either by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed by one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence mentioned 'pasta', 'steak', or 'grilled chicken special', these could all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect.
On the other hand, an aspect category like 'price', which has no specific aspect term but can be inferred indirectly from the sentiment word 'expensive', is called an implicit aspect. So far, 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and mostly use the shorter word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, whereas ACSC handles implicit aspects as well as explicit ones. This study seeks answers to the following issues, overlooked in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair configuration? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, reflecting the output vector of the aspect category token proved more effective than using only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally performs better than NLI, and that in the QA type the order of the sentence containing the aspect category does not affect performance.
There may be some differences depending on the characteristics of the dataset, but when using NLI type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be similarly applied to other studies such as ATSC.
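The sentence-pair configurations the study compares can be sketched in a few lines. The QA question template below is a hypothetical wording chosen for illustration, not the paper's exact prompt:

```python
def build_acsc_input(review, aspect, style="QA", aspect_first=False):
    """Build a BERT-style sentence-pair input for Aspect Category
    Sentiment Classification (ACSC).

    QA style phrases the aspect as a question; NLI style states it as a
    bare hypothesis. aspect_first controls the sentence order, the third
    factor the study varies.
    """
    if style == "QA":
        aspect_seq = f"what do you think of the {aspect} ?"
    else:  # NLI: the aspect category alone, as a hypothesis
        aspect_seq = aspect
    first, second = (aspect_seq, review) if aspect_first else (review, aspect_seq)
    return f"[CLS] {first} [SEP] {second} [SEP]"
```

For example, `build_acsc_input("the food is fantastic", "food")` yields a QA pair with the review first, while `style="NLI", aspect_first=True` places the bare aspect before the review.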

A Word Embedding used Word Sense and Feature Mirror Model (단어 의미와 자질 거울 모델을 이용한 단어 임베딩)

  • Lee, JuSang;Shin, JoonChoul;Ock, CheolYoung
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.4
    • /
    • pp.226-231
    • /
    • 2017
  • Word representation, an important area in natural language processing (NLP) with machine learning, is a method that represents a word not as text but as a distinguishable symbol. Existing word-embedding methods employ large corpora so that words occurring near each other in text are positioned close together. However, corpus-based word embedding requires several corpora because of word-occurrence frequency effects and the growing number of words. In this paper, word embedding is performed using dictionary definitions and semantic relationship information (hypernyms and antonyms). Words are trained using the feature mirror model (FMM), a modification of Skip-Gram (Word2Vec). Words with similar senses obtain similar vectors, and it was also possible to distinguish the vectors of antonymous words.
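The property the abstract describes, that sense-similar words get similar vectors while antonyms remain distinguishable, is usually checked with cosine similarity. A minimal sketch over toy vectors (the numeric values are invented purely for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-dimensional embeddings (hypothetical values for illustration):
vec = {
    "rise":  [0.9, 0.1, 0.2],
    "climb": [0.8, 0.2, 0.1],    # near-synonym of "rise": similar direction
    "fall":  [-0.9, -0.1, 0.3],  # antonym: roughly opposite direction
}
```

With vectors like these, `cosine(vec["rise"], vec["climb"])` is close to 1 while `cosine(vec["rise"], vec["fall"])` is negative, mirroring the synonym/antonym separation the paper reports.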

Comparison of feature parameters for emotion recognition using speech signal (음성 신호를 사용한 감정인식의 특징 파라메터 비교)

  • 김원구
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.371-377
    • /
    • 2003
  • In this paper, feature parameters for emotion recognition using speech signals are compared. For this purpose, a corpus of emotional speech data, recorded and classified by emotion through subjective evaluation, was used to produce statistical feature vectors, such as the average, standard deviation, and maximum of pitch and energy, and phonetic features, such as MFCC parameters. To evaluate the feature parameters, a speaker- and context-independent emotion recognition system was constructed for the experiments. In the experiments, pitch and energy parameters and their derivatives were used as prosodic information, and MFCC parameters and their derivatives were used as phonetic information. Experimental results using a vector-quantization-based emotion recognition system showed that the system using MFCC parameters and their derivatives performed better than the one using pitch and energy parameters.
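The statistical prosodic features the paper compares (average, standard deviation, and maximum of a pitch or energy contour) can be sketched directly:

```python
def prosodic_stats(contour):
    """Statistical features of a pitch or energy contour: mean,
    (population) standard deviation, and maximum, as compared against
    MFCC-based features in the paper."""
    n = len(contour)
    mean = sum(contour) / n
    var = sum((x - mean) ** 2 for x in contour) / n
    return {"mean": mean, "std": var ** 0.5, "max": max(contour)}
```

Applied to a short pitch contour such as `[100.0, 120.0, 140.0]` (Hz), this yields a mean of 120, a maximum of 140, and a standard deviation of about 16.33.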

A Semantic Orientation Prediction Method of Sentiment Features Based on the General and Domain-Dependent Characteristics (일반적, 영역 의존적 특성을 반영한 감정 자질의 의미지향성 추정 방법)

  • Hwang, Jaewon;Ko, Youngjoong
    • Annual Conference on Human and Language Technology
    • /
    • 2009.10a
    • /
    • pp.155-159
    • /
    • 2009
  • This paper proposes a technique that improves the performance of Korean document sentiment classification by estimating the semantic orientation of sentiment features, an important lexical resource, while reflecting both their general and domain-dependent characteristics. The semantic orientation of a sentiment feature is estimated using snippets for the feature retrieved through a search engine together with the experimental corpus. The snippets extracted via the search engine reflect the general characteristics of the sentiment feature, while the experimental corpus reflects the domain-dependent characteristics of the target classification domain. The resulting semantic-orientation score is used to estimate the sentiment strength of each sentence, and this sentence-level sentiment strength is combined with TF-IDF weighting to set the weight of each sentiment feature. Finally, during training, additional weight is given to positive sentiment features in positive documents and to negative sentiment features in negative documents. We evaluate the proposed method using a Support Vector Machine (SVM), which shows excellent performance in document classification. The evaluation shows a 3.1% performance improvement over using content-word-based features as in general information retrieval.
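A rough sketch of the idea: blend a general orientation score from search-engine snippets with a domain-dependent score from the target corpus, then fold the result into a TF-IDF weight. The count-ratio scoring and the `alpha` blend below are illustrative assumptions, not the paper's exact formulas:

```python
def semantic_orientation(snippet_pos, snippet_neg, corpus_pos, corpus_neg, alpha=0.5):
    """Blend a general orientation score (positive/negative evidence
    counted in search-engine snippets) with a domain-dependent one
    (counted in the target corpus). Returns a value in [-1, 1]."""
    general = (snippet_pos - snippet_neg) / max(snippet_pos + snippet_neg, 1)
    domain = (corpus_pos - corpus_neg) / max(corpus_pos + corpus_neg, 1)
    return alpha * general + (1 - alpha) * domain

def weighted_tfidf(tf, idf, orientation):
    """Boost a TF-IDF weight by sentiment strength, mirroring how the
    paper folds sentence sentiment strength into TF-IDF weighting."""
    return tf * idf * (1 + abs(orientation))
```

For instance, a feature seen positively 8/2 times in snippets and 6/4 times in the corpus gets an orientation of 0.4, which then scales its TF-IDF weight up by 40%.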


On the Program Conversion and Conditional Simplification for VECTRAN Code (백트란 코드화를 위한 프로그램 변환과 단순화)

  • Hwang, Seon-Myeong;Kim, Haeng-Gon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.38-49
    • /
    • 1994
  • One of the most common problems encountered in the automatic translation of FORTRAN source code to VECTRAN is the conditional transfer of control within loops. Transfers of control create control dependencies, in which the execution of a statement depends on the value of a variable in another statement. In this paper, I propose algorithms that attempt to convert statements in a loop into conditional assignment statements, which can easily be analyzed for data dependence, and I present a simplification method for conditional assignment statements. In particular, I propose not only a method for simplifying Boolean functions but also an extended method for n-state functions.
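The core conversion turns an if-controlled statement inside a loop into a conditional assignment whose data dependence is explicit. VECTRAN itself is a Fortran dialect; the Python below is only a sketch of the transformation's effect, not actual VECTRAN:

```python
def guarded_assign(xs, conds, vals):
    """Rewrite of the loop

        for i: if conds[i]: xs[i] = vals[i]

    as an unconditional sweep of conditional assignments
    (x = val if cond else x). Every element is now written exactly
    once, so the data dependence is analyzable and vectorizable."""
    return [v if c else x for x, c, v in zip(xs, conds, vals)]
```

Each iteration assigns unconditionally, selecting between the new value and the old one, which is the form a vectorizer can analyze without tracking control flow.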


A comparative study of Entity-Grid and LSA models on Korean sentence ordering (한국어 텍스트 문장정렬을 위한 개체격자 접근법과 LSA 기반 접근법의 활용연구)

  • Kim, Youngsam;Kim, Hong-Gee;Shin, Hyopil
    • Korean Journal of Cognitive Science
    • /
    • v.24 no.4
    • /
    • pp.301-321
    • /
    • 2013
  • For the task of sentence ordering, this paper utilizes the Entity-Grid model, an entity-based modeling approach, as well as Latent Semantic Analysis (LSA), which is based on vector space modeling. The task is well known as one of the fundamental tools for measuring text coherence and enhancing text-generation processes. For the implementation of the Entity-Grid model, we use the syntactic roles of nouns in Korean text for the ordering task and measure their impact on the result, since their contribution has been discussed in previous research. Contrary to the case of German, they show a positive effect. To obtain information on the syntactic roles, we rely on the Korean case markers attached to the nouns. The results reveal that these cues help to measure text coherence. In addition, we compare the results with those of the LSA-based model, discuss the advantages and disadvantages of the two models, and outline options for future studies.
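The case-marker strategy can be sketched as building an entity grid: one row per sentence, one column per entity, each cell holding the entity's syntactic role. The marker-to-role table below is a simplified assumption (real Korean case marking is richer):

```python
# Simplified mapping from Korean case markers to syntactic roles:
# S (subject), O (object). Unknown markers fall back to X (other).
MARKER_ROLE = {"이": "S", "가": "S", "을": "O", "를": "O"}

def entity_grid(sentences):
    """Build an entity grid from sentences given as lists of
    (noun, case_marker) pairs; cells hold S/O/X or '-' if the
    entity does not occur in that sentence."""
    entities = sorted({noun for sent in sentences for noun, _ in sent})
    grid = []
    for sent in sentences:
        roles = {noun: MARKER_ROLE.get(marker, "X") for noun, marker in sent}
        grid.append([roles.get(e, "-") for e in entities])
    return entities, grid
```

Role transitions down each column (e.g. S followed by S, or O followed by S) are then the features an Entity-Grid coherence model counts.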


Automatic Product Review Helpfulness Estimation based on Review Information Types (상품평의 정보 분류에 기반한 자동 상품평 유용성 평가)

  • Kim, Munhyong;Shin, Hyopil
    • Journal of KIISE
    • /
    • v.43 no.9
    • /
    • pp.983-997
    • /
    • 2016
  • The many online product reviews available for any given product make it difficult for a consumer to locate the helpful ones. The purpose of this study was to investigate automatic helpfulness evaluation of online product reviews according to review information types based on the target of the information. The underlying assumption was that consumers find reviews containing specific information about the product itself, or about the reliability of the reviewers, more helpful than peripheral information such as shipping or customer service. Therefore, each sentence was categorized by information type, which reduced the semantic space of review sentences. Subsequently, we extracted specific information from sentences using a topic-based representation of the sentences and a clustering algorithm. Review-ranking experiments showed more effective results than comparable approaches.
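The categorization step can be sketched with hypothetical keyword lists per information type, a crude stand-in for the paper's topic-based representation and clustering:

```python
# Hypothetical keyword lists per information type (illustrative only;
# the paper derives these categories from topic modeling, not lists).
TYPE_KEYWORDS = {
    "product":    {"battery", "screen", "quality", "size"},
    "peripheral": {"shipping", "delivery", "service", "refund"},
}

def product_info_ratio(review_sentences):
    """Score a review by the share of classified sentences that are
    about the product itself, the signal the study assumes consumers
    rate as helpful."""
    product = peripheral = 0
    for sent in review_sentences:
        words = set(sent.lower().split())
        if words & TYPE_KEYWORDS["product"]:
            product += 1
        elif words & TYPE_KEYWORDS["peripheral"]:
            peripheral += 1
    total = product + peripheral
    return product / total if total else 0.0
```

Reviews would then be ranked by this ratio, so a review that is mostly about shipping scores below one that discusses the product's battery or screen.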

Multi-Dimensional Emotion Recognition Model of Counseling Chatbot (상담 챗봇의 다차원 감정 인식 모델)

  • Lim, Myung Jin;Yi, Moung Ho;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.10 no.4
    • /
    • pp.21-27
    • /
    • 2021
  • Recently, the importance of counseling has been increasing due to the "Corona Blue" caused by COVID-19. Also, with the increase in non-face-to-face services, research on chatbots as a counseling medium is being actively conducted. In non-face-to-face counseling through a chatbot, it is most important to understand the client's emotions accurately. However, since there is a limit to recognizing emotions only from the sentences the client writes, it is necessary to recognize the dimensional emotions embedded in the sentences for more accurate emotion recognition. Therefore, in this paper, we propose a multi-dimensional emotion recognition model in which word vectors, generated by training a Word2Vec model after correcting the original data to match its characteristics, and sentence-level VAD (Valence, Arousal, Dominance) scores are learned with a deep learning algorithm. Comparing three deep learning models to verify the usefulness of the proposed model, the attention model showed the best performance with an R-squared of 0.8484.
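A toy sketch of sentence-level VAD estimation by averaging word scores; the word VAD values below are invented, and the paper's actual model regresses the scores with a deep attention network rather than averaging:

```python
# Hypothetical word-level VAD (Valence, Arousal, Dominance) scores in
# [0, 1]; a real system would learn these from embeddings.
WORD_VAD = {
    "happy": (0.9, 0.6, 0.7),
    "alone": (0.2, 0.3, 0.2),
}

def sentence_vad(tokens):
    """Average known word VAD scores into a sentence-level estimate,
    falling back to neutral (0.5, 0.5, 0.5) when no word is known."""
    known = [WORD_VAD[t] for t in tokens if t in WORD_VAD]
    if not known:
        return (0.5, 0.5, 0.5)
    n = len(known)
    return tuple(sum(dim) / n for dim in zip(*known))
```

This gives a continuous three-dimensional emotion estimate per sentence, the kind of target the paper's regressor is evaluated on with R-squared.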

Design and Implementation of Short-Essay Marking System by Using Semantic Kernel and WordNet (의미 커널과 워드넷을 이용한 주관식 문제 채점 시스템의 설계 및 구현)

  • Cho, Woo-Jin;Chu, Seung-Woo;O, Jeong-Seok;Kim, Han-Saem;Kim, Yu-Seop;Lee, Jae-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.05a
    • /
    • pp.1027-1030
    • /
    • 2005
  • Existing short-essay marking systems that apply a semantic kernel tried to address natural-language-processing problems by representing, as vectors, the correlations between multiple answers and index terms extracted from a corpus. In this paper, to resolve the possible similarity-calculation errors caused by the representational limits of answers and index terms in the existing system, we apply answer expansion by random extraction using a thesaurus. Noting that in descriptive short-answer marking the vocabulary used carries more grading weight than the sentence context, we expand both the examiner's and the examinee's answers into synonym and near-synonym groups to improve marking performance. First, index terms are extracted from both answers with a morphological analyzer and then expanded into synonym and near-synonym groups using WordNet. These are built into vectors for measuring inter-word correlations using the corpus index, and a semantic kernel is applied to compute similarity to the correct answer. Computing the correlation coefficients between the examiner's scores and each model's scores showed that the ELSA model produced the highest correlation.
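The expand-then-compare idea can be sketched with a toy synonym dictionary standing in for WordNet and Jaccard overlap standing in for the semantic-kernel similarity (both are simplifications, not the paper's actual components):

```python
# Toy synonym groups standing in for WordNet lookups (illustrative):
# 증가/상승 ("increase"/"rise"), 원인/요인 ("cause"/"factor").
SYNONYMS = {
    "증가": {"증가", "상승"}, "상승": {"증가", "상승"},
    "원인": {"원인", "요인"}, "요인": {"원인", "요인"},
}

def expand(terms):
    """Expand index terms into their synonym groups."""
    out = set()
    for t in terms:
        out |= SYNONYMS.get(t, {t})
    return out

def overlap_score(answer_terms, key_terms):
    """Jaccard overlap of the expanded term sets: a simplified
    stand-in for semantic-kernel similarity between an examinee's
    answer and the examiner's answer key."""
    a, k = expand(answer_terms), expand(key_terms)
    return len(a & k) / len(a | k)
```

After expansion, an answer using 상승 matches an answer key using 증가 perfectly, which is exactly the representational gap the expansion step is meant to close.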


A Study on Automatic Phoneme Segmentation of Continuous Speech Using Acoustic and Phonetic Information (음향 및 음소 정보를 이용한 연속제의 자동 음소 분할에 대한 연구)

  • 박은영;김상훈;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.4-10
    • /
    • 2000
  • The work presented in this paper concerns a postprocessor that improves the performance of an automatic speech segmentation system by correcting phoneme boundary errors. We propose a postprocessor that reduces the range of errors in auto-labeled results so that they can be used directly as synthesis units. Starting from a baseline automatic segmentation system, our postprocessor is trained on the features of hand-labeled results using a multi-layer perceptron (MLP). The auto-labeled result combined with the MLP postprocessor then determines the new phoneme boundary. The details are as follows. First, we select speech feature sets based on acoustic-phonetic knowledge. We adopt the MLP as the pattern classifier because of its excellent nonlinear discrimination capability; moreover, the MLP can readily reflect the various types of acoustic features that appear at phoneme boundaries within a short time. Finally, an appropriate feature set, analyzed for each phonetic event, is applied to the proposed postprocessor to compensate for phoneme boundary errors. For phonetically rich sentence data, we achieved a 19.9% improvement in frame accuracy compared with the plain automatic labeling system, and reduced the absolute error rate by about 28.6%.
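The postprocessing step can be sketched as picking, within a small window around the auto-labeled boundary, the frame a trained classifier scores highest. The scores below stand in for MLP outputs; the window search is an illustrative simplification of the paper's correction scheme:

```python
def refine_boundary(auto_frame, scores, window=3):
    """Move an auto-labeled phoneme boundary to the frame with the
    highest boundary score within +/- window frames. In the paper,
    such scores would come from an MLP trained on hand-labeled
    segmentations; here they are given as a plain list."""
    lo = max(0, auto_frame - window)
    hi = min(len(scores), auto_frame + window + 1)
    return max(range(lo, hi), key=lambda i: scores[i])
```

If the automatic segmenter places a boundary one or two frames late, the window search snaps it back to the frame the classifier considers most boundary-like.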
