• Title/Summary/Keyword: text vector

Search Results: 284

Multidimensional Analysis of Consumers' Opinions from Online Product Reviews

  • Taewook Kim;Dong Sung Kim;Donghyun Kim;Jong Woo Kim
    • Asia pacific journal of information systems
    • /
    • v.29 no.4
    • /
    • pp.838-855
    • /
    • 2019
  • Online product reviews are a vital source for companies in that they contain consumers' opinions of products. Earlier methods of opinion mining, which draw semantic information from text, have mostly been applied in a single dimension, which is not sufficient in itself to elicit reviewers' comprehensive views on products. In this paper, we propose a novel approach to opinion mining that projects online consumers' reviews onto a multidimensional framework to improve the interpretation of product reviews. First, we set up a new framework consisting of six dimensions based on marketing management theory. To calculate the distances between review sentences and each dimension, we embed the words in reviews using Google's pre-trained word2vec model. We then classify each sentence of the reviews into the respective dimensions of the framework and measure the sentiment degree of each sentence. The results are plotted on a radar graph whose axes are the dimensions of the framework. We tested the approach on Amazon product reviews of the iPhone and Galaxy smartphone series, with a total of around 21,000 sentences. The results showed that the radar graphs visually reflected several issues associated with the products. The proposed method is not tied to specific product categories; it can be applied generally to opinion mining on reviews of any product category.
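The dimension-assignment step described above can be illustrated roughly as follows. This is a minimal sketch assuming Google's pre-trained word2vec vectors loaded via gensim, with hypothetical seed words for the marketing dimensions; the paper's actual six dimensions and implementation may differ.

```python
# Sketch: assign each review sentence to its nearest dimension by cosine
# similarity between averaged word2vec vectors. Seed words and the model
# path are illustrative assumptions, not the paper's settings.
import numpy as np
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

DIMENSIONS = {            # hypothetical seed words per marketing dimension
    "product": ["quality", "design", "feature"],
    "price":   ["price", "cost", "value"],
    "service": ["support", "warranty", "service"],
}

def avg_vector(words):
    """Average the word2vec vectors of the words found in the vocabulary."""
    vecs = [w2v[w] for w in words if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def nearest_dimension(sentence):
    """Assign a review sentence to the closest framework dimension."""
    s_vec = avg_vector(sentence.lower().split())
    return max(DIMENSIONS, key=lambda d: cosine(s_vec, avg_vector(DIMENSIONS[d])))

print(nearest_dimension("The battery price is way too high"))  # -> "price"
```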

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted and is showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because multiple labels must be assigned. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed labels, and (iii) restores the predicted labels to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationships between labels or compress the labels by random transformation, they cannot capture the non-linear relationships between labels and therefore cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding the input of a layer to its output, gradient loss during backpropagation is prevented and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies that use them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates the multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was identified by comparing its performance according to domain characteristics and the number of dimensions of the latent label space.
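A minimal PyTorch sketch of the core idea, an autoencoder whose encoder and decoder each contain a skip connection for label embedding, follows. The layer sizes, residual wiring, and loss are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch: label-embedding autoencoder with skip connections in both halves.
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Linear layer whose input is added back to its output (skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(self.fc(x) + x)   # residual addition mitigates gradient loss

class LabelAutoEncoder(nn.Module):
    def __init__(self, label_dim=1000, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(label_dim, hidden_dim), nn.ReLU(),
            SkipBlock(hidden_dim),
            nn.Linear(hidden_dim, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            SkipBlock(hidden_dim),
            nn.Linear(hidden_dim, label_dim),
        )
    def forward(self, y):
        z = self.encoder(y)                # compressed latent label space
        return self.decoder(z), z          # reconstructed labels + latent code

model = LabelAutoEncoder()
y = torch.randint(0, 2, (8, 1000)).float()       # toy multi-hot label vectors
recon, latent = model(y)
loss = nn.BCEWithLogitsLoss()(recon, y)          # reconstruction objective
```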

Monitoring Mood Trends of Twitter Users using Multi-modal Analysis method of Texts and Images (텍스트 및 영상의 멀티모달분석을 이용한 트위터 사용자의 감성 흐름 모니터링 기술)

  • Kim, Eun Yi;Ko, Eunjeong
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.1
    • /
    • pp.419-431
    • /
    • 2018
  • In this paper, we propose a novel method for monitoring the mood trend of Twitter users by analyzing their daily tweets over a long period. To understand the tweets more accurately, we analyze all types of content in them, i.e., texts, emoticons, and images, and thus develop a multimodal sentiment analysis method. In the proposed method, two single-modal analyses are first performed to extract the users' moods hidden in texts and images: a lexicon- and learning-based text classifier and a learning-based image classifier. Thereafter, the moods extracted by the respective analyses are combined into a tweet-level mood and aggregated into a daily mood. As a result, the proposed method generates a daily mood flow graph for each user, which allows us to monitor users' mood trends more intuitively. For evaluation, we perform two sets of experiments. First, we collect a dataset of 40,447 tweets and evaluate our method by comparing it with state-of-the-art techniques. In our experiments, we demonstrate that the proposed multimodal analysis method outperforms other baselines as well as our own methods that use text-based tweets or images only. Furthermore, to evaluate the potential of the proposed method for monitoring users' mood trends, we tested it with 40 depressive users and 40 normal users. The results show that the proposed method can be effectively used to find depressed users.
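The fusion and aggregation steps can be sketched as follows, assuming a simple weighted late fusion of the two single-modal scores and per-day averaging; the weights and score ranges are illustrative assumptions, not the paper's reported settings.

```python
# Sketch: combine per-tweet text and image mood scores, then average per day
# to obtain a mood-flow series for a user.
from collections import defaultdict
from statistics import mean

def tweet_mood(text_score, image_score=None, w_text=0.6, w_image=0.4):
    """Late fusion of the two single-modal scores (each assumed in [-1, 1])."""
    if image_score is None:                 # text-only tweet
        return text_score
    return w_text * text_score + w_image * image_score

def daily_mood_flow(tweets):
    """tweets: iterable of (date, text_score, image_score_or_None)."""
    per_day = defaultdict(list)
    for date, t, i in tweets:
        per_day[date].append(tweet_mood(t, i))
    return {d: mean(scores) for d, scores in sorted(per_day.items())}

flow = daily_mood_flow([("2018-01-01", 0.8, 0.5), ("2018-01-02", -0.4, None)])
print(flow)   # {'2018-01-01': 0.68, '2018-01-02': -0.4}
```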

A Korean Document Sentiment Classification System based on Semantic Properties of Sentiment Words (감정 단어의 의미적 특성을 반영한 한국어 문서 감정분류 시스템)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.4
    • /
    • pp.317-322
    • /
    • 2010
  • This paper proposes a method to improve the performance of a Korean document sentiment classification system using the semantic properties of sentiment words. A sentiment word is a word that carries sentiment, and sentiment features are defined as a set of sentiment words, which are an important lexical resource for sentiment classification. A sentiment feature represents different sentiment intensities in the general domain and in a specific domain. In the general domain, the sentiment intensity can be estimated using snippets from a search engine, while in a specific domain, training data can be used for this estimation. The estimated sentiment intensity of a sentiment feature is called its semantic orientation and is used to estimate the sentiment intensity of the sentences in text documents. After estimating the sentiment intensity of the sentences, we apply it to the weights of the sentiment features. In this paper, we evaluate our system in three different cases, namely general, domain-specific, and general/domain-specific semantic orientation, using a support vector machine. Our experimental results show improved performance in all cases; in particular, with general/domain-specific semantic orientation, our proposed method performs 3.1% better than a baseline system indexed by content words only.
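A minimal sketch of the weighting idea, scaling sentiment-feature columns by their estimated semantic orientation before SVM training, is shown below with scikit-learn; the toy lexicon, orientation values, and documents are hypothetical.

```python
# Sketch: weight sentiment-feature columns by |semantic orientation| and train an SVM.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

ORIENTATION = {"좋다": 1.5, "훌륭하다": 1.8, "나쁘다": 1.6, "실망": 1.7}  # hypothetical lexicon

docs = ["제품이 좋다", "배송이 나쁘다 실망", "서비스가 훌륭하다"]
labels = [1, 0, 1]                                  # 1 = positive, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

# Scale the columns of sentiment features by their orientation-based weights.
weights = np.ones(X.shape[1])
for word, w in ORIENTATION.items():
    if word in vec.vocabulary_:
        weights[vec.vocabulary_[word]] = w
X_weighted = X * weights

clf = LinearSVC().fit(X_weighted, labels)
print(clf.predict(vec.transform(["정말 좋다"]).toarray() * weights))  # -> [1]
```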

Effective Picture Search in Lifelog Management Systems using Bluetooth Devices (라이프로그 관리 시스템에서 블루투스 장치를 이용한 효과적인 사진 검색 방법)

  • Chung, Eun-Ho;Lee, Ki-Yong;Kim, Myoung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.4
    • /
    • pp.383-391
    • /
    • 2010
  • A lifelog management system provides users with services to store, manage, and search their lifelogs. This paper proposes a fully automatic method for collecting real-world social contacts and a lifelog search engine that uses the collected social contact information as keywords. Short-range wireless network devices in mobile phones are used to detect the social contacts of their users. A human-Bluetooth relationship matrix is built based on the frequency with which a person and a Bluetooth device are observed at the same time. Results show that when only 20% of the full social contact information from the observation period is used for calculation, 90% of the human-Bluetooth relationships can be acquired correctly. We then suggest a lifelog search engine that takes human names as keywords and compares two vectors, a row of the human-Bluetooth matrix and the vector of Bluetooth devices scanned while a lifelog was created, using the vector space information retrieval model. This search engine returns more lifelogs than an existing text-matching search engine and, unlike the existing engine, ranks its results.
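The retrieval step can be sketched as follows, assuming cosine similarity from the vector space model between a person's row of the human-Bluetooth matrix and the device vector scanned at capture time; the matrix values and device IDs are illustrative.

```python
# Sketch: rank photos by cosine similarity between a person's Bluetooth profile
# and the device list scanned when each photo was taken.
import numpy as np

DEVICES = ["aa:01", "bb:02", "cc:03", "dd:04"]            # observed Bluetooth IDs

# Rows: how often each person co-occurred with each device (toy values).
HUMAN_BLUETOOTH = {
    "Alice": np.array([9.0, 1.0, 0.0, 3.0]),
    "Bob":   np.array([0.0, 7.0, 5.0, 0.0]),
}

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def search_photos(person, photos):
    """photos: list of (photo_id, set of device IDs scanned at capture time)."""
    query = HUMAN_BLUETOOTH[person]
    scored = []
    for photo_id, scanned in photos:
        vec = np.array([1.0 if d in scanned else 0.0 for d in DEVICES])
        scored.append((photo_id, cosine(query, vec)))
    return sorted(scored, key=lambda x: x[1], reverse=True)   # ranked results

print(search_photos("Alice", [("p1", {"aa:01", "dd:04"}), ("p2", {"bb:02"})]))
```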

Classifying Sub-Categories of Apartment Defect Repair Tasks: A Machine Learning Approach (아파트 하자 보수 시설공사 세부공종 머신러닝 분류 시스템에 관한 연구)

  • Kim, Eunhye;Ji, HongGeun;Kim, Jina;Park, Eunil;Ohm, Jay Y.
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.9
    • /
    • pp.359-366
    • /
    • 2021
  • A number of construction companies in Korea invest considerable human and financial resources in constructing systems for managing apartment defect data and categorizing repair tasks. Thus, this study proposes machine learning models to automatically classify defect complaint texts into one of the sub-categories of 'finishing work' (i.e., one of the defect repair tasks). In the proposed models, we employed two word representation methods (Bag-of-Words and Term Frequency-Inverse Document Frequency (TF-IDF)) and two machine learning classifiers (Support Vector Machine and Random Forest). In particular, we conducted both binary and multi-class classification tasks to classify nine sub-categories of finishing work: home appliance installation work, paperwork, painting work, plastering work, interior masonry work, plaster finishing work, indoor furniture installation work, kitchen facility installation work, and tiling work. The classifiers using the TF-IDF representation with Random Forest achieved more than 90% accuracy, precision, recall, and F1 score. We shed light on the possibility of constructing automated defect classification systems based on the proposed machine learning models.
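One of the proposed configurations, TF-IDF features with a Random Forest classifier, can be sketched with scikit-learn as below; the example complaints, labels, and hyperparameters are illustrative assumptions.

```python
# Sketch: TF-IDF + Random Forest for classifying defect complaints into
# finishing-work sub-categories (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

complaints = [
    "벽지가 찢어져 있습니다",          # paperwork (wallpaper)
    "타일이 깨져 있어요",              # tiling work
    "주방 싱크대 문이 틀어졌습니다",    # kitchen facility installation work
]
labels = ["paperwork", "tiling", "kitchen"]

model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
])
model.fit(complaints, labels)
print(model.predict(["거실 타일이 깨져 있습니다"]))  # expected: "tiling" in this toy example
```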

Predicting Functional Outcomes of Patients With Stroke Using Machine Learning: A Systematic Review (머신러닝을 활용한 뇌졸중 환자의 기능적 결과 예측: 체계적 고찰)

  • Bae, Suyeong;Lee, Mi Jung;Nam, Sanghun;Hong, Ickpyo
    • Therapeutic Science for Rehabilitation
    • /
    • v.11 no.4
    • /
    • pp.23-39
    • /
    • 2022
  • Objective: To summarize clinical and demographic variables and machine learning uses for predicting the functional outcomes of patients with stroke. Methods: We searched PubMed, CINAHL, and Web of Science to identify articles published from 2010 to 2021. The search terms were "machine learning OR data mining AND stroke AND function OR prediction OR/AND rehabilitation". Articles exclusively using brain imaging techniques or deep learning methods, and articles without available full text, were excluded from this study. Results: Nine articles were selected for this study. Support vector machines (19.05%) and random forests (19.05%) were the two most frequently used machine learning models. Five articles (55.56%) demonstrated that the impact of patients' initial and/or discharge assessment scores, such as the modified Rankin Scale (mRS) or Functional Independence Measure (FIM), on stroke patients' functional outcomes was higher than that of their clinical characteristics. Conclusions: This study showed that patients' initial and/or discharge assessment scores, such as the mRS or FIM, could influence their functional outcomes more than their clinical characteristics. Evaluating and reviewing the initial and/or discharge functional outcomes of patients with stroke may be required to develop optimal therapeutic interventions that enhance their functional outcomes.

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in terms of business, ABSA is drawing attention from both academic and industrial organizations. For example, given a review that says "The restaurant is expensive but the food is really fantastic", general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and to judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed by one or more aspect terms or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. As such, an aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly inferred from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we consider 'aspect category' and 'aspect' the same concept and mostly use the word 'aspect' for convenience. One thing to note is that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, while ACSC treats not only explicit aspects but also implicit aspects. This study seeks to answer the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as a classification vector? Second, is there any performance difference between the QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence that includes the aspect category in the QA or NLI type sentence-pair configuration? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on four English benchmark datasets. As a result, we derived ACSC models that outperform existing studies without expanding the training dataset. In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as a classification vector. It was also found that QA type input generally provides better performance than NLI, and the order of the sentence with the aspect category in the QA type is irrelevant to performance. There may be some differences depending on the characteristics of the dataset, but when using NLI type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be similarly applied to other tasks such as ATSC.
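A minimal sketch of the input configuration and the two classification-vector choices compared in the study is given below, using the Hugging Face transformers library; the model checkpoint, classification head, and pooling of aspect-token outputs are illustrative assumptions rather than the authors' released code.

```python
# Sketch: sentence-pair (review, aspect) input to BERT, then either the [CLS]
# output vector or the averaged aspect-token output vectors feed a classifier.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(bert.config.hidden_size, 3)    # pos / neg / neutral

review = "The restaurant is expensive but the food is really fantastic"
aspect = "price"                                             # second sentence of the pair

inputs = tokenizer(review, aspect, return_tensors="pt")
hidden = bert(**inputs).last_hidden_state                    # (1, seq_len, hidden)

cls_vec = hidden[:, 0, :]                                    # [CLS]-only variant

# Aspect-token variant: average the output vectors of the aspect's wordpieces.
aspect_ids = set(tokenizer(aspect, add_special_tokens=False)["input_ids"])
mask = torch.tensor([[float(int(t) in aspect_ids) for t in inputs["input_ids"][0]]])
aspect_vec = (hidden * mask.unsqueeze(-1)).sum(dim=1) / mask.sum()

logits = classifier(aspect_vec)                              # or classifier(cls_vec)
print(logits.shape)                                          # torch.Size([1, 3])
```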

Relation Extraction based on Extended Composite Kernel using Flat Lexical Features (평면적 어휘 자질들을 활용한 확장 혼합 커널 기반 관계 추출)

  • Chai, Sung-Pil;Jeong, Chang-Hoo;Chai, Yun-Soo;Myaeng, Sung-Hyon
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.8
    • /
    • pp.642-652
    • /
    • 2009
  • In order to improve the performance of existing relation extraction approaches, we propose a method that combines two pivotal concepts which play an important role in classifying semantic relationships between entities in text. Having built a composite kernel-based relation extraction system, which incorporates both entity features and the syntactic structural information of relation instances, we define nine classes of lexical features and apply them to the system in combination. Evaluation on the ACE RDC corpus shows that our approach boosts the effectiveness of the existing composite kernels in relation extraction. It also confirms that by integrating the three important types of features (entity features, syntactic structures, and contextual lexical features), we can improve the performance of the relation extraction process.
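The composite-kernel idea can be sketched as a weighted combination of a kernel over flat lexical feature vectors and a kernel over structural features, fed to an SVM with a precomputed kernel; the feature matrices, kernel choices, and mixing weight below are illustrative assumptions, with feature extraction itself assumed given.

```python
# Sketch: composite kernel = weighted sum of a lexical-feature kernel and a
# structural-feature kernel, used with an SVM on a precomputed kernel matrix.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

# Toy feature matrices for 4 relation instances (assumed precomputed elsewhere).
X_lexical = np.random.rand(4, 10)     # flat lexical features (nine classes in the paper)
X_syntax  = np.random.rand(4, 20)     # syntactic-structure features
y = np.array([0, 1, 0, 1])            # relation labels

alpha = 0.5                            # mixing weight between the two kernels
K_train = alpha * linear_kernel(X_lexical) + (1 - alpha) * rbf_kernel(X_syntax)

clf = SVC(kernel="precomputed").fit(K_train, y)
print(clf.predict(K_train))            # predictions on the training instances
```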

Automated Scoring of Scientific Argumentation Using Expert Morpheme Classification Approaches (전문가의 형태소 분류를 활용한 과학 논증 자동 채점)

  • Lee, Manhyoung;Ryu, Suna
    • Journal of The Korean Association For Science Education
    • /
    • v.40 no.3
    • /
    • pp.321-336
    • /
    • 2020
  • We explore automated scoring models for scientific argumentation. We consider how a new analytical approach using machine learning techniques may enhance the understanding of spoken argumentation in the classroom. We sampled 2,605 utterances that occurred during a high school science class on molecular structure and classified the utterances into five argumentative elements. Next, we performed text preprocessing on the classified utterances. As machine learning techniques, we applied support vector machines, decision trees, random forests, and artificial neural networks. To enhance the identification of rebuttal elements, we used a heuristic feature-engineering method that applies experts' classification of the morphemes of scientific argumentation.
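The heuristic feature-engineering step can be sketched as counting expert-classified rebuttal morphemes and appending the count to bag-of-words features before SVM training; the morpheme list, utterances, and labels below are hypothetical, not the experts' actual classification.

```python
# Sketch: append an expert-morpheme count feature to bag-of-words vectors
# and train an SVM to label argumentative elements (toy data).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

REBUTTAL_MORPHEMES = {"하지만", "그렇지만", "아니", "반대로"}   # hypothetical expert list

utterances = ["분자 구조가 대칭이라서 극성이 없어",
              "하지만 전기음성도 차이를 봐야 해",
              "반대로 생각하면 아니야"]
labels = ["claim", "rebuttal", "rebuttal"]                      # argumentative elements

vec = CountVectorizer()
X_bow = vec.fit_transform(utterances).toarray()

def rebuttal_count(text):
    """Heuristic feature: how many expert-classified rebuttal morphemes occur."""
    return sum(text.count(m) for m in REBUTTAL_MORPHEMES)

X = np.hstack([X_bow, np.array([[rebuttal_count(u)] for u in utterances])])
clf = SVC(kernel="linear").fit(X, labels)

test = "그렇지만 온도가 변하면 결과가 달라져"
x_test = np.hstack([vec.transform([test]).toarray(), [[rebuttal_count(test)]]])
print(clf.predict(x_test))   # expected: "rebuttal" in this toy example
```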