• Title/Summary/Keyword: Word Depth


Semantic Similarity Measures Between Words within a Document using WordNet (워드넷을 이용한 문서내에서 단어 사이의 의미적 유사도 측정)

  • Kang, SeokHoon;Park, JongMin
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.11 / pp.7718-7728 / 2015
  • Semantic similarity between words can be applied in many fields, including computational linguistics, artificial intelligence, and information retrieval. In this paper, we present a weighted method for measuring semantic similarity between words in a document. The method uses the edge distance and depth of WordNet and calculates similarity on the basis of document information, namely word term frequencies (TF) and word concept frequencies (CF). Each word's weight is calculated from its TF and CF in the document. The method combines the edge distance between words, the depth of their subsumer, and the word weights in the document. We compared our scheme with other methods by experiments, and the proposed method outperforms the other similarity measures. Methods based on simple shortest distance or depth alone have difficulty representing or merging document information; this paper considers the shortest distance, the depth, and the information of words in the document, and thereby improves performance.
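The combination of edge distance, subsumer depth, and term-frequency weights described above can be sketched as follows. This is a minimal illustration on a toy hand-built taxonomy standing in for WordNet's hypernym graph; the `TAXONOMY` table, the Wu-Palmer-style base score, and the TF weighting formula are assumptions, not the paper's exact method.

```python
# Toy hypernym table (child -> parent), a stand-in for WordNet's taxonomy.
TAXONOMY = {
    "car": "vehicle", "truck": "vehicle", "vehicle": "artifact",
    "artifact": "entity", "dog": "canine", "canine": "animal",
    "animal": "entity",
}

def path_to_root(word):
    path = [word]
    while path[-1] in TAXONOMY:
        path.append(TAXONOMY[path[-1]])
    return path                         # e.g. car -> vehicle -> artifact -> entity

def depth(word):
    return len(path_to_root(word)) - 1  # the root "entity" has depth 0

def subsumer(w1, w2):
    """Lowest common ancestor of two words in the taxonomy."""
    ancestors = set(path_to_root(w1))
    for node in path_to_root(w2):
        if node in ancestors:
            return node

def similarity(w1, w2, tf):
    """Depth/edge-distance score scaled by normalized term frequencies."""
    d = depth(subsumer(w1, w2))
    dist = (depth(w1) - d) + (depth(w2) - d)   # edge distance via the subsumer
    base = 2.0 * d / (dist + 2.0 * d) if d else 0.0
    weight = (tf.get(w1, 1) + tf.get(w2, 1)) / (2.0 * max(tf.values()))
    return base * weight
```

With `tf = {"car": 3, "truck": 2, "dog": 1}`, a frequent and taxonomically close pair such as ("car", "truck") scores higher than an unrelated pair such as ("car", "dog"), whose only shared ancestor is the root.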

A Comparative Analysis of the Word Depth Appearing in Representations Used in the Definitions of Mathematical Terms and Word Problem in Elementary School Mathematics Textbook (초등 수학 교과서의 수학 용어 정의 및 문장제에 사용된 표현의 문장 복잡성 비교 분석)

  • Kang, Yunji;Paik, Suckyoon
    • Journal of Elementary Mathematics Education in Korea / v.24 no.2 / pp.231-257 / 2020
  • As the main mathematical concepts are presented and expressed in various ways through textbooks during teaching and learning, it is necessary to examine the representations used in elementary mathematics textbooks to find effective guidance. This study analyzed the sentences used in the definitions of mathematical terms and in the unit assessments of current elementary mathematics textbooks according to word depth (Yngve, 1960) from a syntactic perspective. The analysis showed that textbook sentences were generally concise with low word depth, and that sentence structure and form differed according to the characteristics of each term. The sentences in lower-grade textbooks were more simply constructed, and the sentences of the term definitions were more complex than those of the unit assessments. Efforts should be made to help learners learn mathematical concepts, such as clarifying textbook sentences, presenting visual materials together, and providing additional explanations suited to the level of individual learners.
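Yngve's (1960) word depth, the complexity measure this study applies, can be sketched in a few lines. The tuple encoding and the example sentence are illustrative assumptions; a real analysis would first parse the Korean textbook sentences into phrase-structure trees.

```python
def yngve_depths(tree, score=0):
    """Yngve (1960) word depth for each leaf of a phrase-structure tree.

    Children are numbered right-to-left starting at 0; a word's depth is
    the sum of those numbers along its path from the root, so deeply
    left-branching sentences score higher (greater memory load).
    """
    if isinstance(tree, str):                  # a leaf is a word
        return [(tree, score)]
    n = len(tree)
    result = []
    for i, child in enumerate(tree):
        result.extend(yngve_depths(child, score + (n - 1 - i)))
    return result

# (S (NP (Det the) (N cat)) (VP (V sat))) as nested tuples
tree = (("the", "cat"), ("sat",))
depths = dict(yngve_depths(tree))   # {"the": 2, "cat": 1, "sat": 0}
```

The mean or maximum of these per-word depths is then the sentence-level complexity score compared across term definitions and word problems.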

Association Modeling on Keyword and Abstract Data in Korean Port Research

  • Yoon, Hee-Young;Kwak, Il-Youp
    • Journal of Korea Trade / v.24 no.5 / pp.71-86 / 2020
  • Purpose - This study investigates research trends by searching for English keywords and abstracts in 1,511 Korean journal articles in the Korea Citation Index from the 2002-2019 period using the term "Port." The study aims to lay the foundation for a more balanced development of port research. Design/methodology - Using abstract and keyword data, we perform frequency analysis and word embedding (Word2vec). A t-SNE plot shows the main keywords extracted using the TextRank algorithm. To analyze which words were used in what context in our two nine-year sub-periods (2002-2010 and 2010-2019), we use Scattertext and scaled F-scores. Findings - First, during the 18-year study period, port research has developed through the convergence of diverse academic fields, covering 102 subject areas and 219 journals. Second, our frequency analysis of 4,431 keywords in 1,511 papers shows that the words "Port" (60 times), "Port Competitiveness" (33 times), and "Port Authority" (29 times), among others, attract the most attention from researchers. Third, a word embedding analysis identifies the words highly correlated with the top eight keywords and visually shows four different subject clusters in a t-SNE plot. Fourth, we use Scattertext to compare the words used in the two research sub-periods. Originality/value - This study is the first to apply abstract and keyword analysis and various text mining techniques to Korean journal articles in port research and thus has important implications. Further in-depth studies should collect a greater variety of textual data and analyze and compare port studies from different countries.
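The frequency-analysis step in the design can be sketched with the standard-library `Counter`. The per-article keyword lists below are hypothetical stand-ins for the 4,431 keywords the study extracts from 1,511 papers.

```python
from collections import Counter

# Hypothetical keyword lists, one per article.
article_keywords = [
    ["Port", "Port Competitiveness", "Logistics"],
    ["Port", "Port Authority"],
    ["Port", "Port Competitiveness", "Busan"],
]

# Frequency analysis: count how often each keyword appears across articles.
freq = Counter(kw for kws in article_keywords for kw in kws)
top = freq.most_common(3)           # most attention-getting keywords first
```

The same counts, split by publication year into the two sub-periods, are what Scattertext-style comparisons and scaled F-scores operate on.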

A Study on Development of Measurement Tools for Word-of-Mouth Constraint Factors - Focusing on SNS Advertising - (구전 제약요인 측정도구 개발에 대한 연구 - SNS 광고를 중심으로 -)

  • Yun, Dae-Hong
    • Management & Information Systems Review / v.38 no.2 / pp.209-223 / 2019
  • The purpose of this study was to stimulate online word-of-mouth advertising by developing the concept of word-of-mouth constraint factors and measurement tools for SNS advertising on social networks. The study was conducted in three phases. First, an exploratory investigation (target group interviews, in-depth interviews, and expert interviews) was performed to determine the concept and scope of word-of-mouth constraints, based on a literature study and qualitative methods. Second, the reliability and validity of the measurement questions were verified through a survey in order to refine the developed measurement items. Third, the predictive validity of the measurement items was verified by examining their relationships with other major constructs distinct from the developed items. Based on the results, six components and a total of 23 measurement questions for those components were derived, grouped into intrapersonal and interpersonal constraints (psychological sensitivity, compensatory sensitivity, and assessment by others) and structural constraints (reliability, informativity, and entertainment). The measurement questions were developed through qualitative and quantitative study, holistically covering the social, psychological, and environmental interruption factors that act as word-of-mouth constraints for SNS advertising, in terms of SNS achievements and evaluation. The results provide a basic framework for systematic and empirical research on online word-of-mouth constraints and for effective SNS word-of-mouth advertising.

Semantic Image Retrieval Using Color Distribution and Similarity Measurement in WordNet (컬러 분포와 WordNet상의 유사도 측정을 이용한 의미적 이미지 검색)

  • Choi, Jun-Ho;Cho, Mi-Young;Kim, Pan-Koo
    • The KIPS Transactions:PartB / v.11B no.4 / pp.509-516 / 2004
  • Semantic interpretation of an image is incomplete without some mechanism for understanding semantic content that is not directly visible. For this reason, human-assisted content annotation attaches natural-language textual descriptions to images. However, keyword-based retrieval operates at the level of syntactic pattern matching: dissimilarity between terms is usually computed by string matching rather than concept matching. In this paper, we propose a method for computerized semantic similarity calculation in WordNet space. We consider the edge, depth, link type, and density, as well as the existence of common ancestors. We also introduce a method that applies this similarity measurement to semantic image retrieval. To combine it with low-level features, we use a spatial color distribution model. When tested on an image set from Microsoft's 'Design Gallery Line', the proposed method outperforms other approaches.
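The low-level color feature that the paper combines with WordNet similarity can be illustrated with a simple histogram intersection. This sketch ignores the spatial component of the paper's color distribution model and uses an invented 2x2 "image"; it shows only the general shape of a color-distribution comparison.

```python
def color_histogram(pixels, bins=4):
    """Quantize RGB pixels into a normalized (bins^3)-bin color histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    return [h / len(pixels) for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: the overlap between two color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

red_image = [(200, 10, 10)] * 4      # a uniformly red 2x2 "image"
blue_image = [(10, 10, 200)] * 4
sim_same = histogram_intersection(color_histogram(red_image), color_histogram(red_image))
sim_diff = histogram_intersection(color_histogram(red_image), color_histogram(blue_image))
```

An identical pair overlaps completely (similarity 1.0), while disjoint color distributions overlap not at all; the retrieval score would blend this with the WordNet-based term similarity.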

Deep Learning Application for Core Image Analysis of the Poems by Ki Hyung-Do (딥러닝을 이용한 기형도 시의 핵심 이미지 분석)

  • Ko, Kwang-Ho
    • The Journal of the Convergence on Culture Technology / v.7 no.3 / pp.591-598 / 2021
  • Word vectors can be obtained by the statistical SVD method or by the deep-learning CBOW and LSTM methods, which learn the contexts of preceding/following words or the sequence of following words. They are used here to analyze the poems of Ki Hyung-do: the word vector recommends words similar to those expressing the core images of the poetry. At first sight the recommended words may not seem to fit the images, but once the contexts of the specific poems are examined closely, they express a style similar to that described by the reference words. The word vector can also find, by analogy, words standing in the same relations as those between the representative words for the poems' core images. Therefore, the similarity and analogy operations on word vectors estimated by the statistical SVD or deep-learning CBOW and LSTM methods allow the poems to be analyzed in depth and in variety.
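The statistical SVD route to word vectors can be sketched with a toy co-occurrence matrix. The vocabulary and counts below are invented; a real analysis would gather the counts from the corpus of Ki Hyung-do's poems and use a larger embedding dimension.

```python
import numpy as np

# Toy symmetric co-occurrence counts (all values hypothetical).
vocab = ["night", "dark", "street", "lamp"]
C = np.array([[0, 8, 2, 1],
              [8, 0, 1, 1],
              [2, 1, 0, 6],
              [1, 1, 6, 0]], dtype=float)

# Truncated SVD yields low-dimensional word vectors (the statistical route;
# CBOW or LSTM would instead learn vectors from word contexts).
U, S, _ = np.linalg.svd(C)
vectors = U[:, :2] * S[:2]           # one 2-d vector per vocabulary word

def cosine(a, b):
    """Similarity operation used to recommend words for a core image."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The analogy operation described in the abstract is then vector arithmetic on these rows (e.g. `vectors[a] - vectors[b] + vectors[c]`, followed by a nearest-neighbor search under `cosine`).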

Effect of Online Word of Mouth on Product Sales: Focusing on Communication-Channel Characteristics

  • Jeon, Jaihyun;Lim, Taewook;Kim, Byung-Do;Seok, Junhee
    • Asia Marketing Journal / v.21 no.2 / pp.73-98 / 2019
  • As information and communication technology continues its remarkable development, the exchange of information online has become as prevalent and frequent as face-to-face communication in daily life. Therefore, the management and application of WOM (word of mouth) practices will become more important than ever to companies. Currently, there are various types of communication channels for online WOM, and each channel has its own unique traits. Most previous research studies online WOM by examining the information inside a single communication channel, but this research chooses two different communication channels and analyzes the effects of online WOM in light of each channel's unique characteristics. More specifically, this research focuses on the expectation that the effects of information from Twitter and blogs on product sales may differ because Twitter and blogs, two different communication channels for online WOM, have their own unique traits. Our particular aim is to perform an in-depth examination of the effects of a communication channel's volume and valence, two important attributes of online WOM, on product sales. Furthermore, while most empirical research on online WOM analyzes its effect on markets for temporary experience goods, such as movies and books, this research focuses on the automobile market, a durable goods market. The results of our analysis are as follows: First, regarding blogs, a positive valence significantly and positively affects product sales, indicating that consumers are influenced more by the emotional aspect of a product presented in a post than by the number of blog posts. Second, regarding Twitter, the volume of online WOM significantly and positively affects sales, an indication that as the number of posts increases, sales increase.
Through this research, we suggest that even those firms that sell durable goods can increase sales through the management and application of online WOM. Moreover, according to the characteristics of communication channels, the effects of online WOM on sales differ. As a practical implication of this research, we suggest that companies can and should create marketing strategies appropriate to their targeted communication channels.

A Study on the Optimal Search Keyword Extraction and Retrieval Technique Generation Using Word Embedding (워드 임베딩(Word Embedding)을 활용한 최적의 키워드 추출 및 검색 방법 연구)

  • Jeong-In Lee;Jin-Hee Ahn;Kyung-Taek Koh;YoungSeok Kim
    • Journal of the Korean Geosynthetics Society / v.22 no.2 / pp.47-54 / 2023
  • In this paper, we propose a technique for optimal search keyword extraction and retrieval for news article classification. The proposed technique was verified through an example of identifying trends related to North Korean construction. BigKinds, a representative Korean media platform, was used to select sample articles and extract keywords. The extracted keywords were vectorized using word embedding, and on this basis the similarity between them was examined through cosine similarity. Words with a similarity of 0.5 or higher were clustered based on the top 10 keywords by frequency. Following the BigKinds search form, queries were formed with 'OR' between keywords inside a cluster and 'AND' between clusters. In-depth analysis confirmed that meaningful articles appropriate to the original purpose were extracted. This paper is significant in that news articles can be classified to suit a user's specific purpose without modifying the existing classification system and search form.
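The clustering-and-query step can be sketched as follows. The 2-d keyword vectors are hypothetical (a real run would take them from a word-embedding model trained on the article corpus), and the greedy grouping against each cluster's first member is one possible reading of the clustering described.

```python
import numpy as np

# Hypothetical embedding vectors for candidate keywords.
keywords = {
    "북한": np.array([0.9, 0.1]),
    "평양": np.array([0.8, 0.2]),
    "건설": np.array([0.1, 0.9]),
    "시공": np.array([0.2, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster(keywords, threshold=0.5):
    """Greedily group keywords whose similarity to a cluster seed >= threshold."""
    clusters = []
    for word, vec in keywords.items():
        for c in clusters:
            if cosine(keywords[c[0]], vec) >= threshold:
                c.append(word)
                break
        else:
            clusters.append([word])
    return clusters

def to_query(clusters):
    """'OR' within a cluster, 'AND' between clusters (BigKinds search form)."""
    return " AND ".join("(" + " OR ".join(c) + ")" for c in clusters)
```

Here `to_query(cluster(keywords))` produces `(북한 OR 평양) AND (건설 OR 시공)`, the boolean form fed into the platform's search.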

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can cause high computational cost and overfitting in the model; a dimension-reduction step is therefore necessary to improve model performance. Diverse methods have been proposed, from merely lessening noise in the data (such as misspellings or informal text) to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once the feature selection algorithm identifies unimportant words, we assume that words similar to them likewise have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and word embedding construction based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words similar to those with low information gain values and again form word embeddings. Finally, the filtered text and word embeddings are fed into deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews; since Yelp shows only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods outperforms the embeddings with all the words: removing unimportant words yields better performance. However, removing too many words lowered performance. Future research should consider diverse preprocessing approaches and an in-depth analysis of word co-occurrence for measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, and the possible combinations between embedding and elimination methods could be explored.
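The information-gain filter in the first step can be sketched as follows. The tiny labeled corpus is invented, and the second variant (also dropping words whose embedding cosine similarity to a low-gain word is high) is indicated only in the closing comment.

```python
import math

# Tiny labeled corpus (1 = positive, 0 = negative); all examples hypothetical.
docs = [("great fun movie", 1), ("boring slow movie", 0),
        ("great acting", 1), ("slow plot", 0)]

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(word, docs):
    """How much knowing whether `word` occurs reduces class uncertainty."""
    labels = [y for _, y in docs]
    present = [y for text, y in docs if word in text.split()]
    absent = [y for text, y in docs if word not in text.split()]
    ig = entropy(labels)
    for part in (present, absent):
        if part:
            ig -= len(part) / len(docs) * entropy(part)
    return ig

# Words below an information-gain threshold would be eliminated before
# training the embedding; the study's second variant also removes words
# whose embedding cosine similarity to an eliminated word is high.
```

In this corpus "great" perfectly separates the classes (information gain 1 bit), while "movie" appears in both classes equally and carries no gain, so it is a candidate for elimination.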