• Title/Summary/Keyword: word frequency matrix

21 search results

English Bible Text Visualization Using Word Clouds and Dynamic Graphics Technology (단어 구름과 동적 그래픽스 기법을 이용한 영어성경 텍스트 시각화)

  • Jang, Dae-Heung
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.3
    • /
    • pp.373-386
    • /
    • 2014
  • A word cloud is a visualization of word frequency in a given text; the importance of each word is shown by font size or color. This plot is useful for quickly perceiving the most prominent words and for locating a word alphabetically to determine its relative prominence. With dynamic graphics, we can track how the prominent words and their frequencies change as the selection of chapters in the text changes. We define the word frequency matrix, in which rows are the chapters of the text and columns are ranks by word frequency, and we draw a word frequency matrix plot from this matrix. Dynamic graphics can show how the word frequency matrix changes as the selected range of word ranks changes. We apply word clouds and dynamic graphics technology to visualize the text of the English Bible.
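As a rough illustration of the matrix described above, the following minimal Python sketch (not the author's code; the chapter texts and whitespace tokenizer are placeholders) builds a chapter-by-rank word frequency matrix:

```python
# Rows are chapters, columns are frequency ranks; each cell holds the
# frequency of the chapter's k-th most frequent word.
from collections import Counter
import numpy as np

chapters = [
    "in the beginning god created the heaven and the earth",
    "thus the heavens and the earth were finished",
]  # hypothetical chapter texts

max_rank = 5  # keep only the top-5 ranks per chapter
freq_matrix = np.zeros((len(chapters), max_rank), dtype=int)

for i, chapter in enumerate(chapters):
    counts = Counter(chapter.split())      # naive whitespace tokenizer
    ranked = counts.most_common(max_rank)  # words sorted by frequency
    for j, (word, freq) in enumerate(ranked):
        freq_matrix[i, j] = freq

print(freq_matrix)  # one row per chapter, one column per rank
```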

Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

  • Al-Sabahi, Kamal;Zuping, Zhang;Kang, Yang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.1
    • /
    • pp.254-276
    • /
    • 2019
  • Since the amount of information on the internet is growing rapidly, it is not easy for a user to find information relevant to his or her query. To tackle this issue, researchers are paying much attention to document summarization. The key to any successful document summarizer is a good document representation, and traditional approaches based on word overlap mostly fail to produce one. Word embeddings have shown good performance, allowing words to match at a semantic level, but naively concatenating word embeddings makes common words dominant, which in turn diminishes the representation quality. In this paper, we employ word embeddings to improve the weighting schemes used to calculate the Latent Semantic Analysis input matrix. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. They are modified versions of the augmented weight and the entropy frequency that combine the strengths of traditional weighting schemes and word embeddings. The proposed approach is evaluated on three English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization. Experimental results on the three datasets show that the proposed model achieved competitive performance compared to the state of the art, leading to the conclusion that it provides a better document representation and, as a result, a better document summary.
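For intuition, here is a hypothetical sketch of an embedding-weighted LSA summarizer in the spirit of the abstract; the paper's exact augmented-weight and entropy-frequency schemes are not reproduced, and the centroid-similarity weight below is a simplified stand-in:

```python
# Build a term-by-sentence matrix whose cells are term frequencies scaled
# by an embedding-based weight, then rank sentences by their loadings on
# the top singular vectors.
import numpy as np

def lsa_sentence_scores(tf, term_vecs, k=2):
    """tf: term-by-sentence frequency matrix (n_terms x n_sents);
    term_vecs: one embedding per term (n_terms x dim)."""
    # Stand-in embedding weight: similarity of each term to the centroid
    # of all term embeddings.
    centroid = term_vecs.mean(axis=0)
    sims = term_vecs @ centroid / (
        np.linalg.norm(term_vecs, axis=1) * np.linalg.norm(centroid) + 1e-12)
    weighted = tf * sims[:, None]              # weight each term's row
    u, s, vt = np.linalg.svd(weighted, full_matrices=False)
    # Score each sentence by its length in the top-k latent dimensions.
    return np.sqrt((s[:k, None] ** 2 * vt[:k, :] ** 2).sum(axis=0))

# toy data: 4 terms, 3 sentences, 5-dim random "embeddings"
rng = np.random.default_rng(0)
scores = lsa_sentence_scores(rng.integers(0, 3, (4, 3)).astype(float),
                             rng.normal(size=(4, 5)))
print(scores.argsort()[::-1])  # sentence indices, best summary first
```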

Research on Designing Korean Emotional Dictionary using Intelligent Natural Language Crawling System in SNS (SNS대상의 지능형 자연어 수집, 처리 시스템 구현을 통한 한국형 감성사전 구축에 관한 연구)

  • Lee, Jong-Hwa
    • The Journal of Information Systems
    • /
    • v.29 no.3
    • /
    • pp.237-251
    • /
    • 2020
  • Purpose: This research studied a hierarchical Hangul emotion index by organizing the emotions that SNS users have in mind. In a preliminary study, the researcher reinterpreted the English-based emotional standard of Plutchik (1980) in Korean and studied hashtags with implicit meaning on SNS. To build a multidimensional emotion dictionary and classify three-dimensional emotions, emotion seeds were selected to compose seven emotion sets, and an emotion word dictionary was constructed by collecting the SNS hashtags derived from each emotion seed. We also explore the priority of each Hangul emotion index. Design/methodology/approach: In transforming the words of each sentence into a matrix through a vector process, weights were extracted using TF-IDF (term frequency-inverse document frequency), and the NMF (nonnegative matrix factorization) algorithm was used to reduce the dimensionality of the matrix within each emotion set. The emotional dimensions were resolved using the characteristic values of the emotion words, and the cosine distance algorithm was used to measure the similarity between emotion-word vectors within an emotion set. Findings: Customer needs analysis is a means of reading changes in emotion, and Korean emotion-word research serves those needs. In addition, the ranking of emotion words within an emotion set provides a criterion for reading the depth of an emotion. We believe that the sentiment index of this research, by providing companies with effective information for emotional marketing, will expand and add value to new business opportunities. Moreover, if the emotion dictionary is eventually connected to the emotional DNA of a product, it will become possible to define "emotional DNA", the set of emotions that a product should have.
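The processing pipeline named in the abstract (TF-IDF weighting, NMF dimension reduction, cosine similarity) might look roughly like this in scikit-learn; the documents are hypothetical and this is not the authors' code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical documents for one emotion set (e.g., hashtag contexts)
docs = [
    "joy happy delight festival",
    "happy smile joy weekend",
    "delight festival smile",
]

tfidf = TfidfVectorizer().fit_transform(docs)   # doc-term TF-IDF matrix
W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(tfidf)
print(cosine_similarity(W))  # pairwise similarity in the reduced space
```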

A Study on Multi-frequency Keyword Visualization based on Co-occurrence (다중빈도 키워드 가시화에 관한 연구)

  • Lee, HyunChang;Shin, SeongYoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.103-104
    • /
    • 2018
  • Recently, interest in data analysis has increased as big data has grown in importance. In particular, as social media data and academic research communities have become more active, their analysis has become more important. In this study, co-word analysis was conducted on altmetrics articles collected from 2012 to 2017. From this analysis, a co-occurrence network map is derived from the keywords and the emphasized keywords are extracted.
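A rough sketch of the co-word analysis described above, under the assumption that per-article keyword lists are available (the keyword lists below are hypothetical, not the paper's data):

```python
# Count keyword co-occurrences per article, then keep the strongest pairs
# as edges of a co-occurrence network.
from collections import Counter
from itertools import combinations

articles = [
    ["altmetrics", "twitter", "citation"],
    ["altmetrics", "citation", "impact"],
    ["twitter", "impact", "altmetrics"],
]

pair_counts = Counter()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# edges with co-occurrence frequency >= 2 form the network map;
# the most connected nodes are the "emphasized" keywords
edges = [(a, b, n) for (a, b), n in pair_counts.items() if n >= 2]
print(edges)
```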


A Study on Multi-frequency Keyword Visualization based on Co-occurrence (다중빈도 키워드 가시화에 관한 연구)

  • Lee, HyunChang;Shin, SeongYoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.424-425
    • /
    • 2018
  • Recently, interest in data analysis has increased as big data has grown in importance. In particular, as social media data and academic research communities have become more active, their analysis has become more important. In this study, co-word analysis was conducted on altmetrics articles collected from 2012 to 2017. From this analysis, a co-occurrence network map is derived from the keywords and the emphasized keywords are extracted.


Comparison of error characteristics of final consonant at word-medial position between children with functional articulation disorder and normal children (기능적 조음장애아동과 일반아동의 어중자음 연쇄조건에서 나타나는 어중종성 오류 특성 비교)

  • Lee, Ran;Lee, Eunju
    • Phonetics and Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.19-28
    • /
    • 2015
  • This study investigated final consonant error characteristics at word-medial position in children with functional articulation disorder. Data were collected from 11 children with functional articulation disorder and 11 normal children, ages 4 to 5. The speech samples were collected with a naming test using seventy-five words covering every possible bi-consonant sequence at word-medial position. The results were as follows. First, the percentage of correct word-medial final consonants was lower in children with functional articulation disorder than in normal children. Second, there were significant differences between the two groups in omission, substitution, and assimilation errors. Children with functional articulation disorder showed a high frequency of omission and regressive assimilation errors, with alveolarization the most frequent regressive assimilation error, whereas normal children showed a high frequency of regressive assimilation errors, with bilabialization the most frequent. Finally, when errors were analyzed by the articulation manner, articulation place, and phonation type of the word-medial initial consonant, both groups showed a high error rate in the stop-stop condition. The error rate of the word-medial final consonant was high when the word-medial initial consonant was an alveolar or alveopalatal sound. Furthermore, more errors occurred when the initial sound was a fortis or aspirated sound than when it was a lenis sound. These results provide practical error characteristics of word-medial final consonants in children with speech sound disorders.

Empirical Comparison of Word Similarity Measures Based on Co-Occurrence, Context, and a Vector Space Model

  • Kadowaki, Natsuki;Kishida, Kazuaki
    • Journal of Information Science Theory and Practice
    • /
    • v.8 no.2
    • /
    • pp.6-17
    • /
    • 2020
  • Word similarity is often measured to enhance system performance in information retrieval and other related areas. This paper reports an experimental comparison of word similarity measures computed from 50 intentionally selected words from a Reuters corpus. There were three targets: (1) co-occurrence-based similarity measures (for which co-occurrence frequency is counted as the number of documents or sentences), (2) context-based distributional similarity measures obtained from latent Dirichlet allocation (LDA), nonnegative matrix factorization (NMF), and the Word2Vec algorithm, and (3) similarity measures computed from the tf-idf weights of each word according to a vector space model (VSM). The Pearson correlation coefficient was highest for the pair consisting of the VSM-based similarity measure and the co-occurrence-based similarity measure counted over documents. Group-average agglomerative hierarchical clustering was also applied to the similarity matrices computed by the individual measures. An evaluation of the cluster sets against an answer set revealed that the VSM- and LDA-based similarity measures performed best.
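As a toy illustration of the comparison design, the sketch below computes one co-occurrence-based measure (Dice over document sets) and one VSM-style cosine measure on random binary data, then correlates them; the paper's actual measures and tf-idf weighting are richer than this:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# binary term-by-document incidence matrix for 5 terms, 50 documents
X = (rng.random((5, 50)) < 0.3).astype(float)

def dice(i, j):
    # co-occurrence similarity: Dice coefficient over document sets
    both = (X[i] * X[j]).sum()
    return 2 * both / (X[i].sum() + X[j].sum())

def cosine(i, j):
    # VSM similarity: cosine between the terms' document vectors
    return X[i] @ X[j] / (np.linalg.norm(X[i]) * np.linalg.norm(X[j]) + 1e-12)

pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]
r, _ = pearsonr([dice(i, j) for i, j in pairs],
                [cosine(i, j) for i, j in pairs])
print(f"Pearson r between the two measures: {r:.3f}")
```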

Perceptual discrimination of wh-scopes in Gyeongsang Korean (경상 방언 의문문 작용역의 지각 구분)

  • Yun, Weonhee
    • Phonetics and Speech Sciences
    • /
    • v.14 no.2
    • /
    • pp.1-10
    • /
    • 2022
  • A wh-phrase positioned in an embedded clause can be interpreted as having a matrix scope if the sentence is produced with proper prosodic structures such as the wh-intonation. In a previous experiment, a sentence with a wh-phrase in an embedded clause was given to 40 speakers of Gyeongsang Korean. A script containing the sentence was provided to induce a matrix scope interpretation for the wh-phrase. These 40 utterances were prepared as stimuli for a perception test to verify whether the wh-phrases in the stimuli were perceived as having matrix scopes. Each utterance was played thrice to 24 subjects. The results showed that more than half of the 72 responses indicated a preference for an embedded scope rather than a matrix scope in 20 of the utterances. A multiple linear regression analysis showed that the matrix scope responses were best predicted by the magnitude of the pitch prominence in a prosodic word consisting of an embedded verb and a complementizer. The pitch prominence was calculated by subtracting the fundamental frequency (F0) at the right edge of the prosodic word from the peak F0 in the same prosodic word. The smaller the magnitude, the more matrix responses there were. These results suggest that the categorical perception of wh-scopes is based on the magnitude of pitch prominence.
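For concreteness, the pitch-prominence measure reduces to a single subtraction over the F0 contour of the prosodic word; the contour values below are placeholders, not data from the study:

```python
import numpy as np

# hypothetical F0 contour (Hz) across one prosodic word
f0_contour = np.array([180.0, 210.0, 240.0, 220.0, 190.0])

pitch_prominence = f0_contour.max() - f0_contour[-1]  # peak F0 - right-edge F0
print(pitch_prominence)  # smaller values predicted more matrix-scope responses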

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements through the analysis of various sources of text data. Research has examined not only the relationship between text data and stock price fluctuations, but also the trading of stocks based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix: words with too little frequency or importance are removed, and words are selected according to how much they contribute to correctly classifying a document. The basic idea has been to collect all the documents to be analyzed and to select and use the words that influence classification. In this study, we analyze the documents for each individual stock and select the words that are irrelevant for all categories as neutral words. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The underlying idea is that the presence of a neutral word itself is only weakly related to stock movement, while the words surrounding it are more likely to affect stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, and we excluded words that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization, used three months of news data as training data, and applied the remaining one month of articles to the model to predict the next day's stock price movements, building models with SVM, Boosting, and Random Forest. The stock market was open for a total of 80 days during the four months (2016/02/01-2016/05/31); the initial 60 days were used as the training set and the remaining 20 days as the test set. The proposed neutral-word-based algorithm showed better classification performance than word selection based on sparsity. The suggested method differs from sparsity-based word extraction in that it uses not only the news articles for the corresponding stock but also news for other stocks to determine the words to extract: it removes not only the words that appear in both rising and falling cases but also the words common to news about other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
The limitations of this study are that stock price prediction was set up as classifying rises and falls, and that the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to show investment performance because stock price fluctuation and profit rate may differ. Further research using more stocks and predicting yields through trading simulation is therefore needed.
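A simplified sketch of the neutral-word idea on toy data follows; the window size, neutral-word criterion, and classifier settings are assumptions, not the authors' implementation:

```python
# Pick words that appear in both "up" and "down" documents as neutral,
# build the term-document matrix from the words surrounding each neutral
# word, and train a classifier on it.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# hypothetical news snippets with next-day movement labels (1=up, 0=down)
docs = ["the firm announced earnings beat forecasts",
        "the firm announced a recall of products",
        "analysts said earnings may rise",
        "analysts said losses may widen"]
labels = [1, 0, 1, 0]

tokens = [d.split() for d in docs]
up = set(w for t, y in zip(tokens, labels) if y == 1 for w in t)
down = set(w for t, y in zip(tokens, labels) if y == 0 for w in t)
neutral = up & down  # words appearing in both classes

def context(words, window=1):
    """Keep a +/-1 word window around each neutral word occurrence."""
    keep = []
    for i, w in enumerate(words):
        if w in neutral:
            keep += words[max(0, i - window):i] + words[i + 1:i + 1 + window]
    return " ".join(keep)

X = CountVectorizer().fit_transform(context(t) for t in tokens)
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(X))  # in-sample sanity check on the toy data
```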

Representation of ambiguous word in Latent Semantic Analysis (LSA모형에서 다의어 의미의 표상)

  • 이태헌;김청택
    • Korean Journal of Cognitive Science
    • /
    • v.15 no.2
    • /
    • pp.23-31
    • /
    • 2004
  • Latent Semantic Analysis (LSA; Landauer & Dumais, 1997) is a technique for representing the meanings of words using co-occurrence information from words appearing in the same context, usually a sentence or a document. In LSA, a word is represented as a point in a multidimensional space where each axis represents a context, and a word's meaning is determined by its frequency in each context. The space is reduced by singular value decomposition (SVD). The present study elaborates upon LSA for the representation of ambiguous words. The proposed LSA applies a rotation of the axes in the document space, which makes it possible to interpret the meanings of the axes. A simulation study was conducted to illustrate the performance of LSA in representing ambiguous words. In the simulation, texts containing an ambiguous word were first extracted and LSA with rotation was performed. By comparing loading matrices, we categorized the texts according to meaning. The first meaning of an ambiguous word was represented by LSA using the matrix excluding the vectors for the other meanings, and the other meanings were represented in the same way. The simulation showed that this way of representing an ambiguous word can identify the word's meanings. This result suggests that LSA with axis rotation can be applied to the representation of ambiguous words. We discuss how the use of rotation makes it possible to represent the multiple meanings of ambiguous words, and how this technique can be applied to web searching.
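A small sketch (assumed, not the paper's code) of LSA followed by a varimax rotation of the document-space loadings, which is one standard way to make axes interpretable:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Plain varimax rotation of a loadings matrix (p x k)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return loadings @ R

# toy term-document count matrix (6 terms x 4 documents)
rng = np.random.default_rng(0)
X = rng.integers(0, 4, (6, 4)).astype(float)

U, S, Vt = np.linalg.svd(X, full_matrices=False)
doc_loadings = (Vt.T * S)[:, :2]  # documents in the top-2 LSA space
print(varimax(doc_loadings))      # rotated loadings, easier to interpret
```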
