• Title/Summary/Keyword: Cluster Topic Words


Effects of Korean Syllable Structure on English Pronunciation

  • Lee, Mi-Hyun;Ryu, Hee-Kwan
    • Proceedings of the KSPS conference / 2000.07a / pp.364-364 / 2000
  • It has been widely discussed in phonology that the syllable structure of the mother tongue influences the acquisition of a foreign language, but the topic has rarely been examined experimentally. We therefore investigated the effects of Korean syllable structure on Korean speakers' pronunciation of English words, focusing on consonant strings that are not permitted in Korean. In the experiment, the subjects were divided into three groups: native, experienced, and inexperienced speakers. The native group consisted of one male native English speaker; the experienced and inexperienced groups each consisted of three male Korean speakers, divided by their length of residence in an English-speaking country. Forty-one monosyllabic words were prepared, varying the position (onset vs. coda), type (stops, affricates, fricatives), and number of consonants. The length of each consonant cluster was then measured and, to eliminate tempo effects, normalized by the length of the word 'say' in the carrier sentence. The cluster measurement is the interval between the onset of energy (at onset or coda), which acoustically represents noise (the consonant portion), and the voicing bar (the vowel portion) in a syllable. Statistical methods were used to estimate the differences among the three groups: for each word, analysis of variance (ANOVA) and post hoc tests were carried out.
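The group comparison described above can be sketched with a one-way ANOVA; the duration values below are hypothetical stand-ins for the normalized consonant-cluster lengths, not the paper's data:

```python
from scipy import stats

# Hypothetical normalized consonant-cluster durations (cluster length divided
# by the length of "say" in the carrier sentence), one list per speaker group.
native = [0.42, 0.45, 0.43, 0.44]
experienced = [0.51, 0.48, 0.53, 0.50]
inexperienced = [0.62, 0.66, 0.59, 0.64]

# One-way ANOVA tests whether the mean normalized durations differ across groups;
# a post hoc test would then identify which pairs of groups differ.
f_stat, p_value = stats.f_oneway(native, experienced, inexperienced)
significant = p_value < 0.05
```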


Comparative Analysis of Consumer Needs for Products, Service, and Integrated Product Service : Focusing on Amazon Online Reviews (제품, 서비스, 융합제품서비스의 소비자 니즈 비교 분석 :아마존 온라인 리뷰를 중심으로)

  • Kim, Sungbum
    • The Journal of the Korea Contents Association / v.20 no.7 / pp.316-330 / 2020
  • This study uses text mining to analyze reviews of hardware products, customer service products, and ICT products that combine hardware with cloud services. We derive the keywords of each review and examine how the words used to form topics differ. A cluster analysis is performed to assign reviews to their respective clusters. We observed which keywords are used most often for each product type and, using topic modeling, found topics that express the characteristics of the products and services. In reviews of service products, we derived keywords such as "professional" and "technician", topics that suggest the excellence of the service provider. Further, we identified adjectives with positive connotations such as "favorite", "fine", "fun", "nice", "smart", "unlimited", and "useful" in reviews of the Amazon Echo, an integrated product-service. In the cluster analysis, the entire set of reviews was grouped into three clusters, and the reviews of each of the three product types fell almost exclusively into a different cluster. The study shows that consumer needs are expressed differently in reviews depending on the type of product, and suggests that in practice product planning and marketing promotion should be differentiated according to product type.
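The keyword-and-clustering pipeline described above can be sketched with TF-IDF vectors and k-means; the six toy reviews below are invented stand-ins for the three product types:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical short reviews standing in for the three product types.
reviews = [
    "great hardware solid build battery lasts long",    # hardware product
    "technician was professional fast repair service",  # service product
    "alexa skills cloud music smart speaker fun",       # integrated product-service
    "sturdy hardware nice screen battery",              # hardware product
    "professional support service helpful technician",  # service product
    "smart assistant cloud unlimited music useful",     # integrated product-service
]

# Represent each review as a TF-IDF vector, then group reviews into 3 clusters.
vectors = TfidfVectorizer().fit_transform(reviews)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
```

With vocabulary this cleanly separated, each product type's reviews land in their own cluster, mirroring the paper's finding that the three review types belong to different clusters.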

Application of Urban Computing to Explore Living Environment Characteristics in Seoul : Integration of S-Dot Sensor and Urban Data

  • Daehwan Kim;Woomin Nam;Keon Chul Park
    • Journal of Internet Computing and Services / v.24 no.4 / pp.65-76 / 2023
  • This paper identifies patterns in living-environment elements (PM2.5, PM10, noise) throughout Seoul, and the urban characteristics that affect them, using the big data of Seoul's S-Dot sensors, which have recently drawn much attention. In other words, it proposes a big-data-based urban computing research methodology and research direction for confirming the relationship between urban characteristics and the living environments that directly affect citizens. The temporal range is 2020 to 2021, the period for which S-Dot time-series data are available, and the spatial range is the whole of Seoul on a 500 m × 500 m grid. First, as part of analyzing living-environment patterns, simple trends are identified through EDA, and cluster analysis is conducted based on those trends. Then, to derive the specific urban-planning factors of each cluster, basic statistical analyses such as ANOVA, OLS, and MNL are conducted to confirm more specific characteristics. As a result, cluster patterns of the environment elements (PM2.5, PM10, noise) and the urban factors that affect them are identified, and some areas show relatively high or low long-term living-environment values compared with other regions. The results can serve as a reference for urban-planning management measures for areas with vulnerable living environments, and this exploratory study is expected to provide directions for the urban computing field, especially in relation to environmental data.
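The regression step linking urban factors to a living-environment element can be sketched with OLS; the predictors (road density, green-area ratio) and the synthetic PM2.5 values below are assumptions for illustration, not the paper's actual variables:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical grid-cell data: [road density, green-area ratio] per 500 m x 500 m
# cell, with synthetic PM2.5 means that rise with road density and fall with
# green area (effect sizes +15 and -10, plus unit-variance noise).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 2))
pm25 = 20 + 15 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 1, 100)

# OLS fit: the estimated coefficients recover the sign of each urban factor's
# effect on the living-environment element.
model = LinearRegression().fit(X, pm25)
road_coef, green_coef = model.coef_
```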

Empirical Comparison of Word Similarity Measures Based on Co-Occurrence, Context, and a Vector Space Model

  • Kadowaki, Natsuki;Kishida, Kazuaki
    • Journal of Information Science Theory and Practice / v.8 no.2 / pp.6-17 / 2020
  • Word similarity is often measured to enhance system performance in information retrieval and related areas. This paper reports an experimental comparison of word similarity measures computed for 50 intentionally selected words from a Reuters corpus. Three families of measures were compared: (1) co-occurrence-based similarity measures (with co-occurrence frequency counted as the number of documents or of sentences), (2) context-based distributional similarity measures obtained from latent Dirichlet allocation (LDA), nonnegative matrix factorization (NMF), and the Word2Vec algorithm, and (3) similarity measures computed from the tf-idf weights of each word according to a vector space model (VSM). The Pearson correlation coefficient between the VSM-based similarity measures and the co-occurrence-based similarity measures counted by documents was the highest. Group-average agglomerative hierarchical clustering was also applied to the similarity matrices computed by the individual measures. An evaluation of the cluster sets against an answer set revealed that the VSM- and LDA-based similarity measures performed best.
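A minimal sketch of correlating a document-level co-occurrence measure with VSM (tf-idf) cosine similarity, on an invented four-word, four-document toy corpus:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus; each string is one "document".
docs = [
    "oil price market trade",
    "oil energy price barrel",
    "bank interest rate market",
    "bank loan interest credit",
]
words = ["oil", "price", "bank", "interest"]

# Co-occurrence measure: number of documents containing both words of a pair.
contains = {w: np.array([w in d.split() for d in docs]) for w in words}

# VSM measure: cosine similarity between tf-idf *term* vectors
# (rows of the transposed term-by-document matrix).
tfidf = TfidfVectorizer(vocabulary=words).fit_transform(docs).T.toarray()
vsm_sim = cosine_similarity(tfidf)

pairs = [(i, j) for i in range(len(words)) for j in range(i + 1, len(words))]
cooc = [int((contains[words[i]] & contains[words[j]]).sum()) for i, j in pairs]
vsm = [vsm_sim[i, j] for i, j in pairs]

# Pearson correlation between the two lists of pairwise similarity values.
r, _ = pearsonr(cooc, vsm)
```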

Research Trends of Studies Related to the Nature of Science in Korea Using Semantic Network Analysis (언어 네트워크 분석을 이용한 과학의 본성에 관한 국내연구 동향)

  • Lee, Sang-Gyun
    • Journal of the Korean Society of Earth Science Education / v.9 no.1 / pp.65-87 / 2016
  • The purpose of this study is to examine Korean journals related to science education in order to analyze domestic research trends on the nature of science. The subjects of the study are articles in journals at the Korea Citation Index level (KCI-listed and KCI listing candidates) that can be retrieved through the RISS service with the Korean key phrase for "nature of science". Descriptive statistical analysis is used to count the research articles by year and by journal, and semantic network analysis is conducted across the articles, including word-cloud analysis of keyword frequencies, centrality analysis, co-occurrence analysis, and cluster dendrogram analysis. The results show that 91 research papers were published in 25 journals from 1991 to 2015; the two major journals published more than 50% of the total. Key phrases such as 'analysis', 'recognition', 'lessons', 'science textbook', 'history of science', and 'influence' are the most frequently used across the studies. Finally, small language networks appear concurrently, such as [nature of science - high school student - recognition], [explicit - lesson - effect], and [elementary school - science textbook - analysis]. Research topics have gradually diversified; however, many studies still put their focus on analysis and research aspects, and there has been little research on teaching and learning methods.
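The small language networks reported above can be sketched as a keyword co-occurrence graph with centrality analysis; the edge weights below are hypothetical co-occurrence counts:

```python
import networkx as nx

# Hypothetical keyword co-occurrence pairs with counts, mimicking the small
# language networks reported in the paper.
edges = [
    ("nature of science", "high school student", 5),
    ("high school student", "recognition", 4),
    ("explicit", "lesson", 6),
    ("lesson", "effect", 3),
    ("elementary school", "science textbook", 7),
    ("science textbook", "analysis", 8),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Degree centrality highlights the hub keyword of each sub-network.
centrality = nx.degree_centrality(G)
top_keyword = max(centrality, key=centrality.get)
```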

Unsupervised Motion Learning for Abnormal Behavior Detection in Visual Surveillance (영상감시시스템에서 움직임의 비교사학습을 통한 비정상행동탐지)

  • Jeong, Ha-Wook;Chang, Hyung-Jin;Choi, Jin-Young
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.45-51 / 2011
  • In this paper, we propose an unsupervised learning method for effectively modeling motion trajectory patterns. In our approach, observations of an object along a trajectory are treated as words in a document for the latent Dirichlet allocation (LDA) algorithm, which is used in natural language processing to cluster words into topics. This allows topics (e.g., go straight, turn left, turn right) to be clustered effectively in complex scenes such as crossroads. We then learn the patterns of word sequences in each cluster using the Baum-Welch algorithm, which estimates the unknown parameters of a hidden Markov model (HMM). Abnormality can be evaluated with the forward algorithm by comparing an input sequence against the learned sequence models. Experimental results show that the semantic region modeling is robust to noise in various scenes.
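The abnormality evaluation can be sketched with a hand-rolled forward algorithm; the HMM parameters below are hand-set assumptions standing in for Baum-Welch estimates, with three "motion words" as the observation alphabet:

```python
import numpy as np

# Hypothetical HMM parameters (in the paper these would come from Baum-Welch
# training on sequences of clustered motion words).
pi = np.array([0.8, 0.2])        # initial state distribution
A = np.array([[0.9, 0.1],        # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],   # emission probabilities over 3 motion words
              [0.1, 0.2, 0.7]])

def sequence_loglik(obs):
    """Forward algorithm: log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(np.log(alpha.sum()))

# A sequence resembling the training patterns scores higher than a sequence of
# rare motion words, so the latter is flagged as abnormal.
normal_seq = [0, 0, 0, 1, 0]
abnormal_seq = [2, 2, 2, 2, 2]
is_abnormal = sequence_loglik(abnormal_seq) < sequence_loglik(normal_seq)
```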

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • In line with the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining generally proceeds in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on second-step applications such as document classification, clustering, and topic modeling. However, with the recognition that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve that quality by preserving the meaning of words and documents when text data are represented as vectors. Unlike structured data, which can be used directly in a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a specific dimension while maintaining algebraic properties, as a way of structuring text data. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding grows rapidly, many algorithms have been developed to support it. Among them, doc2vec, which extends word2vec and embeds each document into one vector, is the most widely used. However, traditional document embedding methods such as doc2vec generate the vector for each document from all of the words it contains, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to represent a complex document with multiple subjects accurately. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis techniques, but since that is not the core of the proposal, we present the method for documents with predefined keywords. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. Specifically, all text in a document is tokenized and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, multiple vectors are generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector, whereas the proposed multi-vector method vectorizes complex documents more accurately by eliminating that interference.
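Steps (4)-(5), clustering a document's keyword vectors and deriving one vector per subject, can be sketched as follows; the 4-dimensional keyword embeddings are invented, and using cluster centroids as the multiple document vectors is one plausible reading of the generation step, not necessarily the paper's exact formula:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 4-dimensional word embeddings for the keywords of one document
# (in the proposed method these would come from a trained word-embedding model).
keyword_vectors = np.array([
    [0.9, 0.1, 0.0, 0.0],   # e.g. "retrieval"  (subject 1)
    [0.8, 0.2, 0.1, 0.0],   # e.g. "indexing"   (subject 1)
    [0.0, 0.1, 0.9, 0.8],   # e.g. "neural"     (subject 2)
    [0.1, 0.0, 0.8, 0.9],   # e.g. "network"    (subject 2)
])

# Cluster the document's keyword vectors to separate its subjects, then take
# each cluster centroid as one of the document's multiple vectors.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(keyword_vectors)
doc_vectors = km.cluster_centers_  # one vector per detected subject
```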

A Study on the Deduction of Social Issues Applying Word Embedding: With an Emphasis on News Articles Related to the Disabled (단어 임베딩(Word Embedding) 기법을 적용한 키워드 중심의 사회적 이슈 도출 연구: 장애인 관련 뉴스 기사를 중심으로)

  • Choi, Garam;Choi, Sung-Pil
    • Journal of the Korean Society for Information Management / v.35 no.1 / pp.231-250 / 2018
  • In this paper, we propose a new methodology for extracting and formalizing the topics of social issues at a specific time, using a set of keywords extracted automatically from online news articles. To do this, we first extracted keywords by applying TF-IDF, selected through a series of comparative experiments on various statistical weighting schemes that can measure the importance of individual words in a large set of texts. To effectively calculate the semantic relations between the extracted keywords, a set of word embedding vectors was constructed from about 1,000,000 separately collected news articles. The extracted keywords were thus quantified as numerical vectors and clustered by the K-means algorithm. A qualitative in-depth analysis of each resulting keyword cluster showed that most clusters were appropriate topics, with sufficient semantic concentration for labels to be assigned to them easily.

A Comparative Study using Bibliometric Analysis Method on the Reformed Theology and Evangelicalism (개혁신학과 복음주의에 관한 계량서지학적 비교 연구)

  • Yoo, Yeong Jun;Lee, Jae Yun
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.29 no.3 / pp.41-63 / 2018
  • This study analyzed the journals, index terms, and authors of reformed theology, evangelicalism, and a neutral theological position using bibliometric methods, namely average-linkage clustering, nearest-neighbor centrality, and cosine similarity between profiles. In particular, when analyzing the relationships between authors, we interpreted the research topics by finding the key index terms shared between authors. In the journal analysis, nine journals largely fell into the two clusters of reformed theology and evangelicalism, but the Presbyterian Theological Quarterly, generally considered a reformed journal, fell into the evangelical cluster. In the index-term analysis, "reformed theology" and "evangelicalism" were the key terms representing the two clusters. In the author analysis, nine clusters were obtained: Presbyterian theologians studying reformed theology formed four clusters and non-Presbyterian theologians formed five. The two clusters of reformed theology and evangelicalism thus appeared consistently across all analyses of journals, index terms, and authors.
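The profile cosine-similarity analysis can be sketched as follows; the author-by-index-term frequency profiles are hypothetical:

```python
import numpy as np

# Hypothetical author profiles: frequency of each index term in an author's papers.
# Terms: [reformed theology, covenant, evangelicalism, revival]
profiles = {
    "author_A": np.array([5, 4, 0, 1]),   # reformed-leaning profile
    "author_B": np.array([4, 5, 1, 0]),   # reformed-leaning profile
    "author_C": np.array([0, 1, 5, 4]),   # evangelical-leaning profile
}

def cosine(u, v):
    """Cosine similarity between two term-frequency profiles."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Authors with similar term profiles cluster together; A and B are far more
# similar to each other than either is to C.
sim_ab = cosine(profiles["author_A"], profiles["author_B"])
sim_ac = cosine(profiles["author_A"], profiles["author_C"])
```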

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare that urgently need to be solved in modern society, researchers have usually collected opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Owing to cost, a large number of survey replies are seldom gathered, and in some cases it is hard to find experts on a specific social issue, so the sample set is often small and possibly biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions, because each expert has a subjective point of view and a different background. In such cases it is considerably hard to figure out what the current social issues are and which of them are really important. To overcome the shortcomings of this approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords, representing social issues and problems, from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting news articles and extracting their text, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics over time from the social keywords based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic.
Technically, using a topic model such as latent Dirichlet allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". Even then, it is non-trivial to understand what happened to the unemployment problem in our society: looking only at social keywords, we have no idea of the detailed events occurring around us. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs, while LDA extracts a set of topics from the documents. Through our matching process, each paragraph is assigned to the topic it best matches, so each topic ends up with several best-matched paragraphs. Suppose, for instance, the topic "Unemployment Problem" has the best-matched paragraph "Up to 300 workers lost their jobs in XXX company at Seoul". We can then grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly.
Through this prototype system, we have detected various social issues appearing in our society and demonstrated the effectiveness of our proposed methods in experiments. A proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
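The paragraph-to-topic matching can be sketched as scoring each paragraph by the topic's term probabilities, reusing the paper's own Topic1 example; the unigram scoring, the second topic, and the floor probability EPS are illustrative assumptions, not the paper's exact generative model:

```python
import numpy as np

# Topic1 follows the paper's example; the second topic is hypothetical.
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Housing Problem": {"rent": 0.5, "housing": 0.3, "price": 0.2},
}
EPS = 1e-6  # floor probability for terms absent from a topic (an assumption)

def paragraph_loglik(tokens, topic_terms):
    """Log P(paragraph | topic) under a simple unigram view of the topic."""
    return sum(np.log(topic_terms.get(t, EPS)) for t in tokens)

# Assign the paragraph to the topic it best matches.
paragraph = "up to 300 workers faced layoff as the business cut jobs".split()
best_topic = max(topics, key=lambda t: paragraph_loglik(paragraph, topics[t]))
```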