• Title/Summary/Keyword: co-occurrence words


A Method for Detection and Correction of Pseudo-Semantic Errors Due to Typographical Errors (철자오류에 기인한 가의미 오류의 검출 및 교정 방법)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.10
    • /
    • pp.173-182
    • /
    • 2013
  • Typographical mistakes made while drafting electronic documents are more common than any other type of error. Most errors caused by mistyping remain simple typos, but a considerable number develop into grammatical or semantic errors. Among these, pseudo-semantic errors caused by typographical errors show a more noticeable mismatch with the senses of the surrounding context words in a sentence than pure semantic errors do. Because of this prominent contextual discrepancy, such errors can be detected and corrected by a simple algorithm based on co-occurrence frequency. I propose a co-occurrence-frequency-based method for detecting and correcting semantic errors caused by typos. In the proposed method, co-occurrence frequency is counted only for words in an immediate dependency relation, and the cosine similarity measure is used to detect pseudo-semantic errors. The presented experimental results suggest that the proposed method can improve the detection rate of an overall proofreading system by about 2~3%.
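A minimal Python sketch of the detection step described in this abstract, assuming a toy co-occurrence table over dependency-linked word pairs; the counts, vocabulary, candidate list, and similarity margin are illustrative assumptions rather than the paper's actual resources.

```python
import math
from collections import defaultdict

# Toy co-occurrence counts over immediately dependency-linked word pairs;
# in the paper these would be collected from a reference corpus.
cooc = defaultdict(lambda: defaultdict(int))
cooc["drink"].update({"coffee": 120, "water": 300, "tea": 80})
cooc["drive"].update({"car": 250, "truck": 90, "bus": 40})

vocab = ["coffee", "water", "tea", "car", "truck", "bus"]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fit(word, neighbours):
    """Cosine similarity between `word`'s co-occurrence vector and an
    indicator vector of its dependency neighbours in the sentence."""
    wv = [cooc[word][v] for v in vocab]
    nv = [1.0 if v in neighbours else 0.0 for v in vocab]
    return cosine(wv, nv)

def suggest_correction(word, neighbours, candidates, margin=0.2):
    """Flag `word` as a pseudo-semantic error if some typo-correction
    candidate fits the dependency context clearly better."""
    best = max(candidates, key=lambda c: fit(c, neighbours))
    return best if fit(best, neighbours) - fit(word, neighbours) > margin else None

# 'drive a coffee' -> the candidate 'drink' fits the dependent 'coffee' better.
print(suggest_correction("drive", {"coffee"}, ["drink"]))
```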

The Trends of Artiodactyla Researches in Korea, China and Japan using Text-mining and Co-occurrence Analysis of Words (텍스트마이닝과 동시출현단어분석을 이용한 한국, 중국, 일본의 우제목 연구 동향 분석)

  • Lee, Byeong-Ju;Kim, Baek-Jun;Lee, Jae Min;Eo, Soo Hyung
    • Korean Journal of Environment and Ecology
    • /
    • v.33 no.1
    • /
    • pp.9-15
    • /
    • 2019
  • Artiodactyla, the even-toed ungulates, are widely distributed worldwide. In recent years, wild Artiodactyla species have attracted public attention due to the rapid increase in crop damage and road-kill caused by species such as water deer and wild boar, and the decline of species such as the long-tailed goral and musk deer. Despite this attention, however, there have been few studies on Artiodactyla in Korea, and none have analyzed research trends, making it difficult to understand the actual problems. Many recent trend studies have used text mining and co-occurrence analysis to increase objectivity in classifying research subjects by extracting keywords from the literature and quantifying the relevance between words. In this study, we analyzed the text of research articles from three countries (Korea, China, and Japan) through text mining and co-occurrence analysis and compared the research subjects of each country. We extracted 199 words from 665 Artiodactyla-related articles from the three countries. Co-occurrence analysis of the extracted words produced three word clusters. We determined that cluster 1 was related to "habitat condition and ecology", cluster 2 to "disease", and cluster 3 to "conservation genetics and molecular ecology". Comparing the occurrence rates of the word clusters in each country showed that they were relatively even in China and Japan, whereas in Korea cluster 2, related to "disease", was dominant (69%). In the regression analysis of the number of words per year in each cluster, the word counts in China and Japan increased evenly across clusters, while in Korea the rate of increase of cluster 2 was five times that of the other clusters. These results indicate that Korean research on Artiodactyla has focused on disease more than research in China and Japan, with few researchers addressing other subjects such as habitat characteristics, behavior, and molecular ecology. To manage the damage caused by Artiodactyla and to establish reasonable policies for protecting endangered species, more research on wild Artiodactyla is needed to accumulate basic ecological data.
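A rough sketch of the kind of co-occurrence analysis described above, with made-up article keyword sets and modularity-based community detection standing in for the paper's clustering step (the abstract does not specify the algorithm used).

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Illustrative stand-in for the 665 articles: each article reduced to the
# set of keywords extracted from it by text mining.
articles = [
    {"habitat", "home_range", "water_deer"},
    {"disease", "wild_boar", "virus"},
    {"population_genetics", "goral", "microsatellite"},
    {"habitat", "water_deer", "road_kill"},
    {"disease", "virus", "surveillance"},
]

# Words appearing in the same article are linked; edge weights accumulate
# the number of shared articles.
G = nx.Graph()
for words in articles:
    for w1, w2 in combinations(sorted(words), 2):
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1
        else:
            G.add_edge(w1, w2, weight=1)

# Modularity-based communities as a rough analogue of the paper's word clusters.
clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, 1):
    print(f"cluster{i}: {sorted(cluster)}")
```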

The Analysis of Knowledge Structure using Co-word Method in Quality Management Field (동시단어분석을 이용한 품질경영분야 지식구조 분석)

  • Park, Man-Hee
    • Journal of Korean Society for Quality Management
    • /
    • v.44 no.2
    • /
    • pp.389-408
    • /
    • 2016
  • Purpose: This study was designed to analyze changes in the knowledge structure and trends in research topics in the quality management field. Methods: The network structure and knowledge structure of the words were visualized in map form using co-word analysis, cluster analysis, and strategic diagrams. Results: The findings are as follows. First, the word network derived from the co-occurrence matrix had 106 nodes and 5,314 links, with a density of 0.95. The average betweenness centrality of the word network was 2.37, and the average closeness centrality and average eigenvector centrality were both 0.01. Second, by applying an optimal criterion for deciding the number of clusters and the K-means algorithm to the word co-occurrence matrix, the 106 words were grouped into seven clusters: standard & efficiency, product design, reliability, control chart, quality model, Six Sigma, and service quality. Conclusion: According to the strategic diagram analysis over time, the traditional quality management topics of reliability, Six Sigma, and control charts, located in the third quadrant, declined in research importance. Topics related to product design and customer satisfaction remained important throughout the analysis periods. The topic of management innovation was in an emerging state, and the scope of research on process models extended to topics involving system performance. Service quality, located in the first quadrant, was identified as the key research topic.
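A small sketch of the co-word pipeline reported above, using a random word-by-document matrix; the network metrics and the K-means step mirror the kinds of quantities reported in the Results, not their actual values.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

# Illustrative word-by-document incidence matrix (rows: keywords, columns:
# papers); the study's data are keywords from quality-management articles.
rng = np.random.default_rng(0)
X = (rng.random((10, 40)) > 0.7).astype(int)

# Word co-occurrence matrix and the corresponding undirected network.
C = X @ X.T
np.fill_diagonal(C, 0)
G = nx.from_numpy_array(C)

print("density:", nx.density(G))
print("avg betweenness:", np.mean(list(nx.betweenness_centrality(G).values())))
print("avg closeness:", np.mean(list(nx.closeness_centrality(G).values())))
print("avg eigenvector:", np.mean(list(nx.eigenvector_centrality(G, max_iter=1000).values())))

# K-means on the co-occurrence profiles, standing in for the paper's optimal
# cluster-number criterion plus K-means step (k=7 in the study, k=3 here).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(C)
print("cluster labels:", labels)
```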

Text Categorization Using TextRank Algorithm (TextRank 알고리즘을 이용한 문서 범주화)

  • Bae, Won-Sik;Cha, Jeong-Won
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.16 no.1
    • /
    • pp.110-114
    • /
    • 2010
  • We describe a new method for text categorization using the TextRank algorithm. Text categorization is the problem of assigning one or more pre-defined categories to a text document. TextRank is a graph-based ranking algorithm. If we treat each word as a vertex and the co-occurrence of two adjacent words as an edge, we can build a graph from a document. We then find important words in the graph using TextRank and form features consisting of pairs of an important word and a word adjacent to it. We use several classifiers: SVM, naïve Bayes, maximum entropy, and k-NN. We use the non-cross-posted version of the 20 Newsgroups data set. As a result, performance improved for all classifiers, which shows the potential of the TextRank algorithm for text categorization.
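A compact sketch of the feature-construction step described above: an adjacent-word co-occurrence graph, PageRank as the core of TextRank, and (important word, neighbour) pairs as features. The downstream classifiers (SVM, naïve Bayes, maximum entropy, k-NN) are omitted.

```python
import networkx as nx

def textrank_features(tokens, top_k=5):
    """Build a word graph from adjacent-word co-occurrence, rank words with
    PageRank (the core of TextRank), and return (important word, neighbour)
    pairs as categorization features."""
    G = nx.Graph()
    for w1, w2 in zip(tokens, tokens[1:]):
        if w1 != w2:
            G.add_edge(w1, w2)
    scores = nx.pagerank(G)
    important = sorted(scores, key=scores.get, reverse=True)[:top_k]
    features = set()
    for word in important:
        for neighbour in G.neighbors(word):
            features.add((word, neighbour))
    return features

tokens = "the textrank algorithm ranks words in a word graph built from adjacent words".split()
print(textrank_features(tokens))
```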

Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2633-2648
    • /
    • 2015
  • The problems of synonymy and ambiguity of visual words always exist in object categorization methods based on the conventional bag of visual words (BoVW) model. In addition, noisy visual words, so-called "visual stop-words", degrade the semantic resolution of the visual dictionary. In view of this, a novel bag of visual words method based on PLSA and a chi-square model is proposed for object categorization. First, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probabilities of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Second, KL divergence is adopted to measure the semantic distance between visual words, yielding semantically related homoionyms. An adaptive soft-assignment strategy is then combined to realize soft mapping between SIFT features and these homoionyms. Finally, the chi-square model is introduced to eliminate the "visual stop-words" and reconstruct the visual vocabulary histograms, and an SVM (Support Vector Machine) is applied for object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words can be overcome effectively, and that both the discriminative power of the visual semantic resolution and the object classification performance are substantially improved compared with traditional methods.
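A hedged numerical sketch of two ingredients mentioned above, KL divergence between word-topic distributions and a chi-square style filter for visual stop-words; the PLSA fit, soft assignment, and SVM stage are not reproduced, and the distributions here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative P(topic | visual word) distributions; in the paper these come
# from PLSA fitted on the image collection.
n_words, n_topics = 20, 5
word_topic = rng.dirichlet(np.ones(n_topics), size=n_words)

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence used as a semantic distance between visual words."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Pairwise semantic distances; small distances mark candidate "homoionyms"
# (semantically related visual words) for soft assignment.
D = np.array([[sym_kl(word_topic[i], word_topic[j]) for j in range(n_words)]
              for i in range(n_words)])
masked = D + np.eye(n_words) * 1e9          # ignore self-distances
i, j = divmod(int(np.argmin(masked)), n_words)
print("closest visual-word pair (candidate homoionyms):", i, j)

# Chi-square style test of each word's image-frequency profile against a
# uniform expectation; nearly uniform (low-score) words behave like visual
# stop-words and are dropped.
word_image = rng.poisson(3.0, size=(n_words, 30)).astype(float)
expected = word_image.sum(axis=1, keepdims=True) / word_image.shape[1]
chi2 = ((word_image - expected) ** 2 / (expected + 1e-12)).sum(axis=1)
keep = chi2 > np.percentile(chi2, 20)       # drop the flattest 20% as stop-words
print("kept visual words:", np.flatnonzero(keep))
```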

Social Big Data-based Co-occurrence Analysis of the Main Person's Characteristics and the Issues in the 2016 Rio Olympics Men's Soccer Games (소셜 빅데이터 기반 2016리우올림픽 축구 관련 이슈 및 인물에 대한 연관단어 분석)

  • Park, SungGeon;Lee, Soowon;Hwang, YoungChan
    • Korean Journal of Physical Education (Humanities and Social Sciences)
    • /
    • v.56 no.2
    • /
    • pp.303-320
    • /
    • 2017
  • This paper seeks to better understand the focal issues and persons related to the Rio Olympic soccer matches through social data science and analytics. The study collected data from online news articles and comments related to the South Korean team (KOR) during the Olympic football tournament. To investigate public interest in each match and in target persons, co-occurrence word analysis was performed, and the results were visualized with the NodeXL software. Through this process, the study identified several major issues during the Rio Olympic men's football tournament, including the match between KOR and FIJ, the KOR player Heungmin Son, the commentator Young-Pyo Lee, and the sportscaster Woo-Jong Jo. The study also showed that public opinion toward the South Korean national football team during the Rio Olympics was expressed mostly in positive words, although negative words appeared as well, and that attitudes toward the commentators and casters were positive. In conclusion, public interest in major sporting events can be increased by providing content that includes varied professional analysis, capable and well-prepared domain experts, and commentators and casters who are articulate and have both explanatory power and a sense of presentation. Multidisciplinary research combining sports science, social science, information technology, and media can contribute to a wide range of theoretical studies and practical developments in the sports industry.
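A toy illustration of co-occurrence word counting around a target person, in the spirit of the analysis described above; the comments, target name, and stopword list are invented for the example, and the NodeXL visualization step is omitted.

```python
from collections import Counter
import re

# Invented comment snippets; the study's data are Korean news articles and
# reader comments collected during the Rio Olympic football matches.
comments = [
    "Son scored again what a great goal",
    "great save but the defence was shaky",
    "Son looked tired in the second half",
]

STOPWORDS = frozenset({"the", "a", "in", "was", "but", "what", "and"})

def co_occurring_words(texts, target):
    """Count words that appear in the same comment as `target`."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        if target in words:
            counts.update(w for w in words if w != target and w not in STOPWORDS)
    return counts

print(co_occurring_words(comments, "son").most_common(5))
```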

Analysis of Reference Inquiries in the Field of Social Science in the Collaborative Reference Service Using the Co-Word Technique

  • Cho, Jane
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.49 no.1
    • /
    • pp.129-148
    • /
    • 2015
  • This study identified the nature of the inquiry domain by analyzing requests submitted to the collaborative reference service in the social science field using the co-word technique, and mapped the resulting intellectual structure. First, 748 uncontrolled keywords were extracted from the social science reference inquiries. Second, similarity indices between the words were calculated on the basis of co-occurrence frequency, and both clustering and MDS mapping were performed. Third, to examine how the reference inquiries differed over time, the period was divided into two parts and a comparative analysis was performed. As a result, five clusters were formed, among which "Korea Education" was overwhelmingly the largest at 40.3%. The period-based analysis showed that many questions concerned "education" during the first half, while inquiries focused on "welfare and business information" during the second half.
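A brief sketch of the co-word workflow described above (co-occurrence similarity, clustering, MDS mapping) on a random keyword-by-inquiry matrix; the real study used 748 keywords and its own similarity index.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

# Illustrative keyword-by-inquiry incidence matrix (rows: keywords).
rng = np.random.default_rng(2)
X = (rng.random((12, 60)) > 0.8).astype(float)

# Similarity from co-occurrence counts, then a distance matrix for clustering/MDS.
C = X @ X.T
norms = np.sqrt(np.diag(C)).clip(min=1e-12)
S = C / np.outer(norms, norms)
D = 1.0 - S
np.fill_diagonal(D, 0.0)

# Hierarchical clustering of keywords and a 2-D MDS map of the structure
# (the paper reports five clusters).
labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                  t=5, criterion="maxclust")
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(labels)
print(coords[:3])
```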

The Study on the Software Educational Needs by Applying Text Content Analysis Method: The Case of University A (텍스트 내용분석 방법을 적용한 소프트웨어 교육 요구조사 분석: A대학을 중심으로)

  • Park, Geum-Ju
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.3
    • /
    • pp.65-70
    • /
    • 2019
  • The purpose of this study is to understand college students' needs for the software curriculum, based on a survey of educational satisfaction collected through software lecture evaluations, and to identify improvements by applying the text content analysis method. A text content analysis program was used to calculate word frequencies, select key words, and compute co-occurrence frequencies of key words, and a network analysis program was used to analyze text centrality and the word network. As a result, in the network of positive comments about the software course, 'lecturer' appeared most frequently, followed by 'kindness', 'student', 'explanation', and 'coding'. In the network of negative comments, 'lecture' was mentioned most often, together with 'wish to', 'student', 'lecturer', 'assignment', 'coding', 'difficult', and 'announcement'. Comparing the key words across the combined networks of positive and negative comments reveals differences around 'group activity or task', 'assignment', 'difficulty of the lecture level', and 'perception of the lecturer'. These differences point to the lack of clearly assigned roles in group activities, excessively difficult tasks, low awareness of the difficulty and necessity of software education, and shortcomings in the instructor's teaching methods and feedback. Therefore, it is necessary to examine not only how group activities and assignments in software education are organized, but also how they are carried out, and to monitor the lecture content, teaching methods, and the balance of practice and design thinking.
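A minimal sketch of the frequency, co-occurrence, and centrality computations described above, on invented English stand-ins for the Korean survey answers.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Invented free-text answers from a course evaluation; the study analysed
# Korean survey responses about a software course at university A.
answers_good = [
    "the lecturer explained coding with kindness",
    "clear explanation and helpful examples",
]
answers_bad = [
    "the assignment was difficult and the announcement came late",
    "group activity felt unfair and the coding task was too difficult",
]

def cooccurrence_network(texts):
    """Word frequencies plus a co-occurrence graph (words sharing an answer)."""
    freq, G = Counter(), nx.Graph()
    for text in texts:
        words = set(text.split())
        freq.update(words)
        G.add_edges_from(combinations(sorted(words), 2))
    return freq, G

freq, G = cooccurrence_network(answers_bad)
print(freq.most_common(5))
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])
```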

Conceptual Extraction of Compound Korean Keywords

  • Lee, Samuel Sangkon
    • Journal of Information Processing Systems
    • /
    • v.16 no.2
    • /
    • pp.447-459
    • /
    • 2020
  • After reading a document, people form a concept of the information they consumed and merge multiple words into keywords that represent the material. With that in mind, this study proposes a smarter and more efficient keyword extraction method in which scholarly journals are used to establish production rules based on the concept information of words appearing in a document, so that author-provided keywords can be produced even when they do not appear in the body of the document. The study also presents a new way to determine the importance of each keyword and to exclude irrelevant ones. To validate the extracted keywords, titles and abstracts of journal articles on natural language and auditory language were collected and analyzed. A comparison of author-provided keywords with the output of the developed system showed that the system is highly useful, with an accuracy of up to 96%.
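A deliberately simplified stand-in for the idea of building compound keyword candidates from adjacent words and scoring them; the paper's production rules, concept information, and Korean morphological processing are not reproduced here.

```python
from collections import Counter

def compound_candidates(tokens, max_len=3):
    """Merge runs of adjacent tokens into compound keyword candidates and
    score each by the average frequency of its component words."""
    counts = Counter(tokens)
    candidates = {}
    for n in range(2, max_len + 1):
        for i in range(len(tokens) - n + 1):
            phrase = " ".join(tokens[i:i + n])
            score = sum(counts[w] for w in tokens[i:i + n]) / n
            candidates[phrase] = max(candidates.get(phrase, 0), score)
    return sorted(candidates.items(), key=lambda kv: -kv[1])

tokens = "auditory language processing model for auditory language research".split()
print(compound_candidates(tokens)[:5])
```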

A Dependency Graph-Based Keyphrase Extraction Method Using Anti-patterns

  • Batsuren, Khuyagbaatar;Batbaatar, Erdenebileg;Munkhdalai, Tsendsuren;Li, Meijing;Namsrai, Oyun-Erdene;Ryu, Keun Ho
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1254-1271
    • /
    • 2018
  • Keyphrase extraction is one of the fundamental natural language processing (NLP) tools for improving many text-mining applications such as document summarization and clustering. In this paper, we propose two novel techniques to use on top of state-of-the-art keyphrase extraction methods. The first is anti-patterns, which aim to recognize non-keyphrase candidates. State-of-the-art methods often rely on rich feature sets to identify keyphrases, yet such feature sets cover only some keyphrases, because keyphrases share few common patterns and stylistic features, whereas non-keyphrase candidates share many. The second is to use a dependency graph instead of a word co-occurrence graph: a co-occurrence graph cannot connect two syntactically related words that are placed far apart in a sentence, while a dependency graph can. In experiments, we compared performance under different graph settings (co-occurrence and dependency) and against existing methods. We found that the combination of the dependency graph and anti-patterns outperforms the state-of-the-art methods.
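A short sketch of the two ideas proposed above, ranking words on a dependency graph and filtering candidates with anti-patterns; the dependency edges and the anti-pattern rules are illustrative assumptions (a real parser and the paper's rule set would replace them).

```python
import re
import networkx as nx

# Illustrative pre-parsed dependency edges (head word, dependent word) for one
# sentence; in practice these would come from a dependency parser.
edges = [
    ("extraction", "keyphrase"), ("improves", "extraction"),
    ("improves", "applications"), ("applications", "text-mining"),
    ("applications", "such"), ("such", "as"), ("as", "summarization"),
]

# Anti-patterns: simple rules that recognise non-keyphrase candidates
# (function words, very short tokens); the paper's rule set is richer.
ANTI_PATTERNS = [re.compile(r"^(such|as|and|or|the|a|of)$"), re.compile(r"^.{1,2}$")]

def is_rejected(word):
    return any(p.match(word) for p in ANTI_PATTERNS)

# Rank words on the dependency graph rather than a co-occurrence window graph,
# then drop candidates hit by an anti-pattern.
G = nx.Graph(edges)
scores = nx.pagerank(G)
keyphrases = [w for w in sorted(scores, key=scores.get, reverse=True) if not is_rejected(w)]
print(keyphrases[:3])
```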