• Title/Summary/Keyword: Frequency-based Text Analysis


Sentence Similarity Analysis using Ontology Based on Cosine Similarity (코사인 유사도를 기반의 온톨로지를 이용한 문장유사도 분석)

  • Hwang, Chi-gon; Yoon, Chang-Pyo; Yun, Dai Yeol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.441-443 / 2021
  • Sentence or text similarity is a measure of the degree of resemblance between two sentences. Techniques for measuring text similarity include Jaccard similarity, cosine similarity, Euclidean similarity, and Manhattan similarity. Cosine similarity is currently the most widely used, but because it considers only the occurrence or frequency of words in a sentence, it captures semantic relationships poorly. We therefore try to improve the analysis of sentence similarity by assigning relations between words through an ontology and including semantic similarity when extracting the words that the two sentences have in common.
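A minimal sketch of the frequency-based baseline this abstract contrasts with its ontology approach: bag-of-words cosine similarity and Jaccard overlap between two example sentences. The sentences and the scikit-learn usage are illustrative assumptions, not the paper's code.

```python
# Toy frequency-based similarity: depends only on word occurrence, which is
# the limitation the ontology-based approach above tries to address.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

s1 = "the cat sat on the mat"          # hypothetical example sentences
s2 = "a cat was sitting on the mat"

X = CountVectorizer().fit_transform([s1, s2])   # term-frequency vectors
print("cosine:", cosine_similarity(X[0], X[1])[0, 0])

w1, w2 = set(s1.split()), set(s2.split())
print("jaccard:", len(w1 & w2) / len(w1 | w2))  # shared words / all words
```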


Perceived Characteristics of Grains during the Choseon Dynasty - A Study Applying Text Frequency Analysis Using the Choseonwangjoshilrok Data - (조선왕조실록 텍스트 빈도 분석을 통한 조선시대 곡물에 관한 인식 특성 고찰)

  • Kim, Mi-Hye
    • Journal of the Korean Society of Food Culture / v.38 no.1 / pp.26-37 / 2023
  • This study applied text frequency analysis to the crops recorded in the Choseonwangjoshilrok (the Annals of the Choseon Dynasty) and categorized the results by king. Contemporary perceptions of grains were examined through the staple crop types, using word clouds and semantic network analysis. In total, 101,842 crop-related records appear in the Choseonwangjoshilrok. Of these, 51,337 (50.4%) concerned grains, 50,407 (49.5%) beans, and 98 (0.1%) seeds. Rice was the most frequently recorded grain (37.1%), followed by pii (11.9%), millet (11.3%), barley (4.5%), proso (0.8%), wheat (0.6%), buckwheat (0.1%), and adlay (0.05%). By century, grain records numbered 15,520 in the 15th century (30.2%), 11,201 in the 18th century (21.8%), 9,421 in the 17th century (18.4%), 9,113 in the 16th century (17.8%), and 6,082 in the 19th century (11.8%). Interest in grain among the 27 kings of Choseon was evaluated from the frequency of records: King Sejong (15th century) showed the greatest interest with 13,363 records (13.1%), followed by King Jungjo (18th century; 8,501 records, 8.4%) and King Sungjong (15th century; 7,776 records, 7.6%).
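As a rough illustration of the frequency-tallying step this kind of study relies on, the sketch below counts crop mentions per reign from pre-extracted (reign, crop) records; the records are invented placeholders, not Sillok data.

```python
# Toy frequency analysis over (reign, crop) mention records.
from collections import Counter

records = [("Sejong", "rice"), ("Sejong", "millet"), ("Jungjo", "rice"),
           ("Sungjong", "barley"), ("Sejong", "rice"), ("Jungjo", "wheat")]

by_crop = Counter(crop for _, crop in records)
by_king = Counter(king for king, _ in records)

for crop, n in by_crop.most_common():
    print(f"{crop}: {n} ({n / len(records):.1%})")   # share of all mentions
print("most-recorded reign:", by_king.most_common(1))
```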

A Performance Analysis Based on Hadoop Application's Characteristics in Cloud Computing (클라우드 컴퓨팅에서 Hadoop 애플리케이션 특성에 따른 성능 분석)

  • Keum, Tae-Hoon; Lee, Won-Joo; Jeon, Chang-Ho
    • Journal of the Korea Society of Computer and Information / v.15 no.5 / pp.49-56 / 2010
  • In this paper, we implement a Hadoop-based cluster for cloud computing and evaluate its performance according to application characteristics by executing the RandomTextWriter, WordCount, and Pi applications. RandomTextWriter creates a given amount of random words and stores them in the HDFS (Hadoop Distributed File System). WordCount reads an input file and counts the frequency of each word per block. The Pi application estimates the value of π using the Monte Carlo method. During the simulations, we investigate the effect of the data block size and the number of replications on the execution time of the applications. The results confirm that the execution time of RandomTextWriter is proportional to the number of replications, whereas the execution times of WordCount and Pi are not affected by it. Moreover, the execution time of WordCount is optimal when the block size is 64~256 MB. These results show that the performance of a cloud computing system can be enhanced by a scheduling scheme that considers application characteristics.
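RandomTextWriter, WordCount, and Pi are standard Hadoop example applications. Purely as an analogy (not the benchmark code used in the study), the sketch below shows a Hadoop-Streaming-style WordCount in Python, with mapper and reducer in one file.

```python
# WordCount in the MapReduce style: map emits (word, 1), reduce sums per word.
import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        for word in line.split():
            yield word, 1

def reducer(pairs):
    # pairs must be sorted by key, as Hadoop's shuffle phase guarantees
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(n for _, n in group)

if __name__ == "__main__":
    pairs = sorted(mapper(sys.stdin))
    for word, total in reducer(pairs):
        print(f"{word}\t{total}")
```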

Can Similarities in Medical thought be Quantified? - Focusing on Donguibogam, Uihagibmun and Gyeongagjeonseo - (의학 사상의 유사성은 계량 분석 될 수 있는가 - 『동의보감』과 『의학입문』, 『경악전서』를 중심으로 -)

  • Oh, Junho
    • Journal of Korean Medical Classics / v.31 no.2 / pp.71-82 / 2018
  • Objectives: The purpose of this study is to compare the similarities among Donguibogam (DO), Uihagibmun (UI), and Gyeongagjeonseo (GY) in order to examine whether the medical thought embedded in these texts can be compared quantitatively. Methods: Under the empirical assumption that medical thought can be reduced to the frequency of major key words within a text, we selected as key words fourteen terms from four categories commonly used to describe physiology and pathology in Korean medicine, and measured and compared their frequencies across the three important Korean medical texts. Results: In a quantitative analysis based on the χ² statistic, the key words were distributed most heterogeneously in DO and most homogeneously in UI. In the similarity comparison by the same method, DO and UI were significantly more similar to each other than to GY. The word-frequency patterns and the similarity of the book contents (CBDF) indicate that DO was influenced by UI, while the differences in standardized residuals and homogeneity indicate that the internal contexts of the two books are constructed differently. Conclusions: These results support the findings of traditional research by experts. We were thus able to confirm that medical thought can be reduced to the frequency of major key words within a text and compared through those frequencies.
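A hedged sketch of the kind of χ² comparison the abstract describes: testing whether two books distribute a set of key words homogeneously and inspecting standardized residuals. The frequency table and the SciPy usage are illustrative, not the study's data or code.

```python
# Chi-square homogeneity test over key-word frequencies in two books.
import numpy as np
from scipy.stats import chi2_contingency

# rows: books (e.g., DO vs. UI); columns: selected key words (toy counts)
observed = np.array([[120, 80, 45, 60],
                     [ 95, 70, 52, 66]])

chi2, p, dof, expected = chi2_contingency(observed)
residuals = (observed - expected) / np.sqrt(expected)  # standardized residuals
print(f"chi2={chi2:.2f}, p={p:.3f}")
print(residuals.round(2))   # which words drive the heterogeneity
```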

Feature-selection algorithm based on genetic algorithms using unstructured data for attack mail identification (공격 메일 식별을 위한 비정형 데이터를 사용한 유전자 알고리즘 기반의 특징선택 알고리즘)

  • Hong, Sung-Sam; Kim, Dong-Wook; Han, Myung-Mook
    • Journal of Internet Computing and Services / v.20 no.1 / pp.1-10 / 2019
  • Because big-data text mining extracts many features from many documents, clustering and classification can suffer from high computational complexity and low reliability of the analysis results. In particular, the term-document matrix obtained through text mining represents term-document features but is a sparse matrix. We designed an advanced genetic algorithm (GA) to select features in text mining for a detection model. Term frequency-inverse document frequency (TF-IDF) is used to reflect document-term relationships in feature extraction, and a predetermined number of features is selected through an iterative process. We also use a sparsity score to improve the performance of the detection model: when a spam-mail data set is highly sparse, the detection model performs poorly and it is difficult to search for an optimal model. By placing s(F) in the numerator of the fitness function, the algorithm favors feature subsets with low sparsity and high TF-IDF scores. We verified the performance by applying the proposed algorithm to text classification and found that it achieves higher speed and accuracy in attack-mail classification.
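A rough sketch, under stated assumptions, of a fitness function that rewards a feature subset's TF-IDF mass while penalizing sparsity. The weighting, the chromosome encoding, and the data are illustrative guesses at the idea, not the paper's s(F) or its GA.

```python
# Toy GA-style fitness: prefer term subsets with high TF-IDF and low sparsity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["win money now", "project meeting notes",
        "cheap money offer now", "weekly project meeting"]
X = TfidfVectorizer().fit_transform(docs)           # documents x terms (sparse)

def fitness(mask):
    cols = np.flatnonzero(mask)                      # selected feature subset F
    if cols.size == 0:
        return 0.0
    sub = X[:, cols]
    tfidf_mass = sub.sum()                           # total TF-IDF weight of F
    sparsity = 1.0 - sub.nnz / (sub.shape[0] * sub.shape[1])
    return tfidf_mass / (1.0 + sparsity)             # high TF-IDF, low sparsity

rng = np.random.default_rng(0)
chromosome = rng.random(X.shape[1]) < 0.5            # one random selection
print(round(fitness(chromosome), 3))
```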

A Study on the General Public's Perceptions of Dental Fear Using Unstructured Big Data

  • Cho, Han-A; Park, Bo-Young
    • Journal of Dental Hygiene Science / v.23 no.4 / pp.255-263 / 2023
  • Background: This study used text mining techniques to determine public perceptions of dental fear, extracted keywords related to dental fear, identified the connections between those keywords, and categorized and visualized perceptions related to dental fear. Methods: Keywords were collected from texts posted on Internet portal sites (NAVER and Google) between January 1, 2000, and December 31, 2022. Four stages of analysis were used to explore the keywords: frequency analysis, term frequency-inverse document frequency (TF-IDF), centrality analysis and co-occurrence analysis, and convergent correlations. Results: Among the top ten keywords by frequency, the most frequent was 'treatment,' followed by 'fear,' 'dental implant,' 'conscious sedation,' 'pain,' 'dental fear,' 'comfort,' 'taking medication,' 'experience,' and 'tooth.' In the TF-IDF analysis, the top three keywords were dental implant, conscious sedation, and dental fear. The co-occurrence analysis, which explores keywords that appear together, showed that 'fear and treatment' and 'treatment and pain' appeared together most frequently. Conclusion: Texts collected as unstructured big data were analyzed to identify general perceptions related to dental fear, and this study is valuable as source data for understanding public perceptions of dental fear by grouping associated keywords. The results will help in understanding dental fear and can inform future work on factors affecting oral health.
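A toy version of the co-occurrence step mentioned above: counting keyword pairs that appear in the same post. The posts and keywords are stand-ins, not the collected portal data.

```python
# Count how often two keywords appear together in the same post.
from collections import Counter
from itertools import combinations

post_keywords = [
    {"fear", "treatment", "pain"},
    {"treatment", "conscious sedation", "comfort"},
    {"fear", "treatment", "dental implant"},
]

pair_counts = Counter()
for keywords in post_keywords:
    pair_counts.update(combinations(sorted(keywords), 2))

print(pair_counts.most_common(3))   # ('fear', 'treatment') appears twice here
```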

The Study on the Software Educational Needs by Applying Text Content Analysis Method: The Case of the A University (텍스트 내용분석 방법을 적용한 소프트웨어 교육 요구조사 분석: A대학을 중심으로)

  • Park, Geum-Ju
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.3 / pp.65-70 / 2019
  • The purpose of this study is to understand college students' needs regarding the software curriculum, based on the open-ended responses in the educational-satisfaction survey of the software lecture evaluation, and to identify improvements by applying a text content analysis method. A text content analysis program was used to calculate word-occurrence frequencies, select key words, and compute co-occurrence frequencies of key words, and a network analysis program was used for text centrality and network analysis. In the network of strengths of the software courses, 'lecturer' was mentioned most frequently, followed by 'kindness', 'student', 'explanation', and 'coding'. In the network of weaknesses, 'lecture', 'wish to', 'student', 'lecturer', 'assignment', 'coding', 'difficult', and 'announcement' were most often mentioned together. Comparing the key words of the two networks reveals differences around 'group activity or task', 'assignment', 'difficulty of the lecture level', and 'perceptions of the lecturer'. These differences point to a lack of clearly assigned roles in group activities, difficult and excessive assignments, low awareness of the difficulty and necessity of software education, and shortcomings in the instructor's teaching methods and feedback. It is therefore necessary to examine not only how group activities and assignments in software education are organized, but also how they are carried out and monitored, together with the lecture content, teaching methods, and the balance between practice and design thinking.
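A small sketch of turning key-word co-occurrence counts into a network and reading off central terms, in the spirit of the network analysis described above. The edges are invented for illustration, and networkx stands in for whichever network analysis program the study used.

```python
# Build a weighted keyword network from co-occurrence counts and rank nodes.
import networkx as nx

cooccurrence = [("lecturer", "kindness", 12), ("lecturer", "explanation", 9),
                ("lecture", "assignment", 7), ("assignment", "difficult", 6),
                ("lecture", "coding", 5), ("lecturer", "student", 4)]

G = nx.Graph()
G.add_weighted_edges_from(cooccurrence)

centrality = nx.degree_centrality(G)                 # simple centrality measure
for word, c in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{word}: {c:.2f}")
```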

Analysis of Symptoms-Herbs Relationships in Shanghanlun Using Text Mining Approach (텍스트마이닝 기법을 이용한 『상한론』 내의 증상-본초 조합의 탐색적 분석)

  • Jang, Dongyeop; Ha, Yoonsu; Lee, Choong-Yeol; Kim, Chang-Eop
    • Journal of Physiology & Pathology in Korean Medicine / v.34 no.4 / pp.159-169 / 2020
  • Shanghanlun (Treatise on Cold Damage Diseases) is the oldest document among the clinical records of Traditional Asian Medicine (TAM), on which TAM theories about symptom-herb relationships are based. In this study, we aim to quantitatively explore the relationships between symptoms and herbs in Shanghanlun. The text of Shanghanlun was converted into structured data. Using the structured data, term frequency-inverse document frequency (TF-IDF) scores of symptoms and herbs were calculated for each chapter to derive the major symptoms and herbs per chapter. To understand the structure of the entire document, principal component analysis (PCA) was performed on the six-dimensional chapter space. Bipartite network analysis was conducted, focusing on Jaccard scores between symptoms and herbs and on the eigenvector centralities of nodes. The TF-IDF scores showed the characteristics of each chapter through its major symptoms and herbs. The principal components drawn by PCA suggested the overall structure of Shanghanlun. The network analysis revealed a 'multi-herb, multi-symptom' relationship: common symptoms and herbs had high eigenvector centralities, while specific symptoms and herbs had low centralities. For each herb, the symptoms it is expected to treat were derived. Using measurable metrics, we conducted a computational study of the patterns in Shanghanlun. Quantitative research on TAM theories will contribute to improving their clarity.
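A minimal sketch, with invented placeholder data, of the bipartite symptom-herb analysis: Jaccard scores from co-occurrence across passages serve as edge weights, and eigenvector centrality then separates common from specific nodes. This is an illustration of the technique, not the paper's data or code.

```python
# Bipartite symptom-herb network weighted by Jaccard co-occurrence scores.
import networkx as nx

# passage -> (symptoms, herbs) observed together; toy data, not Shanghanlun text
passages = [({"fever", "sweating"}, {"gui zhi"}),
            ({"fever", "thirst"},   {"shi gao", "zhi mu"}),
            ({"sweating"},          {"gui zhi", "shao yao"})]

def jaccard(symptom, herb):
    with_s = {i for i, (s, _) in enumerate(passages) if symptom in s}
    with_h = {i for i, (_, h) in enumerate(passages) if herb in h}
    union = with_s | with_h
    return len(with_s & with_h) / len(union) if union else 0.0

G = nx.Graph()
for s in {"fever", "sweating", "thirst"}:
    for h in {"gui zhi", "shi gao", "zhi mu", "shao yao"}:
        w = jaccard(s, h)
        if w > 0:
            G.add_edge(s, h, weight=w)

centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
print({node: round(c, 2) for node, c in centrality.items()})
```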

Analysis of Information Education Related Theses Using R Program (R을 활용한 정보교육관련 논문 분석)

  • Park, SunJu
    • Journal of The Korean Association of Information Education / v.21 no.1 / pp.57-66 / 2017
  • Recently, academic interest in big data analysis and social networks has risen sharply. This social-network-based research trend spans many fields: social networks are actively used as a research topic in the social sciences as well as in the natural sciences. Accordingly, this paper performs text analysis and subsequent social network analysis on Master's and doctoral dissertations related to information education. The results indicate that certain words maintained a high frequency throughout the entire period, while others fluctuated across periods. The high-frequency words also had higher betweenness centrality, and each period shows a distinctive research flow. The subjects of the dissertations thus shifted in step with developments in IT and with changes in the information curriculum of elementary, middle, and high schools. Research on topics whose frequency increased in period 4, such as smart, mobile, smartphone, SNS, application, storytelling, multicultural, and STEAM, is expected to continue, and topics such as robots, programming, coding, algorithms, creativity, interaction, and privacy will also be studied steadily.
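As a small illustration of the relationship the abstract notes between word frequency and betweenness centrality, the sketch below builds a co-occurrence network from per-dissertation keyword lists. The keywords are illustrative, and Python's networkx stands in for the R workflow used in the study.

```python
# Compare term frequency with betweenness centrality in a keyword network.
from collections import Counter
from itertools import combinations
import networkx as nx

thesis_keywords = [["smart", "mobile", "application"],
                   ["programming", "coding", "algorithm"],
                   ["smart", "SNS", "mobile"],
                   ["STEAM", "creativity", "programming"],
                   ["smart", "application", "coding"]]

freq = Counter(k for doc in thesis_keywords for k in doc)

G = nx.Graph()
for doc in thesis_keywords:
    G.add_edges_from(combinations(doc, 2))   # words co-occurring in one thesis

betweenness = nx.betweenness_centrality(G)
for word in sorted(betweenness, key=betweenness.get, reverse=True):
    print(f"{word}: freq={freq[word]}, betweenness={betweenness[word]:.2f}")
```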

The syllable recovery rule-based system and the application of a morphological analysis method for the post-processing of continuous speech recognition (연속음성인식 후처리를 위한 음절 복원 rule-based 시스템과 형태소분석기법의 적용)

  • 박미성;김미진;김계성;최재혁;이상조
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.3 / pp.47-56 / 1999
  • Various phonological alternations occur when Korean is pronounced continuously, and these alternations are one of the major reasons that make Korean speech recognition difficult. This paper presents a rule-based system that converts a speech-recognized character string into a text-based character string. The recovery results are morphologically analyzed, and only correct text strings are generated. Recovery is executed according to four kinds of rules: a syllable-boundary final-consonant/initial-consonant recovery rule, a vowel-process recovery rule, a last-syllable final-consonant recovery rule, and a monosyllable process rule. We use x-clustering information for efficient recovery and postfix-syllable frequency information to restrict the recovery candidates passed to the morphological analyzer. Because the system is rule-based, it does not require a large pronunciation dictionary or phoneme dictionary, and it can reuse the existing text-based morphological analyzer.
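A toy illustration of the rule-then-filter idea described above: generate recovery candidates from a pronounced string with reverse phonological rules, then keep only the forms accepted downstream (here by a tiny lexicon; in the paper, by the morphological analyzer). The two liaison/palatalization rules and the lexicon are assumptions for illustration, not the paper's rule set.

```python
# Generate text-side candidates from a pronounced syllable string, then filter.
REVERSE_PAIR_RULES = {                 # pronounced pair -> possible text pairs
    ("꼬", "치"): [("꽃", "이")],       # liaison: 꽃이 is pronounced [꼬치]
    ("가", "치"): [("같", "이")],       # palatalization: 같이 -> [가치]
}
LEXICON = {"꽃이", "같이", "가치"}      # forms the downstream analyzer accepts

def candidates(recognized: str):
    forms = {recognized}               # the recognized string is itself a candidate
    for i in range(len(recognized) - 1):
        pair = (recognized[i], recognized[i + 1])
        for a, b in REVERSE_PAIR_RULES.get(pair, []):
            forms.add(recognized[:i] + a + b + recognized[i + 2:])
    return forms

print(sorted(c for c in candidates("가치") if c in LEXICON))  # ['가치', '같이']
```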
