• Title/Summary/Keyword: Text frequency analysis


Sentence Similarity Analysis using Ontology Based on Cosine Similarity (코사인 유사도를 기반의 온톨로지를 이용한 문장유사도 분석)

  • Hwang, Chi-gon;Yoon, Chang-Pyo;Yun, Dai Yeol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.441-443 / 2021
  • Sentence or text similarity is a measure of the degree of similarity between two sentences. Techniques for measuring text similarity include Jaccard similarity, cosine similarity, Euclidean similarity, and Manhattan similarity. Cosine similarity is currently the most widely used, but because it analyzes only the occurrence or frequency of words in a sentence, it captures semantic relationships poorly. We therefore attempt to improve sentence-similarity analysis by using an ontology to assign relations between words and by including semantic similarity when extracting the words that two sentences have in common.
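
A minimal, self-contained sketch of the frequency-based cosine measure the abstract starts from is shown below; the ontology-derived semantic weighting the authors propose is not reproduced and would be layered on top of this baseline.

```python
# Bag-of-words cosine similarity between two sentences (baseline only;
# the paper's ontology-based semantic weighting is not reproduced here).
from collections import Counter
import math

def cosine_similarity(s1: str, s2: str) -> float:
    v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(v1[w] * v2[w] for w in set(v1) | set(v2))
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

print(cosine_similarity("the cat sat on the mat", "the cat lay on the rug"))
```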

Analysis of Information Education Related Theses Using R Program (R을 활용한 정보교육관련 논문 분석)

  • Park, SunJu
    • Journal of The Korean Association of Information Education / v.21 no.1 / pp.57-66 / 2017
  • Lately, academic interest in big data analysis and social networks has risen prominently. Various academic fields are involved in this research trend: social network analysis has been actively used as a research method in the social sciences as well as in the natural sciences. Accordingly, this paper performs text analysis and social network analysis on Master's and Doctoral dissertations. The results indicate that certain words had a high frequency throughout the entire period, while other words had fluctuating frequencies across periods. In detail, the high-frequency words had higher betweenness centrality, and each period appears to have a distinctive research flow. It was thus found that the subjects of the Master's and Doctoral dissertations changed sensitively with the development of IT technology and with changes in the information curricula of elementary, middle, and high schools. It is predicted that research related to smart, mobile, smartphone, SNS, application, storytelling, multicultural, and STEAM topics, whose frequency increased in period 4, will continue, and that the topics of robots, programming, coding, algorithms, creativity, interaction, and privacy will also be studied steadily.
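
The study ran its analysis in R; purely as a rough Python illustration of the betweenness-centrality step, the sketch below (networkx) builds a word co-occurrence network from invented placeholder titles, not the dissertation data.

```python
# Toy illustration of word co-occurrence + betweenness centrality.
# The titles below are placeholders, not data from the paper.
from itertools import combinations
import networkx as nx

titles = [
    "smart education using mobile devices",
    "programming education for elementary students",
    "smart mobile application for education",
]

G = nx.Graph()
for title in titles:
    for w1, w2 in combinations(set(title.split()), 2):
        # Increment edge weight for every title in which two words co-occur.
        w = G.get_edge_data(w1, w2, {"weight": 0})["weight"]
        G.add_edge(w1, w2, weight=w + 1)

centrality = nx.betweenness_centrality(G)
for word, score in sorted(centrality.items(), key=lambda x: -x[1])[:5]:
    print(f"{word}: {score:.3f}")
```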

An Exploratory Analysis of Online Discussion of Library and Information Science Professionals in India using Text Mining

  • Garg, Mohit;Kanjilal, Uma
    • Journal of Information Science Theory and Practice / v.10 no.3 / pp.40-56 / 2022
  • This paper aims to implement a topic modeling technique for extracting the topics of online discussions among library professionals in India. Topic modeling is an established text mining technique popularly used for modeling text data from Twitter, Facebook, Yelp, and other social media platforms. The present study modeled the online discussions of Library and Information Science (LIS) professionals posted on Lis Links. The text of these posts was extracted with a program written in R using the package "rvest." The data was pre-processed to remove blank posts, posts in non-English fonts, punctuation, URLs, emails, etc. Topic modeling with the Latent Dirichlet Allocation algorithm was applied to the pre-processed corpus to identify the topic associated with each post. The frequency of occurrence of words in the text corpus was also calculated. The most frequent words included: library, information, university, librarian, book, professional, science, research, paper, question, answer, and management, showing that LIS professionals actively discussed exams, research, and library operations on the Lis Links forum. The study categorized the online discussions on Lis Links into ten topics, i.e., "LIS Recruitment," "LIS Issues," "Other Discussion," "LIS Education," "LIS Research," "LIS Exams," "General Information related to Library," "LIS Admission," "Library and Professional Activities," and "Information Communication Technology (ICT)." The majority of the posts belonged to "LIS Exams," followed by "Other Discussion" and "General Information related to Library."
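
The LDA step of such a pipeline can be sketched compactly; the following Python example uses scikit-learn rather than the authors' R tooling, on invented placeholder posts.

```python
# Minimal LDA topic-modeling sketch with scikit-learn.
# The forum posts below are invented placeholders, not Lis Links data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "question about library science exam preparation",
    "university librarian recruitment notification",
    "research paper on information management",
    "answer regarding library cataloguing question",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(posts)  # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words of each inferred topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"Topic {i}: {', '.join(top)}")
```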

Exploring the Core Keywords of the Secondary School Home Economics Teacher Selection Test: A Mixed Method of Content and Text Network Analyses (중등학교 가정과교사 임용시험의 핵심 키워드 탐색: 내용 분석과 텍스트 네트워크 분석을 중심으로)

  • Park, Mi Jeong;Han, Ju
    • Human Ecology Research / v.60 no.4 / pp.625-643 / 2022
  • The purpose of this study was to explore the trends and core keywords of the secondary school home economics teacher selection test using content analysis and text network analysis. The sample comprised the texts of the first-stage selection test for secondary school home economics teachers for the 2017-2022 school years. Frequency-of-occurrence counts, word clouds, centrality analysis, and topic modeling were performed using NetMiner 4.4. The key results were as follows. First, content analysis revealed that the number of questions and the scores for each subject (field) have remained constant since 2020, unlike before 2020. By subject, most questions focused on 'theory of home economics education', and among the evaluation content elements, the highest percentage of questions concerned 'home economics teaching·learning methods and practice'. Second, the keyword network of the test for the 2017-2022 school years has an extremely low density. For the 2017-2019 school years, 'learning', 'evaluation', 'instruction', and 'method' appeared as important keywords, and seven topics were extracted. For the 2020-2022 school years, 'evaluation', 'class', 'learning', 'cycle', and 'model' were influential keywords, and five topics were extracted. This study is meaningful in that it attempted a new research method combining content analysis and text network analysis and provides basic data for revising the evaluation areas and evaluation content elements of the secondary school home economics teacher selection test.
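
As an illustration of the network-density idea (the authors used NetMiner 4.4, not the code below), the density of a keyword co-occurrence network can be computed in a few lines of Python with networkx, over hypothetical co-occurrence pairs.

```python
# Density of a keyword co-occurrence network (illustrative only).
import networkx as nx

# Hypothetical co-occurrence pairs; the real study extracted these
# from the 2017-2022 selection-test texts with NetMiner 4.4.
pairs = [("learning", "evaluation"), ("instruction", "method"),
         ("evaluation", "class"), ("learning", "model")]

G = nx.Graph(pairs)
# Density = actual edges / possible edges; values near 0 mean a sparse network.
print(f"nodes={G.number_of_nodes()} edges={G.number_of_edges()} "
      f"density={nx.density(G):.3f}")
```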

Occupational Therapy in Long-Term Care Insurance For the Elderly Using Text Mining (텍스트 마이닝을 활용한 노인장기요양보험에서의 작업치료: 2007-2018년)

  • Cho, Min Seok;Baek, Soon Hyung;Park, Eom-Ji;Park, Soo Hee
    • Journal of Society of Occupational Therapy for the Aged and Dementia / v.12 no.2 / pp.67-74 / 2018
  • Objective: The purpose of this study is to quantitatively analyze the role of occupational therapy in long-term care insurance for the elderly using text mining, one of the big data analysis techniques. Method: Newspaper articles matching "long-term care insurance for the elderly + occupational therapy" were collected for the period from 2007 to 2018. Using Textom, a web-crawling tool, articles were gathered from the news database of Naver, which has a high share among domestic search engines. After collecting the titles and original text of 510 news items, we analyzed article frequency and key words by year. Result: By year, the number of published articles was highest in 2015 and 2017, with 70 articles (13.7%), and in the key-word analysis 'dementia' showed the highest frequency (344). The related top key words were dementia, treatment, hospital, health, service, rehabilitation, facilities, institution, grade, elderly, professional, salary, industrial complex, and people. Conclusion: This study is meaningful in that text mining was used to more objectively confirm, from 11 years of media coverage of long-term care insurance for the elderly, the social needs and the role of the occupational therapist with respect to dementia and rehabilitation as reflected in the related key words. Based on these results, future research should expand the field and period of study and supplement the methodology with various analysis methods by year.
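
The by-year counting the abstract describes is simple to outline; a minimal Python sketch over invented (year, text) records, not the study's 510 Naver articles:

```python
# Sketch: article counts and key-word frequency by year.
# The records below are invented placeholders, not the Naver data.
from collections import Counter

records = [
    ("2015", "dementia rehabilitation service at elderly facility"),
    ("2015", "occupational therapy grade for dementia treatment"),
    ("2017", "long-term care insurance salary and professional staffing"),
]

per_year = Counter(year for year, _ in records)  # article frequency by year
words = Counter(w for _, text in records for w in text.split())

print(per_year.most_common())
print(words.most_common(5))
```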

Text Mining Analysis Technique on ECDIS Accident Report (텍스트 마이닝 기법을 활용한 ECDIS 사고보고서 분석)

  • Lee, Jeong-Seok;Lee, Bo-Kyeong;Cho, Ik-Soon
    • Journal of the Korean Society of Marine Environment & Safety / v.25 no.4 / pp.405-412 / 2019
  • SOLAS requires that ECDIS be installed on ships of 500 gross tonnage or more engaged in international voyages by the first survey occurring after July 1, 2018. Several accidents related to the use of ECDIS have occurred as it was installed as a new major navigation instrument. Twelve incident reports issued by the MAIB, BSU, BEAmer, DMAIB, and DSB were analyzed, and the causes of the accidents were determined to relate to the actions of the navigator and to the ECDIS system. The text was analyzed using the R program to quantitatively examine words related to the causes of the accidents. We used text mining techniques such as Wordcloud, Wordnetwork, and Wordweight to represent the importance of words according to their frequency. Wordcloud uses the N-gram model to express the frequency of words in cloud form. In the uni-gram analysis of the N-gram model, the word "ECDIS" occurred most often, and in the bi-gram analysis the phrase "Safety Contour" was used most frequently. Based on the bi-gram analysis, the causative words were classified into those concerning the officer and those concerning the ECDIS system, and the related words were represented as a Wordnetwork. Finally, the words related to the officer and to the ECDIS system were compiled into word corpora, and Wordweight was applied to analyze the change in corpus frequency by year. Analyzing the trend-line graphs shows that, more recently, the officer corpus has decreased while, conversely, the ECDIS-system corpus is gradually increasing.
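
The bi-gram counting that underlies such word clouds can be outlined briefly; the Python sketch below counts bi-grams in invented placeholder sentences (the study itself worked in R).

```python
# Bi-gram frequency sketch for wordcloud-style analysis.
# The sentences are invented placeholders, not MAIB/BSU report text.
from collections import Counter

sentences = [
    "the officer ignored the safety contour alarm",
    "safety contour settings in the ecdis were wrong",
]

bigrams = Counter()
for s in sentences:
    words = s.split()
    bigrams.update(zip(words, words[1:]))  # consecutive word pairs

for (w1, w2), n in bigrams.most_common(3):
    print(f"{w1} {w2}: {n}")
```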

Development of big data based Skin Care Information System SCIS for skin condition diagnosis and management

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.137-147 / 2022
  • Diagnosis and management of skin condition is a very basic and important function for workers in the beauty and cosmetics industries. Accurate diagnosis and management require understanding the skin condition and needs of customers. In this paper, we developed SCIS, a big data-based skin care information system that supports skin condition diagnosis and management using social media big data. Using the developed system, core information for skin condition diagnosis and management can be analyzed and extracted from text. SCIS consists of a big data collection stage, a text preprocessing stage, an image preprocessing stage, and a text word analysis stage. SCIS collected the big data necessary for skin diagnosis and management and extracted key words and topics from text through simple frequency analysis, relative frequency analysis, co-occurrence analysis, and correlation analysis of key words. In addition, by analyzing the extracted key words and applying various visualizations such as scatter plots, NetworkX, t-SNE, and clustering, the system can be used efficiently in diagnosing and managing skin conditions.
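
As a rough illustration of the t-SNE visualization step (not the authors' SCIS code), word feature vectors can be projected to 2-D with scikit-learn; the vectors below are random placeholders.

```python
# Toy t-SNE projection of word feature vectors to 2-D.
# Vectors are invented; SCIS would derive them from social media text.
import numpy as np
from sklearn.manifold import TSNE

words = ["skin", "moisture", "acne", "cream", "trouble", "care"]
vectors = np.random.RandomState(0).rand(len(words), 8)  # placeholder features

coords = TSNE(n_components=2, perplexity=2, init="random",
              random_state=0).fit_transform(vectors)
for word, (x, y) in zip(words, coords):
    print(f"{word}: ({x:.1f}, {y:.1f})")
```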

A Study on the General Public's Perceptions of Dental Fear Using Unstructured Big Data

  • Han-A Cho;Bo-Young Park
    • Journal of Dental Hygiene Science / v.23 no.4 / pp.255-263 / 2023
  • Background: This study used text mining techniques to determine public perceptions of dental fear, extracted keywords related to dental fear, identified connections between the keywords, and categorized and visualized perceptions related to dental fear. Methods: Keywords were collected from texts posted on Internet portal sites (NAVER and Google) between January 1, 2000, and December 31, 2022. Four stages of analysis were used to explore the keywords: frequency analysis, term frequency-inverse document frequency (TF-IDF), centrality and co-occurrence analysis, and convergent correlations. Results: Among the top ten keywords by frequency, the most frequently used keyword was 'treatment,' followed by 'fear,' 'dental implant,' 'conscious sedation,' 'pain,' 'dental fear,' 'comfort,' 'taking medication,' 'experience,' and 'tooth.' In the TF-IDF analysis, the top three keywords were dental implant, conscious sedation, and dental fear. The co-occurrence analysis, which explores keywords that appear together, showed that 'fear and treatment' and 'treatment and pain' appeared most frequently. Conclusion: Texts collected as unstructured big data were analyzed to identify general perceptions related to dental fear, and this study is valuable as source data for understanding public perceptions of dental fear through grouping of associated keywords. The results will be helpful in understanding dental fear and can inform future work on factors affecting oral health.
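
TF-IDF, one of the four analysis stages listed above, down-weights terms that are common to every document; a minimal scikit-learn sketch on invented placeholder posts:

```python
# Minimal TF-IDF sketch with scikit-learn.
# The posts are invented placeholders, not the NAVER/Google texts.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "fear of dental implant treatment and pain",
    "conscious sedation made treatment comfortable",
    "dental fear after painful tooth treatment",
]

tfidf = TfidfVectorizer()
matrix = tfidf.fit_transform(posts)

# Highest-scoring term in each post.
terms = tfidf.get_feature_names_out()
for i, row in enumerate(matrix.toarray()):
    print(f"post {i}: {terms[row.argmax()]} ({row.max():.2f})")
```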

Region Analysis of Business Card Images Acquired in PDA Using DCT and Information Pixel Density (DCT와 정보 화소 밀도를 이용한 PDA로 획득한 명함 영상에서의 영역 해석)

  • 김종흔;장익훈;김남철
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.8C / pp.1159-1174 / 2004
  • In this paper, we present an efficient algorithm for region analysis of business card images acquired with a PDA, using DCT and information pixel density. The proposed method consists of three parts: region segmentation, information region classification, and text region classification. In the region segmentation, an input business card image is partitioned into 8×8 blocks and the blocks are classified into information and background blocks using the normalized DCT energy in their low-frequency bands. The input image is then segmented into information and background regions by region labeling on the classified blocks. In the information region classification, each information region is classified as a picture region or a text region by using the ratio of the DCT energy of horizontal and vertical edge components to that of the low-frequency band, together with the density of information pixels, i.e., black pixels in the binarized region. In the text region classification, each text region is classified as a large-character or small-character region by using the density of information pixels and the average horizontal and vertical run-lengths of information pixels. Experimental results show that the proposed method yields good performance in region segmentation, information region classification, and text region classification for test images of several types of business cards acquired with a PDA under various surrounding conditions. In addition, the error rates of the proposed region segmentation are about 2.2-10.1% lower than those of conventional region segmentation methods, and the error rate of the proposed information region classification is about 1.7% lower than that of the conventional method.
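
The block-classification idea can be sketched independently of the paper's exact criterion; assuming scipy for the 2-D DCT, and with an arbitrary threshold standing in for the paper's normalized-energy rule:

```python
# Sketch: classify an 8x8 image block by its low-frequency DCT energy.
# The threshold below is arbitrary; the paper derives its own criterion.
import numpy as np
from scipy.fft import dctn

def low_freq_energy(block: np.ndarray) -> float:
    """Energy of the low-frequency DCT band (top-left 4x4, DC excluded)."""
    coeffs = dctn(block, norm="ortho")
    band = coeffs[:4, :4].copy()
    band[0, 0] = 0.0  # drop the DC term
    return float(np.sum(band ** 2))

rng = np.random.default_rng(0)
flat = np.full((8, 8), 128.0)            # background-like block
textured = rng.uniform(0, 255, (8, 8))   # information-like block
for name, blk in [("flat", flat), ("textured", textured)]:
    label = "information" if low_freq_energy(blk) > 1000 else "background"
    print(f"{name}: {low_freq_energy(blk):.0f} -> {label}")
```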

100 Article Paper Text Mining Data Analysis and Visualization in Web Environment (웹 환경에서 100 논문에 대한 텍스트 마이닝, 데이터 분석과 시각화)

  • Li, Xiaomeng;Li, Jiapei;Lee, HyunChang;Shin, SeongYoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.157-158 / 2017
  • This paper analyzes article big data and performs text mining using the Python language, a programming language that is easy to work with. We use Python to build a Web environment in which the results of the analysis can be shown directly in the browser. In this thesis, 100 article papers are taken from Altmetric, which tracks a range of sources to capture scholarly attention. The big data must be collected and analyzed with an effective method; once the results are out, the Python wordcloud package is used to produce an image that directly shows the highest-frequency words.
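
The wordcloud step the abstract mentions takes only a few lines with the Python `wordcloud` package; a minimal sketch on placeholder text, not the 100 Altmetric papers:

```python
# Minimal wordcloud sketch with the Python `wordcloud` package.
# The input text is a placeholder, not the 100 Altmetric papers.
from wordcloud import WordCloud

text = "python data analysis text mining python visualization data"
wc = WordCloud(width=400, height=200, background_color="white").generate(text)
wc.to_file("wordcloud.png")  # most frequent words are drawn largest
```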
