• Title/Summary/Keyword: Frequency of Words


The Research Trends and Keywords Modeling of Shoulder Rehabilitation using the Text-mining Technique (텍스트 마이닝 기법을 활용한 어깨 재활 연구분야 동향과 키워드 모델링)

  • Kim, Jun-hee; Jung, Sung-hoon; Hwang, Ui-jae
    • Journal of the Korean Society of Physical Medicine / v.16 no.2 / pp.91-100 / 2021
  • PURPOSE: This study analyzed the trends and characteristics of shoulder rehabilitation research through keyword analysis and modeled the relationships among keywords using text-mining techniques. METHODS: Abstracts of 10,121 articles registered in MEDLINE on PubMed with 'shoulder' and 'rehabilitation' as keywords were collected using Python. By analyzing word frequency, 10 keywords were selected in order of highest frequency. Word embedding was performed using the word2vec technique to analyze word similarity, and the keywords were grouped and analyzed based on distance (cosine similarity) through the t-SNE technique. RESULTS: The number of studies related to shoulder rehabilitation is increasing year after year, and the keywords most frequently used in these studies are 'patient', 'pain', and 'treatment'. The word2vec results showed which words were highly correlated with 12 keywords from shoulder rehabilitation studies, and through t-SNE the keywords were divided into 5 groups. CONCLUSION: This is the first study to model the keywords, and the relationships among them, that make up the abstracts of 'shoulder' and 'rehabilitation' research in MEDLINE on PubMed using text-mining techniques. Its results should help diversify the topics of future shoulder rehabilitation research.
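
As a rough illustration of the pipeline this abstract describes (word frequency counting, word2vec embedding, t-SNE projection with cosine distance), a minimal Python sketch follows; the toy abstracts, parameters, and query word are illustrative assumptions, not the study's actual data or settings.

```python
from collections import Counter
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

# Toy stand-ins for the 10,121 tokenized PubMed abstracts.
abstracts = [
    ["shoulder", "pain", "patient", "treatment", "exercise"],
    ["rotator", "cuff", "repair", "patient", "rehabilitation"],
    ["shoulder", "rehabilitation", "exercise", "pain", "outcome"],
] * 20  # repeated so word2vec has enough samples to train on

# 1) Select the highest-frequency keywords.
freq = Counter(token for doc in abstracts for token in doc)
top_keywords = [word for word, _ in freq.most_common(10)]

# 2) Train skip-gram word2vec embeddings.
model = Word2Vec(sentences=abstracts, vector_size=50, window=5,
                 min_count=1, sg=1, seed=42)
print(model.wv.most_similar("pain", topn=3))  # words similar to a keyword

# 3) Project keyword vectors to 2-D with t-SNE using cosine distance.
vectors = model.wv[top_keywords]
coords = TSNE(n_components=2, metric="cosine", init="random",
              perplexity=5, random_state=42).fit_transform(vectors)
```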

A Word Embedding used Word Sense and Feature Mirror Model (단어 의미와 자질 거울 모델을 이용한 단어 임베딩)

  • Lee, JuSang; Shin, JoonChoul; Ock, CheolYoung
    • KIISE Transactions on Computing Practices / v.23 no.4 / pp.226-231 / 2017
  • Word representation, an important area of machine-learning-based natural language processing (NLP), is a method of representing a word not as raw text but as a distinguishable symbol (a vector). Existing word embedding methods rely on large corpora to position related words near one another; such corpus-based embedding requires several corpora because performance depends on word occurrence frequency and the growing number of words. In this paper, word embedding is performed using dictionary definitions and semantic relationship information (hypernyms and antonyms). Words are trained using the feature mirror model (FMM), a modified Skip-Gram (Word2Vec). Words with similar senses obtain similar vectors, and it was also possible to distinguish the vectors of antonymous words.
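
The feature mirror model itself is a modification of Skip-Gram that is not reproduced here; as a hedged point of reference, the sketch below trains a plain gensim Skip-Gram on dictionary-style glosses so that each headword is learned from its defining features. The toy glosses are invented for illustration.

```python
from gensim.models import Word2Vec

# Each "sentence" pairs a headword with the tokens of its definition,
# so the headword is trained against its defining features.
glosses = [
    ["king", "male", "monarch", "ruler"],
    ["queen", "female", "monarch", "ruler"],
    ["hot", "high", "temperature"],   # antonym information (hot/cold) would
    ["cold", "low", "temperature"],   # be added as extra relation pairs
]
model = Word2Vec(sentences=glosses, vector_size=50, window=5,
                 min_count=1, sg=1, seed=42)
print(model.wv.similarity("king", "queen"))  # sense-similar words score higher
```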

The Relationship between Lexical Sophistication Features and English Proficiency for Korean College Students using TAALES Program (TAALES 프로그램을 활용하여 한국 대학생이 작성한 에세이에 나타난 어휘의 정교화 특성 비교)

  • Lee, Young-Ju
    • The Journal of the Convergence on Culture Technology / v.7 no.3 / pp.433-438 / 2021
  • This study investigates the relationship between lexical sophistication features and English proficiency for Korean college students. Essays from the ICNALE (International Corpus Network of Asian Learners of English) corpus were analyzed using the TAALES program. A MANOVA was conducted to examine whether there are statistically significant differences in lexical sophistication features across the three proficiency groups. Results showed that the lexical sophistication features were significantly affected by English proficiency level. Essays written by Korean students at different proficiency levels can be differentiated by various lexical sophistication features, including content word frequency, content word familiarity, lexical decision mean reaction time for function words, hypernymy of verbs, word naming response time for function words, and age of acquisition of content words.
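
A minimal sketch of the MANOVA step described above, using statsmodels; the feature columns, group labels, and values are invented stand-ins for the TAALES output, not the study's data.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical TAALES features per essay, plus a proficiency group column.
df = pd.DataFrame({
    "cw_frequency":   [3.1, 2.9, 2.7, 2.8, 2.5, 2.4],  # content word frequency
    "cw_familiarity": [5.9, 5.8, 5.7, 5.6, 5.5, 5.4],  # content word familiarity
    "proficiency":    ["low", "low", "mid", "mid", "high", "high"],
})

# Test whether the feature vector differs across proficiency groups.
maov = MANOVA.from_formula("cw_frequency + cw_familiarity ~ proficiency",
                           data=df)
print(maov.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```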

Understanding the Food Hygiene of Cruise through the Big Data Analytics using the Web Crawling and Text Mining

  • Shuting, Tao; Kang, Byongnam; Kim, Hak-Seon
    • Culinary Science and Hospitality Research / v.24 no.2 / pp.34-43 / 2018
  • The objective of this study was to acquire a general, text-based awareness and recognition of cruise food hygiene through big data analytics. For this purpose, the study collected data by searching the keywords "food hygiene, cruise" across Google web pages and news from October 1st, 2015 to October 1st, 2017 (two years). Data collection was processed with SCTM, a data collecting and processing program, and eventually 899 KB of text, approximately 20,000 words, was collected. For the data analysis, UCINET 6.0, packaged with the visualization tool NetDraw, was utilized. The analysis showed that words such as 'jobs' and 'news' had high frequency, while the centrality results (Freeman's degree centrality and eigenvector centrality) and proximity produced rankings distinct from the frequency ranking. Meanwhile, the CONCOR analysis yielded 4 segments: a "food hygiene group", a "person group", a "location-related group", and a "brand group". This big-data diagnosis of food hygiene in the cruise industry is expected to provide instrumental implications for both academic research and empirical application.
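
The study ran its centrality analysis in UCINET 6.0 with NetDraw; as a rough equivalent, the sketch below computes Freeman's degree centrality and eigenvector centrality on a hypothetical word co-occurrence network using networkx. The edge list is invented for illustration.

```python
import networkx as nx

# Hypothetical co-occurrence edges: (word_a, word_b, count).
edges = [("cruise", "food", 120), ("food", "hygiene", 95),
         ("cruise", "jobs", 80), ("cruise", "news", 75),
         ("hygiene", "inspection", 40)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

degree = nx.degree_centrality(G)                       # Freeman's degree centrality
eigen = nx.eigenvector_centrality(G, weight="weight")  # eigenvector centrality
for word in sorted(eigen, key=eigen.get, reverse=True):
    print(f"{word}: degree={degree[word]:.2f}, eigenvector={eigen[word]:.2f}")
```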

A Study on Ethnic Fashion from 1980 to 2009 -Focus on the Content Analysis of Vogue Magazine- (1980년 이후 에스닉 패션에 관한 연구 -1980년부터 2009년까지의 Vogue지 내용분석을 중심으로-)

  • Eun, Sook; Park, Jae-Ok
    • Journal of the Korean Society of Clothing and Textiles / v.34 no.5 / pp.726-739 / 2010
  • This study investigates and compares the changes in ethnic fashion presented over a 30-year period to understand the diversity of ethnic trends across historical periods. Data were collected from 59 volumes of "Vogue" magazine, the January and July issues of each year from 1980 to 2009. The data used for content analysis consist of 407 words, condensed into three periods by decade (1980-1989, 1990-1999, and 2000-2009). The selected words were classified into five sub-themes according to definitions from previous research: Asian look, European look, American look, African look, and Oceanic look. The results are as follows. First, ethnic fashion was presented most in the 1990s and 1980s and decreased in the 2000s; notably, the Asian look appeared more in the 1990s. Second, ethnic fashion showed a higher frequency in F/W seasons in the 1980s and in S/S seasons in the 2000s, while both seasons showed high frequency in the 1990s. Coexisting sub-themes were presented in 26 of the 59 seasons: the coexistence of the Asian-European look was evident in the 1980s and 2000s, while the coexisting sub-themes were more diverse in the 1990s. Third, the words selected from the sub-themes of ethnic fashion demonstrated differences by decade; in particular, various fabrics and patterns appeared in the 1990s.

Research on Designing Korean Emotional Dictionary using Intelligent Natural Language Crawling System in SNS (SNS대상의 지능형 자연어 수집, 처리 시스템 구현을 통한 한국형 감성사전 구축에 관한 연구)

  • Lee, Jong-Hwa
    • The Journal of Information Systems / v.29 no.3 / pp.237-251 / 2020
  • Purpose This research builds a hierarchical Hangul emotion index by organizing the emotions that SNS users express. In the researcher's preliminary study, Plutchik's (1980) English-based emotion standard was reinterpreted in Korean, and hashtags carrying implicit meaning on SNS were studied. To build a multidimensional emotion dictionary and classify three-dimensional emotions, an emotion seed was selected for each of seven emotion sets, and an emotion word dictionary was constructed by collecting the SNS hashtags derived from each seed. The priority of each Hangul emotion index is also explored. Design/methodology/approach In transforming sentences into a matrix through the vectorization of their constituent words, weights were extracted using TF-IDF (Term Frequency-Inverse Document Frequency), and the dimensionality of the matrix for each emotion set was reduced with the NMF (Non-negative Matrix Factorization) algorithm. The emotional dimension was resolved using the characteristic values of the emotion words, and the cosine distance algorithm was used to measure the similarity of emotion words within each emotion set. Findings Customer needs analysis is a means of reading changes in emotion, and research on Korean emotion words serves those needs. The ranking of emotion words within an emotion set provides a criterion for reading the depth of an emotion. By providing companies with effective information for emotional marketing, the sentiment index developed in this research is expected to open and add value to new business opportunities. Furthermore, if the emotion dictionary is eventually connected to the emotional DNA of a product, it will be possible to define an "emotional DNA": the set of emotions the product should have.
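
A minimal sketch of the pipeline named in the methodology (TF-IDF weighting, NMF dimension reduction, cosine similarity), using scikit-learn; the toy documents are stand-ins for the SNS hashtag data, and the component count is an illustrative assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

# Toy emotion-word documents standing in for collected hashtags.
docs = ["joy happy delight", "joy excited thrilled",
        "sad gloomy tearful", "angry furious upset"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)  # TF-IDF weighted term matrix

# Reduce the matrix to a low-dimensional emotion space with NMF.
W = NMF(n_components=2, random_state=42).fit_transform(X)

# Cosine similarity between emotion-word vectors in the reduced space.
print(cosine_similarity(W))
```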

Exploring Information Ethics Issues based on Text Mining using Big Data from Web of Science (Web of Science 빅데이터를 활용한 텍스트 마이닝 기반의 정보윤리 이슈 탐색)

  • Kim, Han Sung
    • The Journal of Korean Association of Computer Education / v.22 no.3 / pp.67-78 / 2019
  • The purpose of this study is to explore information ethics issues based on academic big data from the Web of Science (WoS) and to provide implications for information ethics education in the informatics subject. To this end, 318 papers related to information ethics published in WoS were text-mined. Specifically, this paper analyzed the frequency of keywords (TF, DF, TF-IDF), information ethics issues via topic modeling, and the yearly frequency of appearance of each issue, using the 'tm' and 'topicmodels' packages of R. The main results are as follows. First, the TF-IDF analysis confirmed that 'digital', 'student', 'software', and 'privacy' were the main keywords. Second, the topic modeling analysis identified 8 issues, such as 'Professional value', 'Cyber-bullying', and 'AI and Social Impact', with 'Professional value' and 'Cyber-bullying' accounting for relatively high proportions. Based on these results, the study discusses the implications for information ethics education in Korea.
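
The study used R's 'tm' and 'topicmodels' packages; the sketch below shows the equivalent topic modeling step in Python with scikit-learn (term counts followed by LDA), on invented stand-in abstracts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for the 318 WoS abstracts.
abstracts = ["privacy software student digital ethics",
             "cyberbullying student school behavior",
             "professional value responsibility software engineer"]

vec = CountVectorizer()
X = vec.fit_transform(abstracts)  # document-term count matrix

# Fit LDA and print the top words of each discovered topic.
lda = LatentDirichletAllocation(n_components=3, random_state=42).fit(X)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {', '.join(top)}")
```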

Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

  • Al-Sabahi, Kamal; Zuping, Zhang; Kang, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.1 / pp.254-276 / 2019
  • Since the amount of information on the internet is growing rapidly, it is not easy for a user to find information relevant to a query. To tackle this issue, researchers are paying much attention to document summarization. The key to any successful document summarizer is a good document representation, and traditional approaches based on word overlap mostly fail to produce one. Word embedding has shown good performance, allowing words to match on a semantic level, but naively concatenating word embeddings makes common words dominant, which in turn diminishes representation quality. In this paper, we employ word embeddings to improve the weighting schemes used to calculate the Latent Semantic Analysis input matrix. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix; they are modified versions of the augment weight and the entropy frequency, combining the strengths of traditional weighting schemes and word embedding. The proposed approach is evaluated on three English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization. Experimental results on the three datasets show that the proposed model achieves performance competitive with the state of the art, leading to the conclusion that it provides a better document representation and, as a result, a better document summary.
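
The paper's two embedding-based weighting schemes are specific to it and not reproduced here; the sketch below shows only the surrounding LSA summarization skeleton, with TF-IDF as a stand-in weighting: build a weighted term-by-sentence matrix, factorize it with SVD, and rank sentences by their weight in the leading latent topic.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer  # stand-in weighting

sentences = ["The cat sat on the mat.",
             "Dogs and cats are common pets.",
             "Stock prices rose sharply today."]

# Weighted term-by-sentence matrix (terms as rows, sentences as columns).
A = TfidfVectorizer().fit_transform(sentences).T.toarray()

# Latent Semantic Analysis via singular value decomposition.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank sentences by their weight in the first latent topic.
scores = np.abs(Vt[0])
summary_idx = scores.argsort()[::-1][:1]
print([sentences[i] for i in summary_idx])
```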

Term Distribution Index and Word2Vec Methods for Systematic Exploring and Understanding of the Rule on Occupational Safety and Health Standards (산업안전보건기준에 관한 규칙의 체계적 탐색과 이해를 위한 단어분포 지표와 Word2Vec 분석 방법)

  • Jae Ho Jeong; Seong Rok Chang; Yongyoon Suh
    • Journal of the Korean Society of Safety / v.38 no.3 / pp.69-76 / 2023
  • The purpose of the Rules on Occupational Safety and Health Standards (hereafter, the safety and health rules) is to regulate the safety and health measures stipulated in the Occupational Safety and Health Act and the specific instructions necessary for their implementation. However, the safety and health rules are extensive and complexly interconnected, making them difficult for users to navigate. To help users access the rules readily, this study analyzed the frequency, distribution, and significance of the terms appearing throughout them. First, a term distribution index was created based on the frequency and distribution of words extracted through text mining; the index captures whether a word appears only in a specific chapter or across all rules. This allows users to effectively distinguish terms to be followed in a specific working environment from terms to be complied with across the overall working environment. Next, words related to the derived terms were visualized through the Word2Vec algorithm and t-SNE, which can help prioritize what needs to be managed first by focusing on key terms without checking the rules in full. Overall, this study helps users explore the safety and health rules by letting them understand the distribution of words and visualize related terms.
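
The abstract does not give the exact formula of the term distribution index; one plausible formulation, sketched below, scores a term by the normalized entropy of its frequency distribution across chapters, so that chapter-specific terms score near 0 and rule-wide terms near 1.

```python
import numpy as np

def distribution_index(counts_per_chapter):
    """Normalized entropy of a term's frequency across chapters (0..1)."""
    counts = np.asarray(counts_per_chapter, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop chapters where the term is absent
    entropy = -(p * np.log(p)).sum()
    return entropy / np.log(len(counts_per_chapter))  # normalize to [0, 1]

print(distribution_index([40, 0, 0, 0]))    # chapter-specific term -> 0.0
print(distribution_index([10, 9, 11, 10]))  # rule-wide term -> close to 1.0
```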

Recommendation System using Associative Web Document Classification by Word Frequency and α-Cut (단어 빈도와 α-cut에 의한 연관 웹문서 분류를 이용한 추천 시스템)

  • Jung, Kyung-Yong; Ha, Won-Shik
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.282-289 / 2008
  • Although there have been technological developments to improve collaborative filtering, they have yet to fully reflect the actual relations among items. In this paper, we propose a recommendation system that uses associative web document classification by word frequency and α-cut to address the shortcomings of collaborative filtering. The proposed method extracts words from web documents through morpheme analysis and accumulates term frequency weights. It generates association rules using the Apriori algorithm, applying the term frequency weight to their confidence, and then calculates the similarity among words using hypergraph partitioning. Lastly, it classifies related web documents using the α-cut and computes similarity using adjusted cosine similarity. The results show that the proposed method significantly outperforms the existing methods.
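
A minimal sketch of the adjusted cosine similarity step (item-item similarity computed after subtracting each user's mean rating); the small rating matrix is an illustrative assumption, not data from the paper.

```python
import numpy as np

# Rows = users, columns = items (e.g., web documents).
R = np.array([[5.0, 3.0, 4.0],
              [4.0, 2.0, 5.0],
              [2.0, 5.0, 1.0]])

# Adjusted cosine: subtract each user's mean rating before comparing items.
centered = R - R.mean(axis=1, keepdims=True)

def adjusted_cosine(i, j):
    a, b = centered[:, i], centered[:, j]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(adjusted_cosine(0, 2))  # similarity between items 0 and 2
```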