• Title/Summary/Keyword: Text frequency analysis


Keyword Analysis of Arboretums and Botanical Gardens Using Social Big Data

  • Shin, Hyun-Tak;Kim, Sang-Jun;Sung, Jung-Won
    • Journal of People, Plants, and Environment
    • /
    • v.23 no.2
    • /
    • pp.233-243
    • /
    • 2020
  • This study collects social big data from the past nine years and describes the patterns of major keywords related to arboretums and botanical gardens, to serve as basic data for establishing operational strategies for future arboretums and botanical gardens. A total of 6,245,278 cases of data were collected: 4,250,583 from blogs (68.1%), 1,843,677 from online cafes (29.5%), and 151,018 from knowledge search engines (2.4%). After refining the data, 1,223,162 valid cases were selected for analysis. Keywords related to arboretums and botanical gardens were derived with the big data program Textom using text mining analysis. As a result, we identified keywords such as 'travel', 'picnic', 'children', 'festival', 'experience', 'Garden of Morning Calm', 'program', 'recreation forest', 'healing', and 'museum'. Keyword analysis showed that keywords such as 'healing', 'tree', 'experience', 'garden', and 'Garden of Morning Calm' received high public interest. We conducted word cloud analysis by extracting high-frequency keywords from the 6,245,278 titles collected from social media. The results showed that arboretums and botanical gardens were perceived as spaces for relaxation and leisure, with keywords such as 'travel', 'picnic', and 'recreation', and that people had high interest in educational aspects, with keywords such as 'experience' and 'field trip'. The demand for rest and leisure space, education, and things to see and enjoy in arboretums and botanical gardens has increased compared with the past. Therefore, differentiation and specialization strategies, such as plant collection strategies, exhibition planning, and programs, are needed when establishing future operation strategies.
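The keyword derivation described above reduces, at its core, to counting term frequencies over a corpus of titles, which is also the statistic a word cloud visualizes. A minimal Python sketch, using an invented mini-corpus in place of the study's Textom pipeline and its 6.2 million titles:

```python
from collections import Counter

# Invented mini-corpus of social media titles (stand-ins for the
# 6,245,278 titles analyzed in the study).
titles = [
    "family picnic at the arboretum",
    "healing forest travel with children",
    "botanical garden experience program",
    "picnic and travel to the Garden of Morning Calm",
]

# Tokenize naively on whitespace and count keyword frequency after
# dropping stopwords -- the same statistic a word cloud visualizes.
stopwords = {"at", "the", "to", "and", "with", "of"}
tokens = [w for t in titles for w in t.lower().split() if w not in stopwords]
freq = Counter(tokens)

print(freq.most_common(3))
```

Scaling this to millions of titles changes only the tokenization and storage; the frequency statistic itself stays the same.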

Research Trends on Doctor's Job Competencies in Korea Using Text Network Analysis (텍스트네트워크 분석을 활용한 국내 의사 직무역량 연구동향 분석)

  • Kim, Young Jon;Lee, Jea Woog;Yune, So Jung
    • Korean Medical Education Review
    • /
    • v.24 no.2
    • /
    • pp.93-102
    • /
    • 2022
  • We use the concept of the "doctor's role" as a guideline for developing medical education programs for medical students, residents, and doctors, so that role should be regularly re-examined against the times and social needs. The objective of the present study was to understand the knowledge structure related to doctors' job competencies in Korea. We analyzed research trends related to doctors' job competencies in Korea Citation Index journals using text network analysis, through an integrative approach focused on identifying social issues. We selected 1,354 research papers related to doctors' job competencies published from 2011 to 2020, and analyzed 2,627 words after data pre-processing with the NetMiner ver. 4.2 program (Cyram Inc., Seongnam, Korea). We conducted keyword centrality analysis, topic modeling, frequency analysis, and linear regression analysis using NetMiner ver. 4.2 (Cyram Inc.) and IBM SPSS ver. 23.0 (IBM Corp., Armonk, NY, USA). Words such as "family," "revision," and "rejection" appeared frequently. Topic modeling extracted five potential topics: "topic 1: Life and death in medical situations," "topic 2: Medical practice under the Medical Act," "topic 3: Medical malpractice and litigation," "topic 4: Medical professionalism," and "topic 5: Competency development education for medical students." Although there were no statistically significant changes in the research trends for each topic over time, the results nonetheless suggest that social changes can affect the demand for doctors' job competencies.
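Text network analysis of the kind performed here in NetMiner links keywords that co-occur in the same document and ranks them by centrality. A minimal sketch of co-occurrence degree centrality, with invented keyword sets standing in for the study's 2,627 pre-processed words:

```python
from collections import defaultdict
from itertools import combinations

# Invented keyword sets per paper (stand-ins for the study's 2,627
# pre-processed words); each set lists keywords found in one abstract.
docs = [
    {"family", "revision", "litigation"},
    {"revision", "professionalism", "competency"},
    {"family", "competency", "education"},
]

# Build a co-occurrence network: two keywords are linked if they
# appear together in at least one document.
edges = set()
for doc in docs:
    edges |= {tuple(sorted(pair)) for pair in combinations(doc, 2)}

# Degree centrality: a node's edge count normalized by (n - 1),
# the same normalization NetworkX's degree_centrality uses.
nodes = {w for edge in edges for w in edge}
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
centrality = {w: degree[w] / (len(nodes) - 1) for w in nodes}
```

Keywords that bridge many documents ("family", "revision", "competency" here) end up with the highest centrality, which is what makes them candidates for the knowledge-structure core.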

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. AI technologies have already shown abilities equal or superior to those of people in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, service, and education, many efforts have been made to identify current technology trends and analyze their development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and technologies and services that utilize them have increased rapidly; this is considered one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify practical trends in AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through online collaboration among many parties. We searched and collected a list of major AI-related projects created on GitHub from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining techniques to topic information, which indicates the characteristics of the collected projects and their technical fields. The analysis showed that fewer than 100 software development projects were created per year until 2013. The number then increased to 229 projects in 2014 and 597 projects in 2015, and the number of AI-related open source projects grew rapidly in 2016 (2,559 OSS projects).
The number of projects initiated in 2017 was 14,213, almost fourfold the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that OSS in this area had been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics. After 2016, however, programming languages other than Python disappeared from the top ten; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which have been applied in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the centrality list, although they had not been at the top from 2009 to 2012, indicating that OSS was being developed to apply AI technology in the medical field. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning.
The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have shown high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.

Importance-Performance Analysis for Korea Mobile Banking Applications: Using Google Playstore Review Data (국내 모바일 뱅킹 애플리케이션에 대한 이용자 중요도-만족도 분석(IPA): 구글 플레이스토어 리뷰 데이터를 활용하여)

  • Sohui, Kim;Moogeon, Kim;Min Ho, Ryu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.6
    • /
    • pp.115-126
    • /
    • 2022
  • The purpose of this study is to conduct an importance-performance analysis (IPA) by applying text mining approaches to user review data for Korean mobile banking applications, and to derive priorities for improvement. User review data collected from the Google Play Store on the mobile banking applications of Korean commercial banks (Kookmin Bank, Shinhan Bank, Woori Bank, Hana Bank), local banks (Gyeongnam Bank, Busan Bank), and Internet banks (Kakao Bank, K-Bank, Toss) were used. LDA topic modeling, frequency analysis, and sentiment analysis were used to derive key attributes and to measure the importance and satisfaction of each attribute. The results show that although 'Authorizing service', 'Improvement of Function', 'Login', 'Speed/Connectivity', 'System/Update', and 'Banking Service' are relatively important attributes when users use mobile banking applications, satisfaction with them does not reach the average level, indicating that improvement is urgent.
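IPA places each attribute on an importance-performance grid and flags high-importance, low-satisfaction attributes as urgent. A small sketch of the quadrant logic with invented attribute names and scores (importance proxied by topic frequency, performance by sentiment):

```python
# Invented attribute scores: importance proxied by topic frequency,
# performance by mean sentiment, both scaled to [0, 1].
attrs = {
    "Login":  (0.80, 0.30),
    "Speed":  (0.70, 0.40),
    "Design": (0.30, 0.80),
    "Events": (0.20, 0.20),
}

imp_mean = sum(i for i, _ in attrs.values()) / len(attrs)
perf_mean = sum(p for _, p in attrs.values()) / len(attrs)

def quadrant(importance, performance):
    """Classic IPA grid, split at the mean of each axis."""
    if importance >= imp_mean and performance < perf_mean:
        return "concentrate here"      # important but underperforming
    if importance >= imp_mean:
        return "keep up the good work"
    if performance >= perf_mean:
        return "possible overkill"
    return "low priority"

labels = {name: quadrant(i, p) for name, (i, p) in attrs.items()}
```

Attributes landing in the "concentrate here" quadrant correspond to the study's finding for login and speed-type attributes: high importance with below-average satisfaction.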

Analysis of External Representations in Matter Units of 7th Grade Science Textbooks Developed Under the 2015 Revised National Curriculum (2015 개정 교육과정에 따른 7학년 과학교과서 물질 영역에 제시된 외적 표상의 분석)

  • Yoon, Heojeong
    • Journal of The Korean Association For Science Education
    • /
    • v.40 no.1
    • /
    • pp.61-75
    • /
    • 2020
  • In this study, the external representations presented in two units, 'Property of Gas' and 'Changes of States of Matter,' of seventh-grade science textbooks developed under the 2015 revised curriculum were analyzed to suggest educational implications. External representations in five science textbooks were analyzed according to six criteria: 'type of representation,' 'interpretation of surface features,' 'relatedness to text,' 'existence and properties of a caption,' 'degree of correlation between representations comprising a multiple one,' and 'function of representation.' The characteristics of typical representations related to each achievement standard of the two units were also analyzed. The results were as follows. Macro representations showed the highest frequency for 'type of representation,' and explicit representations for 'interpretation of surface features.' For 'relatedness to text,' 'completely related and linked' and 'completely related and unlinked' representations showed the highest frequencies, meaning that most representations were properly related to the text. Appropriate captions accompanied most representations. For 'degree of correlation between representations comprising a multiple one,' representations were largely sufficiently linked. Complete representations showed the highest frequency overall for 'function of representation,' although incomplete representations appeared more frequently in the inquiry sections. The typical representations for each achievement standard differed in type, information contained, symbols used, and so on. Educational implications for the use of representations presented in seventh-grade textbooks were discussed.

A Method of Intonation Modeling for Corpus-Based Korean Speech Synthesizer (코퍼스 기반 한국어 합성기의 억양 구현 방안)

  • Kim, Jin-Young;Park, Sang-Eon;Eom, Ki-Wan;Choi, Seung-Ho
    • Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.193-208
    • /
    • 2000
  • This paper describes a multi-step method of intonation modeling for a corpus-based Korean speech synthesizer. We selected 1,833 sentences covering various syntactic structures and built a corresponding speech corpus uttered by a female announcer. We detected the pitch using laryngograph signals, manually marked the prosodic boundaries on the recorded speech, and carried out part-of-speech tagging and syntactic analysis on the text. The detected pitch was separated into three frequency bands of low, mid, and high frequency components, which correspond to the baseline, the word tone, and the syllable tone. We predicted them using the CART method and the Viterbi search algorithm with a word-tone dictionary. Of the collected spoken sentences, 1,500 were used for training and 333 for testing. In the word tone layer, we compared two methods: predicting the word tone corresponding to the mid-frequency components directly, and predicting it by multiplying the baseline by the ratio of the word tone to the baseline. The former method resulted in a mean error of 12.37 Hz and the latter in 12.41 Hz, similar to each other. The syllable tone layer resulted in a mean error rate of less than 8.3% relative to the announcer's mean pitch of 193.56 Hz, so its performance was relatively good.
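The three-band decomposition of a pitch contour can be sketched with moving averages of different window sizes; note that this is only a stand-in for the paper's actual band separation, and the pitch values below are invented:

```python
# Invented pitch contour (Hz); the decomposition below uses moving
# averages of two window sizes as a stand-in for the paper's separation
# of pitch into low/mid/high frequency bands (baseline / word tone /
# syllable tone).
pitch = [180.0, 195.0, 210.0, 200.0, 185.0, 205.0, 220.0, 190.0]

def moving_average(xs, w):
    """Symmetric moving average; windows are clamped at the edges."""
    half = w // 2
    return [
        sum(xs[max(0, i - half):i + half + 1]) / len(xs[max(0, i - half):i + half + 1])
        for i in range(len(xs))
    ]

baseline = moving_average(pitch, 7)                 # slow, phrase-level trend
mid = moving_average(pitch, 3)                      # word-level contour
word_tone = [m - b for m, b in zip(mid, baseline)]  # word tone rides on the baseline
syllable_tone = [p - b - w
                 for p, b, w in zip(pitch, baseline, word_tone)]

# By construction, the three components sum back to the original contour.
reconstructed = [b + w + s for b, w, s in zip(baseline, word_tone, syllable_tone)]
```

Each band can then be predicted separately (as the paper does with CART and a word-tone dictionary) and the predictions summed to rebuild the intonation contour.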


An Analysis of Indications of Meridians in DongUiBoGam Using Data Mining (데이터마이닝을 이용한 동의보감에서 경락의 주치특성 분석)

  • Chae, Younbyoung;Ryu, Yeonhee;Jung, Won-Mo
    • Korean Journal of Acupuncture
    • /
    • v.36 no.4
    • /
    • pp.292-299
    • /
    • 2019
  • Objectives : DongUiBoGam is one of the representative works of medical literature in Korea. We used text mining methods to analyze the characteristics of the indications of each meridian in the second chapter of DongUiBoGam, WaeHyeong, which addresses external body elements, and visualized the relationships between the meridians and the disease sites. Methods : Using the term frequency-inverse document frequency (TF-IDF) method, we quantified the indications of each meridian according to the frequency of occurrence of 14 meridians and 14 disease sites. The spatial patterns of the indications of each meridian were visualized on a human body template according to the TF-IDF values. Using hierarchical clustering, the twelve meridians were clustered into four groups based on the TF-IDF distributions of each meridian. Results : The TF-IDF values of each meridian showed different constellation patterns at different disease sites, and the spatial patterns of the indications of each meridian were similar to the route of the corresponding meridian. Conclusions : The present study identified spatial patterns between meridians and disease sites. These findings suggest that the constellations of the indications of meridians are primarily associated with the lines of the meridian system, and we believe they will further the current understanding of the indications of acupoints and meridians.
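The TF-IDF weighting used in the Methods can be computed directly; a minimal sketch with invented meridian-indication counts standing in for the 14-meridian by 14-disease-site matrix built from WaeHyeong:

```python
import math

# Invented counts: occurrences of disease-site terms in each meridian's
# indication text (a stand-in for the 14x14 matrix built from WaeHyeong).
meridians = {
    "Lung":  {"chest": 4, "arm": 2},
    "Liver": {"abdomen": 3, "chest": 1},
}

def tf_idf(term, doc, docs):
    """Plain TF-IDF: term frequency times log inverse document frequency."""
    tf = docs[doc].get(term, 0) / sum(docs[doc].values())
    df = sum(1 for d in docs.values() if term in d)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

score = tf_idf("arm", "Lung", meridians)
```

A site term like "chest" that appears in every meridian's indications gets an IDF of zero, so only terms that discriminate between meridians receive weight, which is what produces the distinct constellation pattern per meridian.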

A Research on Difference Between Consumer Perception of Slow Fashion and Consumption Behavior of Fast Fashion: Application of Topic Modelling with Big Data

  • YANG, Oh-Suk;WOO, Young-Mok;YANG, Yae-Rim
    • The Journal of Economics, Marketing and Management
    • /
    • v.9 no.1
    • /
    • pp.1-14
    • /
    • 2021
  • Purpose: The article deals with the proposition that consumers' fashion consumption behavior still follows fast fashion, despite their recognizing the importance of slow fashion. Research design, data and methodology: The research model to verify this proposition is topic modelling with big data, including unstructured textual data. We combined 5,506 news articles about fast fashion and slow fashion posted on the Naver news search platform during the 2003-2019 period, derived high-frequency words, and found topics using the LDA model. Based on these, we examined consumers' perception of and consumption behavior toward slow fashion through topic network analysis. Results: (1) The annual article counts show that consumers' interest in slow fashion mainly began in 2005 and increased steadily up to 2019. (2) Term frequency analysis showed that the keywords for slow fashion had the lowest frequency, with consumers' consumption patterns continuing to center on 'brand.' (3) Each topic's weight in the articles showed that 'social value', which includes slow fashion, ranked sixth among the nine topics, with low linkage to other topics. (4) Lastly, 'brand' and 'fashion trend' were key topics, and the topic 'social value' accounted for a low proportion. Conclusion: Slow fashion was not a considerable factor in consumption behavior. Consumption patterns in the fashion sector are still dominated by general consumption patterns centered on brands and fast fashion.

Text-Independent Speaker Identification System Based On Vowel And Incremental Learning Neural Networks

  • Heo, Kwang-Seung;Lee, Dong-Wook;Sim, Kwee-Bo
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.1042-1045
    • /
    • 2003
  • In this paper, we propose a speaker identification system that uses vowels bearing speakers' characteristics. The system is divided into a speech feature extraction part and a speaker identification part. The speech feature extraction part extracts the speaker's features; voiced speech carries characteristics that distinguish speakers. For vowel extraction, formants obtained through frequency analysis of voiced speech are used, and the vowel /a/, which has distinct formants, is extracted from the text. Pitch, formants, intensity, log area ratio, LP coefficients, and cepstral coefficients are candidate features; the cepstral coefficients, which show the best performance in speaker identification among these methods, are used. The speaker identification part distinguishes speakers using a neural network, with 12th-order cepstral coefficients as the learning input data. The network structure is an MLP trained with the BP (backpropagation) algorithm, in which hidden nodes and output nodes are incremented. The nodes in the incremental learning neural network are interconnected via weighted links, and each node in a layer is generally connected to each node in the succeeding layer, with the output nodes providing the network's output. Through vowel extraction and incremental learning, the proposed system uses less learning data, reduces learning time, and improves the identification rate.


Measuring the Confidence of Human Disaster Risk Case based on Text Mining (텍스트마이닝 기반의 인적재난사고사례 신뢰도 측정연구)

  • Lee, Young-Jai;Lee, Sung-Soo
    • The Journal of Information Systems
    • /
    • v.20 no.3
    • /
    • pp.63-79
    • /
    • 2011
  • Deriving the risk level of infrastructure and buildings from past human disaster cases and implementing prevention measures are important activities for disaster prevention. The object of this study is to measure the confidence needed to carry out quantitative analysis of various disaster risk cases through a text mining methodology. By examining the confidence calculation process and method, this study also suggests a basic quantitative framework. The framework for measuring confidence is composed of four stages. First, correlations are described by categorizing basic elements based on a human disaster ontology. Second, a term-document matrix of terms and cases is created, the frequencies of cases and terms are quantified, and correlation values are added for missing values. Third, association rules are created according to the basic elements of human disaster risk cases. Lastly, the confidence value of disaster risk cases is measured through the association rules. This confidence value becomes a key element when deciding the risk level of a new disaster risk, followed by preventive measures. Using a collection of human disaster risk cases related to road infrastructure, this study demonstrates a case in which the four steps of the quantitative framework and process were actually used for verification.
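The confidence measure in the fourth stage is the standard association-rule statistic: confidence(X → Y) = support(X ∪ Y) / support(X). A minimal sketch with invented disaster-case itemsets standing in for the study's road-infrastructure cases:

```python
# Invented human-disaster case records as itemsets of basic elements,
# illustrating the support/confidence measures of the fourth stage.
cases = [
    {"road", "rain", "collision"},
    {"road", "rain", "skid"},
    {"road", "fog", "collision"},
    {"tunnel", "fire"},
]

def support(itemset):
    """Fraction of cases that contain every element of the itemset."""
    return sum(itemset <= case for case in cases) / len(cases)

def confidence(antecedent, consequent):
    """confidence(X -> Y) = support(X union Y) / support(X)."""
    return support(antecedent | consequent) / support(antecedent)

# How often do road accidents in the rain involve a collision?
conf = confidence({"road", "rain"}, {"collision"})
```

A high confidence for a rule such as {road, rain} → {collision} is what would raise the estimated risk level of a new case sharing those basic elements.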