• Title/Summary/Keyword: 텍스트 출현 빈도 (text occurrence frequency)

102 search results

Analysis of Consumer Awareness of Cycling Wear Using Web Mining (웹마이닝을 활용한 사이클웨어 소비자 인식 분석)

  • Kim, Chungjeong;Yi, Eunjou
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.5 / pp.640-649 / 2018
  • This study analyzed consumer awareness of cycling wear using web mining, one of the big data analysis methods. For this, the texts of postings and comments related to cycling wear posted from 2006 to 2017 in the Naver cafe 'People Who Commute by Bicycle' were collected and analyzed using R packages. A total of 15,321 documents were used for data analysis. The keywords of cycling wear were extracted using a Korean morphological analyzer (KoNLP) and converted into a term-document matrix (TDM) and a co-occurrence matrix to calculate keyword frequencies. The most frequent keyword was 'tights', often accompanied by the opinion that wearers feel embarrassed because they are too tight. When purchasing cycling wear, consumers appeared to consider 'price', 'size', and 'brand'. 'Low price' and 'cost-effectiveness' have appeared more frequently since 2016, which indicates that consumers tend to prefer practical products. Moreover, the findings showed that it is necessary to improve not only the design and wearability but also material functionality, such as sweat absorbance and quick drying, and the function of the pad. These results were similar to those of previous questionnaire-based studies. Therefore, real-time web-mining analysis of consumer opinions and requirements is expected to serve as an objective indicator that can be reflected in product development.
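
The term-document and co-occurrence matrices described in this abstract can be sketched in a few lines. This is a minimal illustration, not the study's actual R code: the documents and English tokens below are hypothetical stand-ins for the KoNLP-tokenized Korean postings.

```python
from collections import Counter
from itertools import combinations

# Toy stand-ins for the collected postings; the study tokenized Korean
# text with KoNLP, while plain whitespace tokens are used here.
docs = [
    "tights price size",
    "tights brand price",
    "pad price",
]
tokenized = [d.split() for d in docs]
vocab = sorted({t for doc in tokenized for t in doc})

# Term-document matrix: one row per term, one count per document.
tdm = {t: [doc.count(t) for doc in tokenized] for t in vocab}

# Corpus-wide term frequency.
freq = Counter(t for doc in tokenized for t in doc)

# Co-occurrence matrix: unordered term pairs appearing in the same document.
cooc = Counter()
for doc in tokenized:
    for pair in combinations(sorted(set(doc)), 2):
        cooc[pair] += 1
```

With this toy corpus, 'price' is the most frequent term and co-occurs with 'tights' in two documents, which mirrors how the study ranked keywords and their co-occurrences.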

A Study on the Changes in Perspectives on Unwed Mothers in S.Korea and the Direction of Government Polices: 1995~2020 Social Media Big Data Analysis (한국미혼모에 대한 관점 변화와 정부정책의 방향: 1995년~2020년 소셜미디어 빅데이터 분석)

  • Seo, Donghee;Jun, Boksun
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.305-313 / 2021
  • This study collected and analyzed big data from 1995 to 2020, focusing on the keywords "unwed mother", "single mother", and "single mom", to present appropriate directions for government support policy according to changes in perspectives on unwed mothers. The big data collection platform Textom was used to collect and refine data from the portal search sites Naver and Daum. The refined data were then analyzed with the word frequency analysis, TF-IDF analysis, and N-gram analysis provided by Textom. In addition, network analysis and CONCOR analysis were conducted through the UCINET6 program. As a result, word frequency analysis and TF-IDF analysis yielded similar words, but these differed by year. In the N-gram analysis, the words that appeared were similar, but the frequency and form of word sequences differed considerably. CONCOR analysis showed that different clusters were formed in different years. This study confirms the change in perspectives on unwed mothers through big data analysis and suggests the need for unwed-mother policies that offer diverse options for independent women and that embrace pregnancy, childbirth, and parenting without discrimination within new family forms.
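
The TF-IDF and N-gram counts mentioned here can be sketched without any platform. This is a simplified illustration of the two measures, with hypothetical English tokens in place of the Textom-refined Korean data:

```python
import math
from collections import Counter

# Tokenized stand-ins for the collected posts (hypothetical words).
docs = [
    ["unwed", "mother", "policy", "support"],
    ["single", "mother", "policy"],
    ["single", "mom", "childcare"],
]
N = len(docs)
df = Counter(t for d in docs for t in set(d))  # document frequency

def tf_idf(term, doc):
    # Plain TF-IDF: raw term count scaled by inverse document frequency.
    return doc.count(term) * math.log(N / df[term])

# N-gram (here bigram) counts over adjacent words in each document.
bigrams = Counter(pair for d in docs for pair in zip(d, d[1:]))
```

Terms appearing in every document get an IDF of zero, which is why TF-IDF rankings can differ from raw frequency rankings, as the abstract observes for its year-by-year comparisons.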

The Study on the Software Educational Needs by Applying Text Content Analysis Method: The Case of the A University (텍스트 내용분석 방법을 적용한 소프트웨어 교육 요구조사 분석: A대학을 중심으로)

  • Park, Geum-Ju
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.3 / pp.65-70 / 2019
  • The purpose of this study is to understand college students' needs for the software curriculum, based on surveys of educational satisfaction from software lecture evaluations, and to identify improvements by applying a text content analysis method. The research method used a text content analysis program to calculate word-occurrence frequencies, select key words, and compute the co-occurrence frequencies of key words, and then performed text-centrality and network analysis using a network analysis program. As a result, among the strengths of the software education network, 'lecturer' occurred most frequently, followed by 'kindness', 'student', 'explanation', and 'coding'. In the network analysis of the weaknesses, 'lecture' was mentioned most often, together with 'wish to', 'student', 'lecturer', 'assignment', 'coding', 'difficult', and 'announcement'. Comparing key words across the combined network of strengths and weaknesses reveals differences such as 'group activity or task', 'assignment', 'difficulty of lecture level', and 'opinions about the lecturer'. From these differences, we can identify a lack of clearly assigned roles in group activities, difficult and excessive tasks, awareness of the difficulty and necessity of software education, and shortcomings in the instructor's teaching methods and feedback. Therefore, it is necessary to examine not only how group activities and assignments in software education are organized, but also how they are carried out and monitored with respect to lecture content, teaching methods, and the ratio of practice to design thinking.
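
The co-occurrence network and centrality step described above can be sketched as follows. The keyword sets are hypothetical evaluation responses, and degree centrality (number of distinct co-occurring keywords) stands in for whatever centrality measure the study's network program computed:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Free-text course evaluations reduced to keyword sets (hypothetical).
answers = [
    {"lecturer", "kindness", "explanation"},
    {"lecturer", "coding", "assignment"},
    {"assignment", "difficult", "coding"},
]

# Edge weights: co-occurrence counts of keyword pairs within one response.
edges = Counter()
for kws in answers:
    for pair in combinations(sorted(kws), 2):
        edges[pair] += 1

# Degree centrality: number of distinct co-occurring keywords.
neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)
degree = {k: len(v) for k, v in neighbors.items()}
```

Here 'lecturer' ends up the most central node, echoing how the study found it to be the most frequently mentioned strength.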

Automatic Generation of Concatenate Morphemes for Korean LVCSR (대어휘 연속음성 인식을 위한 결합형태소 자동생성)

  • 박영희;정민화
    • The Journal of the Acoustical Society of Korea / v.21 no.4 / pp.407-414 / 2002
  • In this paper, we present a method that automatically generates concatenate-morpheme-based language models to improve the performance of Korean large vocabulary continuous speech recognition. The focus is on reducing recognition errors for monosyllabic morphemes, which occupy 54% of the training text corpus and are misrecognized more frequently. A knowledge-based method using POS patterns has disadvantages such as the difficulty of writing rules and the production of many low-frequency concatenate morphemes. The proposed method automatically selects morpheme pairs from the training text based on measures such as frequency, mutual information, and unigram log-likelihood. Experiments were performed using a 7M-morpheme text corpus and a 20K-morpheme lexicon. The frequency measure, with a constraint on the number of morphemes used for concatenation, produced the best result, reducing monosyllables from 54% to 30%, bigram perplexity from 117.9 to 97.3, and the morpheme error rate (MER) from 21.3% to 17.6%.
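
One of the selection measures named above, mutual information over adjacent morpheme pairs, can be sketched as follows. The tiny morpheme sequence and the thresholds are hypothetical; the paper worked with a 7M-morpheme corpus and its own cutoffs:

```python
import math
from collections import Counter

# A toy morpheme sequence (hypothetical stand-in for the training corpus).
morphs = "학교 에 가 았 다 집 에 가 았 다 밥 을 먹 었 다".split()
N = len(morphs)
unigram = Counter(morphs)
bigram = Counter(zip(morphs, morphs[1:]))

def pmi(x, y):
    # Pointwise mutual information of an adjacent morpheme pair.
    p_xy = bigram[(x, y)] / (N - 1)
    return math.log2(p_xy / ((unigram[x] / N) * (unigram[y] / N)))

# Candidate concatenate morphemes: pairs that are both frequent enough
# and positively associated.
selected = [p for p, c in bigram.items() if c >= 2 and pmi(*p) > 0]
```

Selected pairs would then be merged into single lexicon entries, shrinking the share of monosyllabic units the recognizer has to handle.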

A Study on the Research Trends in the Area of Geospatial-Information Using Text-mining Technique Focused on National R&D Reports and Theses (텍스트마이닝 기술을 이용한 공간정보 분야의 연구 동향에 관한 고찰 -국가연구개발사업 보고서 및 논문을 중심으로-)

  • Lim, Si Yeong;Yi, Mi Sook;Jin, Gi Ho;Shin, Dong Bin
    • Spatial Information Research / v.22 no.4 / pp.11-20 / 2014
  • This study aims to provide information about research trends in the area of geospatial information using text-mining methods. We collected national R&D reports and papers from the NDSL (National Discovery for Science Leaders) site, preprocessed their keywords, and classified them into separate sectors. We then investigated the appearance rates of keywords and their changes over time in the R&D reports and papers. As a result, we confirmed that research concerning applications is increasing, while research dealing with systems is decreasing. In particular, among the keywords, '3D-GIS', 'sensor', and 'service', except for ITS, are emerging. These findings could be helpful for investigating research items later.
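
The appearance-rate comparison described above reduces to per-year keyword shares. This is a minimal sketch with invented (year, keyword) records standing in for the NDSL metadata:

```python
from collections import Counter, defaultdict

# (year, keyword list) records standing in for report/paper metadata.
records = [
    (2010, ["GIS", "system"]),
    (2010, ["GIS", "sensor"]),
    (2013, ["sensor", "service"]),
    (2013, ["service", "3D-GIS"]),
]

by_year = defaultdict(Counter)
for year, keywords in records:
    by_year[year].update(keywords)

def appearance_rate(year, keyword):
    # Share of all keyword occurrences in that year.
    total = sum(by_year[year].values())
    return by_year[year][keyword] / total
```

Comparing such rates across years is how a keyword like 'service' can be said to be emerging while another declines.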

FolksoViz: A Subsumption-based Folksonomy Visualization Using the Wikipedia (FolksoViz: Wikipedia 본문을 이용한 상하위 관계 기반 폭소노미 시각화 기법)

  • Lee, Kang-Pyo;Kim, Hyun-Woo;Jang, Chung-Su;Kim, Hyoung-Joo
    • Journal of KIISE: Computing Practices and Letters / v.14 no.4 / pp.401-411 / 2008
  • Folksonomy, which is created through collaborative tagging by many users, is one of the driving factors of Web 2.0. Tags are web metadata describing a web document. If we can find the semantic subsumption relationships between tags created through collaborative tagging, users can understand the metadata more intuitively. In this paper, targeting del.icio.us tag data, we propose a method named FolksoViz for deriving subsumption relationships between tags using Wikipedia texts. For this purpose, we propose a statistical model for deriving subsumption relationships based on the frequency of each tag in Wikipedia texts, and a TSD (Tag Sense Disambiguation) method for mapping each tag to its corresponding Wikipedia text. The derived subsumption pairs are visualized effectively on the screen. Experiments show that the proposed algorithm finds correct subsumption pairs with high accuracy.
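
A frequency-based subsumption test of the kind the abstract describes can be sketched with a deliberately simple rule. This is an assumption-laden toy, not the paper's statistical model: the articles, tags, and the 0.1 threshold are all hypothetical.

```python
from collections import Counter

# Hypothetical article texts keyed by tag, standing in for Wikipedia pages.
articles = {
    "programming": "programming language code software programming",
    "python": "python programming language python code",
}
freq = {tag: Counter(text.split()) for tag, text in articles.items()}

def subsumes(parent, child, threshold=0.1):
    # Simplified rule: parent subsumes child if the parent tag appears
    # often enough in the child's article text.
    counts = freq[child]
    return counts[parent] / sum(counts.values()) >= threshold
```

Under this rule 'programming' subsumes 'python' but not vice versa, which is the kind of directed pair the visualization would draw.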

A Study on an Automatic Summarization System Using Verb-Based Sentence Patterns (술어기반 문형정보를 이용한 자동요약시스템에 관한 연구)

  • 최인숙;정영미
    • Journal of the Korean Society for Information Management / v.18 no.4 / pp.37-55 / 2001
  • The purpose of this study is to present a text summarization system using a knowledge base containing information about verbs and their arguments that is statistically obtained from a subject domain. The system consists of two modules: the training module and the summarization module. The training module extracts cue verbs and their basic sentence patterns by counting the frequencies of verbs and case markers, respectively; the summarization module substantiates basic sentence patterns and generates summaries. Basic sentence patterns are substantiated by applying substantiation rules to the syntactic structure of sentences. A summary is then produced by connecting the simple sentences generated through the substantiation of basic sentence patterns. Articles on 'robbery' in daily newspapers were selected as a test collection. The system generates natural summaries without losing any essential information by combining both cue verbs and essential arguments. In addition, the use of statistical techniques makes it possible to apply the system to other subject domains through its learning capability.
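
The training module's counting step can be sketched over part-of-speech-tagged morphemes. The tagged tokens and the frequency cutoff below are hypothetical; the paper's actual tagset and thresholds are not given in this abstract:

```python
from collections import Counter

# Hypothetical (morpheme, POS) pairs; 'N' nouns, 'J' case markers, 'V' verbs.
tagged = [
    ("도둑", "N"), ("이", "J"), ("침입", "V"),
    ("범인", "N"), ("이", "J"), ("도주", "V"),
    ("경찰", "N"), ("이", "J"), ("검거", "V"),
    ("범인", "N"), ("을", "J"), ("검거", "V"),
]

verbs = Counter(m for m, t in tagged if t == "V")
markers = Counter(m for m, t in tagged if t == "J")

# Cue verbs: verbs frequent enough in the domain corpus.
cue_verbs = [v for v, c in verbs.items() if c >= 2]
```

The case-marker counts would then feed the basic sentence patterns associated with each cue verb.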


Analyzing Architectural History Terminologies by Text Mining and Association Analysis (텍스트 마이닝과 연관 관계 분석을 이용한 건축역사 용어 분석)

  • Kim, Min-Jeong;Kim, Chul-Joo
    • Journal of Digital Convergence / v.15 no.1 / pp.443-452 / 2017
  • Architectural history traces changes in architecture through various traditions, regions, overarching stylistic trends, and dates. This study identified terminologies related by proximity and frequency in the architectural history field using text mining and association analysis. We explored terminologies by investigating articles published in the "Journal of Architectural History", the sole journal dedicated to architectural history studies. First, frequently appearing key terminologies were extracted from the titles, keywords, and abstracts of the papers. Then, we analyzed typical and specific key terminologies that appear frequently, in whole or in part, depending on the research area. Finally, association analysis was used to find frequent patterns among the key terminologies. This research can serve as fundamental data for understanding issues and trends in the field of architectural history.
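
The association analysis mentioned above rests on two standard measures, support and confidence. This is a minimal sketch with invented keyword sets standing in for the journal's paper metadata:

```python
from collections import Counter
from itertools import combinations

# Keyword sets per paper (hypothetical architectural terminologies).
papers = [
    {"temple", "timber", "roof"},
    {"temple", "roof"},
    {"palace", "timber"},
]
n = len(papers)

pair_count = Counter()
for kws in papers:
    for pair in combinations(sorted(kws), 2):
        pair_count[pair] += 1

def support(a, b):
    # Fraction of all papers mentioning both terms.
    return pair_count[tuple(sorted((a, b)))] / n

def confidence(a, b):
    # Of the papers mentioning a, the fraction that also mention b.
    base = sum(1 for kws in papers if a in kws)
    return pair_count[tuple(sorted((a, b)))] / base
```

Pairs with high support and confidence are the "frequent patterns" an association analysis reports.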

WV-BTM: A Technique on Improving Accuracy of Topic Model for Short Texts in SNS (WV-BTM: SNS 단문의 주제 분석을 위한 토픽 모델 정확도 개선 기법)

  • Song, Ae-Rin;Park, Young-Ho
    • Journal of Digital Contents Society / v.19 no.1 / pp.51-58 / 2018
  • As the number of SNS users and the amount of SNS data have explosively increased, research based on SNS big data has become active. In social mining, Latent Dirichlet Allocation (LDA), a typical topic-model technique, is used to identify the similarity of texts in unclassified large-volume SNS big data and to extract trends from them. However, LDA has the limitation that it is difficult to infer high-level topics because of the semantic sparsity caused by infrequent word occurrence in short-sentence data. The BTM study improved on this limitation of LDA by modeling combinations of two words. However, BTM also has a limitation: because it is influenced more by the high-frequency word in each combination, it cannot calculate weights that reflect each word's relation to the topic. In this paper, we propose a technique that improves the accuracy of the existing BTM by reflecting the semantic relation between words.
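
The biterm idea behind BTM, and the word-vector weighting this paper adds to it, can be sketched as follows. The short texts and the 2-d word vectors are hypothetical toys; real WV-BTM would use trained embeddings and feed the weights into topic inference:

```python
import math
from collections import Counter
from itertools import combinations

# Biterm extraction: every unordered word pair within one short text.
short_texts = [["coffee", "latte"], ["coffee", "rain", "latte"]]
biterms = Counter()
for words in short_texts:
    for pair in combinations(sorted(set(words)), 2):
        biterms[pair] += 1

# Toy 2-d word vectors (hypothetical stand-ins for trained embeddings).
vec = {"coffee": (1.0, 0.2), "latte": (0.9, 0.3), "rain": (0.1, 1.0)}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(vec[a], vec[b]))
    na = math.sqrt(sum(x * x for x in vec[a]))
    nb = math.sqrt(sum(x * x for x in vec[b]))
    return dot / (na * nb)

# Weight each biterm by the semantic similarity of its two words, so a
# semantically coherent pair outweighs a merely frequent one.
weighted = {bt: c * cosine(*bt) for bt, c in biterms.items()}
```

The semantically related pair ('coffee', 'latte') ends up with a much larger weight than ('coffee', 'rain'), which is the corrective the paper argues plain BTM lacks.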

An Exploratory Study of VR Technology using Patents and News Articles (특허와 뉴스 기사를 이용한 가상현실 기술에 관한 탐색적 연구)

  • Kim, Sungbum
    • Journal of Digital Convergence / v.16 no.11 / pp.185-199 / 2018
  • The purpose of this study is to derive the core technologies of VR using patent analysis and to explore the direction of social and public interest in VR using news analysis. In Study 1, we derived keywords using word frequencies in patent texts and compared them by company, year, and technical classification. NetMiner, a network analysis program, was used to analyze the IPC codes of the patents. In Study 2, we analyzed news articles using the T-LAB program. TF-IDF was used as the keyword selection method, and chi-square and association-index algorithms were used to extract the words most relevant to VR. Through this study, we confirmed that VR is a convergence technology spanning optics, head-mounted displays (HMDs), data analysis, and electric and electronic technology, and found that optical technology is central among the technologies currently being developed. In addition, through the news articles, we found that society and the public are interested in the formation and growth of VR suppliers and markets, and that VR should be developed on the basis of user experience.
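
The chi-square word-relevance step mentioned above is a 2x2 association test between a word's presence and the VR category. This is a generic sketch with hypothetical counts, not the T-LAB implementation:

```python
def chi_square(a, b, c, d):
    # 2x2 chi-square statistic for word-vs-category association:
    #   a = VR articles containing the word,    b = other articles containing it,
    #   c = VR articles without the word,       d = other articles without it.
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

A word distributed independently of the category scores zero, while one concentrated in VR articles (e.g. hypothetical counts 30, 10, 5, 55) scores high and would be kept as VR-relevant.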