• Title/Summary/Keyword: word frequency matrix

Search Result 21

Word Sense Similarity Clustering Based on Vector Space Model and HAL (벡터 공간 모델과 HAL에 기초한 단어 의미 유사성 군집)

  • Kim, Dong-Sung
    • Korean Journal of Cognitive Science
    • /
    • v.23 no.3
    • /
    • pp.295-322
    • /
    • 2012
  • In this paper, we cluster similar word senses by applying the vector space model and HAL (Hyperspace Analogue to Language). HAL measures correlation among words through a fixed-size context window (Lund and Burgess 1996). Similarity between a word pair is measured with cosine similarity in the vector space model, which reduces the distortion between high-frequency and low-frequency words (Salton et al. 1975, Widdows 2004). We use PCA (Principal Component Analysis) and SVD (Singular Value Decomposition) to reduce the high dimensionality of the similarity matrix. For sense similarity clustering, we adopt both supervised and unsupervised learning methods. For the unsupervised method, we use clustering; for the supervised methods, we use SVM (Support Vector Machine), the Naive Bayes classifier, and the Maximum Entropy method.
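
The HAL construction and cosine comparison described in this abstract can be sketched in a few lines. This is a minimal pure-Python illustration, not the authors' implementation: the toy corpus, the window size, and the symmetric treatment of left/right contexts are assumptions, and the PCA/SVD reduction step is omitted.

```python
import math
from collections import defaultdict

def hal_vectors(tokens, window=2):
    """HAL-style co-occurrence vectors: for each word, count neighbours
    within a fixed window, weighted by proximity (closer = heavier)."""
    vec = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), i):
            weight = window - (i - j) + 1  # distance 1 -> window, distance window -> 1
            vec[w][tokens[j]] += weight
            vec[tokens[j]][w] += weight    # simplification: symmetric contexts
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = "the cat sat on the mat the dog sat on the rug".split()
vecs = hal_vectors(corpus)
print(round(cosine(vecs["cat"], vecs["dog"]), 3))
```

Because "cat" and "dog" occur in near-identical contexts in the toy corpus, their similarity comes out high; in the paper the resulting similarity matrix would then be reduced with PCA/SVD before clustering.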

Adoption Process of Chinese Students to Korean Food Based on Residence periods (중국유학생의 거주기간에 따른 한국음식 확산)

  • Han, Kyung Soo;Min, Ji Eun
    • Journal of the Korean Society of Food Culture
    • /
    • v.34 no.4
    • /
    • pp.411-423
    • /
    • 2019
  • This study conducted qualitative research, through in-depth interviews, on the diffusion process of Korean food according to the length of residence of Chinese students in Korea. The qualitative analysis software NVivo 12.0 was used, and the analysis was performed through word frequency, word cloud, and coding-matrix analyses. The taste images of Korean food were 'spicy', 'sweet', and 'light' for both short-term and long-term Chinese residents. The long-term residents expressed dissatisfaction with institutional food and pointed out the importance of cooking for quick adaptation to life in Korea. They also regarded Korean food as 'high-fiber food', influenced by its 'less-oil' cooking method, which differs from that of Chinese food. The length of residence, the main variable of this study, was one of the factors with positive effects on the diffusion of Korean food and on acculturation. The results are difficult to generalize because this study used convenience and snowball sampling.

Comparative analysis of information attributes in chemical accident response systems through Unstructured Data: Spotlighting on the OECD Guidelines for Chemical Accident Prevention, Preparedness, and Response (비정형 데이터를 이용한 화학물질 사고 대응 체계 정보속성 비교 분석 : 화학사고 예방, 대비 및 대응을 위한 OECD 지침서를 중심으로)

  • YongJin Kim;Chunghyun Do
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.91-110
    • /
    • 2023
  • The importance of manuals is emphasized because chemical accidents require swift response and recovery and often result in environmental pollution and casualties. In this regard, the OECD revised its Guidelines for the Prevention, Preparedness, and Response to Chemical Accidents (hereafter, the OECD Guidelines) in June 2023. Moreover, while existing research primarily raises awareness about chemical accidents and highlights the need for a system-wide response including laws, regulations, and manuals, comparative research on the attributes of the manuals themselves is hard to find. This paper therefore compares and analyzes the second and third editions of the OECD Guidelines in order to uncover the information attributes and implications of the revised version. Specifically, TF-IDF (Term Frequency-Inverse Document Frequency) was applied to determine which keywords have become more important, and Word2Vec was applied to identify keywords that are used similarly and those that have become differentiated. Lastly, a 2×2 matrix was proposed, and the topics within each quadrant were identified to provide a deeper comparison of the information attributes of the OECD Guidelines. This study offers a framework to help researchers understand information attributes. From a practical perspective, it appears valuable for the revision of standard manuals by domestic government agencies and chemistry-related corporations.
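
The TF-IDF comparison between two editions can be sketched as follows. This is a hedged, minimal stand-in for the paper's analysis: the two "editions" are invented token lists, a plain TF × log(N/df) weighting is used, and the Word2Vec and quadrant-matrix steps are omitted.

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of token lists (one dict per document)."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    n = len(docs)
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

# Toy stand-ins for the 2nd and 3rd editions of the guidelines.
second = "accident response plan response authority".split()
third = "accident prevention risk risk communication".split()
w2, w3 = tf_idf([second, third])

# Terms whose weight rose in the third edition.
gained = {t for t in w3 if w3[t] > w2.get(t, 0.0)}
print(sorted(gained))  # → ['communication', 'prevention', 'risk']
```

With only two documents, terms shared by both editions get an IDF of zero, so "gained" terms are exactly those distinctive to the revision; over real full-text editions the same comparison surfaces keywords whose importance grew.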

Analysis of ICT Education Trends using Keyword Occurrence Frequency Analysis and CONCOR Technique (키워드 출현 빈도 분석과 CONCOR 기법을 이용한 ICT 교육 동향 분석)

  • Youngseok Lee
    • Journal of Industrial Convergence
    • /
    • v.21 no.1
    • /
    • pp.187-192
    • /
    • 2023
  • In this study, trends in ICT education were investigated by analyzing the appearance frequency of keywords related to machine learning and by using the convergence of iterated correlations (CONCOR) technique. A total of 304 papers published since 2018 in registered journals were retrieved from Google Scholar using "ICT education" as the keyword, and 60 papers pertaining to ICT education were selected through a systematic literature review. Keywords were then extracted from the titles and abstracts of the papers. For word frequency and indicator data, 49 high-frequency keywords were extracted by analyzing term frequency via the term frequency-inverse document frequency technique of natural language processing, along with words with high co-occurrence frequency. The degree of relationship was verified by analyzing the connection structure and degree centrality between words, and clusters of similar words were derived via CONCOR analysis. First, "education," "research," "result," "utilization," and "analysis" were identified as the main keywords. Second, an N-gram network graph with "education" as the keyword showed that "curriculum" and "utilization" exhibit the highest correlation. Third, a cluster analysis with "education" as the keyword yielded five groups: "curriculum," "programming," "student," "improvement," and "information." These results indicate that practical research necessary for ICT education can be conducted by analyzing and identifying ICT education trends.
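
The co-occurrence and centrality steps described here can be sketched with a toy keyword network. This is an illustrative assumption-laden sketch, not the paper's pipeline: the keyword lists are invented, co-occurrence is defined as sharing a paper, and the CONCOR blockmodeling step itself is omitted.

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists standing in for the 60 selected papers.
papers = [
    ["education", "curriculum", "utilization"],
    ["education", "programming", "student"],
    ["education", "curriculum", "analysis"],
    ["research", "analysis", "result"],
]

# Co-occurrence counts: two keywords are linked if they share a paper.
cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

# Degree centrality: number of distinct keywords each word links to.
degree = Counter()
for a, b in cooc:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(3))  # → [('education', 5), ('analysis', 4), ('curriculum', 3)]
```

The resulting co-occurrence matrix is exactly the kind of input that CONCOR (or any blockmodeling/clustering routine) would partition into groups of structurally similar keywords.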

An Analysis of the Current State of Marine Sports through the Analysis of Social Big Data: Use of the Social MatrixTM Method (소셜 빅 데이터분석을 통한 해양스포츠 현황 분석 : 소셜매트릭스TM 기법의 활용)

  • PARK, Tae-Seung
    • Journal of Fisheries and Marine Sciences Education
    • /
    • v.29 no.2
    • /
    • pp.593-606
    • /
    • 2017
  • This study aims to provide preliminary data capable of suggesting future directions by understanding consumer awareness through analysis of SNS-based social big data on marine sports. Windsurfing, yachting, jet skiing, scuba diving, and sea fishing were selected as research subjects, and SNS (Twitter, blog) data covering one month, from January 22 through February 22, 2017, were analyzed for mention frequency, associated words, and so on through the Social MatrixTM service of Daumsoft Co., Ltd. First, the marine sport mentioned most often was the yacht, with 3,273 mentions on Twitter and 2,199 on blogs. Second, the words most frequently associated with marine sports were attributes expressing their unique characteristics, at 6,261 cases in total.

A Study on the Perception of Fashion Platforms and Fashion Smart Factories using Big Data Analysis (빅데이터 분석을 이용한 패션 플랫폼과 패션 스마트 팩토리에 대한 인식 연구)

  • Song, Eun-young
    • Fashion & Textile Research Journal
    • /
    • v.23 no.6
    • /
    • pp.799-809
    • /
    • 2021
  • This study aimed to grasp perceptions and trends concerning fashion platforms and fashion smart factories using big data analysis. As a research method, big data analysis, the fashion platform, and the smart factory were first examined through literature and prior studies, and text mining analysis and network analysis were performed on text collected from the web between April 2019 and April 2021. After data purification with Textom, 10,591 words on the fashion platform and 9,750 words on the fashion smart factory were used for analysis. Key words were derived, their appearance frequencies were calculated, and the results were visualized as a word cloud and N-grams. The top 70 words by appearance frequency were used to generate a matrix, structural equivalence analysis was performed, and the results were displayed as a network visualization and dendrograms. The collected data revealed that the smart factory was a prominent social issue, but consumer interest and academic research were insufficient, while both the amount and frequency of related words on the fashion platform were high. The structural equivalence analysis found that fashion platforms, with strong connectivity between clusters, are creating new competitiveness through service platforms that add sharing, manufacturing, and curation functions, and that fashion smart factories can expect their future value to grow together with digital technology innovation and platforms. This study can serve as a foundation for future research topics related to fashion platforms and smart factories.
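
The N-gram step mentioned in this abstract is simple to illustrate. The snippet below is a generic sketch with invented example texts, not the study's Textom pipeline; it only shows how bigram frequencies are counted from tokenized posts.

```python
from collections import Counter

def ngrams(tokens, n=2):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Invented posts standing in for the collected web text.
posts = [
    "fashion platform service platform curation",
    "smart factory digital technology innovation",
    "fashion platform sharing manufacturing curation",
]

bigrams = Counter()
for post in posts:
    bigrams.update(ngrams(post.split()))

print(bigrams.most_common(2))
```

Ranking bigrams this way is what lets frequently co-appearing word pairs (here "fashion platform") surface before a frequency matrix is built for the structural equivalence analysis.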

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and within them the portion represented as unstructured text has grown geometrically. Since it is impractical to read all of this text, technology for accessing it rapidly and grasping its key points has become important. To meet this need for efficient understanding, many studies on text summarization for handling tremendous amounts of text data have been proposed; in particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, so-called "automatic summarization." However, almost all text summarization methods proposed to date construct the summary based on the frequency of contents in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary includes only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. To avoid this bias, one can summarize with balance between topics so that all subjects can be ascertained, but an unbalanced distribution between those subjects may still remain. To retain subject balance in the summary, it is necessary to consider the proportion of each subject in the original documents and to allocate the subjects' portions equally, so that even sentences of minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that procures balance between all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation metrics, "completeness" and "succinctness": completeness means the summary should fully include the contents of the original documents, and succinctness means the summary contains minimal internal duplication.
The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for each topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms." Because seed terms alone are too few to explain each subject, sufficient similar terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion: after Word2Vec modeling, word vectors are created, and the similarity between all terms is derived using cosine similarity. The higher the cosine similarity between two terms, the stronger their relationship is taken to be. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is finally constructed. The second phase allocates a subject to every sentence in the original documents. To grasp the contents of each sentence, frequency analysis is conducted with the terms composing the subject dictionaries. TF-IDF weights for each subject are then calculated, showing how much each sentence explains each subject. Because TF-IDF weights can grow without bound, the per-subject weights of each sentence are normalized to values between 0 and 1.
Each sentence is then assigned the subject with its maximum TF-IDF weight, yielding a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute similarity between the sentences of each subject, forming a similarity matrix, and through repeated sentence selection a summary is generated that fully covers the contents of the original documents while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summaries and frequency-based summaries verified that the proposed method better retains the balance of all the subjects the documents originally have.
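
The subject-allocation and balanced-selection phases above can be sketched with toy data. This is a hedged simplification of the paper's method: the sentence names and normalized per-subject weights are invented, and the Sen2Vec redundancy check is replaced by a simple round-robin pick across subject groups.

```python
from collections import defaultdict

# Sentence -> normalized TF-IDF weight per subject (illustrative values).
weights = {
    "s1": {"room": 0.9, "food": 0.1},
    "s2": {"room": 0.8, "food": 0.2},
    "s3": {"food": 0.7, "room": 0.3},
    "s4": {"service": 0.6, "room": 0.4},
}

# Phase 2: allocate each sentence to its maximum-weight subject.
groups = defaultdict(list)
for sent, w in weights.items():
    subject = max(w, key=w.get)
    groups[subject].append((w[subject], sent))

# Phase 3 (simplified): pick sentences round-robin across subjects so
# minor subjects ("food", "service") still reach the summary.
summary, rounds = [], 2
for _ in range(rounds):
    for subject in sorted(groups):
        if groups[subject]:
            groups[subject].sort(reverse=True)   # highest weight first
            summary.append(groups[subject].pop(0)[1])

print(summary)  # → ['s3', 's1', 's4', 's2']
```

A pure frequency-based summary would fill its budget with "room" sentences first; the round-robin pass is what guarantees every subject group contributes before any group contributes twice.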

Analysis of the Yearbook from the Korea Meteorological Administration using a text-mining algorithm (텍스트 마이닝 알고리즘을 이용한 기상청 기상연감 자료 분석)

  • Sun, Hyunseok;Lim, Changwon;Lee, YungSeop
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.4
    • /
    • pp.603-613
    • /
    • 2017
  • Many people now post about their personal interests on social media. The development of the Internet and computer technology has enabled documents to be stored in digital form, resulting in an explosion in the amount of textual data generated; consequently, there is increased demand for technology that creates valuable information from large numbers of documents. Text mining techniques are often used because text-based data is mostly unstructured and thus not directly suitable for statistical analysis or data mining techniques. This study analyzed the Meteorological Yearbook data of the Korea Meteorological Administration (KMA) with a text mining technique. First, a term dictionary was constructed through preprocessing and a term-document matrix was generated. The term dictionary was then used to calculate the annual frequency of each term and to observe changes in the relative frequency of frequently appearing words. We also used regression analysis to identify terms with increasing and decreasing trends. We analyzed trends in the Meteorological Yearbook of the KMA, covering weather-related news, weather status, and the work the KMA focused on. This study aims to provide useful information that can help analyze and improve meteorological services and inform meteorological policy.
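
The regression-based trend test described above amounts to fitting a line to each term's annual relative frequency and checking the sign of the slope. The sketch below uses made-up years, terms, and frequencies purely for illustration.

```python
def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = [2012, 2013, 2014, 2015, 2016]
freq_heatwave = [0.01, 0.02, 0.02, 0.04, 0.05]  # invented rising term
freq_typhoon = [0.05, 0.04, 0.04, 0.03, 0.02]   # invented declining term

print(slope(years, freq_heatwave) > 0)  # increasing trend
print(slope(years, freq_typhoon) < 0)   # decreasing trend
```

In practice one would also test whether each slope differs significantly from zero before declaring a trend; only the point estimate is shown here.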

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; among these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing them from SNS such as Twitter. The collected data were given to two researchers, who decided whether each item was related to cybercriminality, particularly financial fraud. We then selected vocabularies related to nominals and symbols as keywords. With the selected keywords, we searched and collected data from web sources such as Twitter, news, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. The preprocessing process is divided into a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, complex sentences are decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text.
In the part-of-speech selection step, only nouns and symbols are retained: nouns refer to things and thus express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols tend to be used. Each selected item is labeled 'legal' or 'illegal'; classifying each item in this way is necessary to turn the selected data into learning data. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set (70%) and a test data set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in general cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), term frequency, and collective intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, apparently superior to that of term frequency, MLE, and the other methods. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for crisis management caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
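
The document-term matrix construction and 70/30 split described above can be sketched as follows. The posts and labels below are invented stand-ins for the labeled SNS data, and the SVM training itself (gamma 0.5, cost 10, typically via a library such as scikit-learn) is omitted; only the data preparation steps are shown.

```python
import random
from collections import Counter

# Invented labeled SNS posts ('illegal' loan ads vs. 'legal' posts).
posts = [
    ("quick loan no credit check call now $$", "illegal"),
    ("private loan same day cash $$", "illegal"),
    ("bank announces new savings product", "legal"),
    ("city council meeting on housing loans", "legal"),
] * 5  # repeat to get a non-trivial split

# Document-term matrix: one term-count vector per post over a shared vocab.
vocab = sorted({w for text, _ in posts for w in text.split()})
def vectorize(text):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

data = [(vectorize(text), label) for text, label in posts]

# 70/30 train/test split, as in the paper.
random.seed(0)
random.shuffle(data)
cut = int(len(data) * 0.7)
train, test = data[:cut], data[cut:]
print(len(train), len(test))  # → 14 6
```

Each row of `data` is a fixed-length count vector plus its label, which is exactly the input shape a discrimination algorithm like SVM expects.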

Social Perception of the Invention Education Center as seen in Big Data (빅데이터 분석을 통한 발명 교육 센터에 대한 사회적 인식)

  • Lee, Eun-Sang
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.1
    • /
    • pp.71-80
    • /
    • 2022
  • The purpose of this study is to analyze social perceptions of the invention education center using big data analysis. Data from January 2014 to September 2021 were collected through the Textom website using 'invention+education+center' as the search keyword in the blogs, cafes, and news channels of the NAVER and DAUM websites. The collected data were refined using the Textom website, and text mining analysis and semantic network analysis were performed with the Textom website, Ucinet 6, and the Netdraw program. After primary and secondary refinement, 60 keywords were selected based on word frequency. The selected keywords were converted into matrix data and analyzed by semantic network analysis. The text mining analysis confirmed that 'student', 'operation', 'Korea Invention Promotion Association', and 'Korean Intellectual Property Office' were meaningful keywords. The semantic network analysis identified five clusters: 'educational operation', 'invention contest', 'education process and progress', 'recruitment and support for business', and 'supervision and selection institution'. Through this study, it was possible to confirm various meaningful social perceptions of the general public regarding the invention education center on the Internet. The results will serve as basic data offering meaningful implications for researchers and policy makers studying invention education.