• Title/Summary/Keyword: Frequency-based Text Analysis


Study on the Viewers' Perception of Investigative Journalism Before and After Pandemic Using Big Data (빅데이터를 활용한 팬데믹 전후 탐사보도프로그램에 대한 시청자 인식연구)

  • Kyunghee Kim;Soonchul Kwon;Seunghyun Lee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.311-320
    • /
    • 2023
  • This paper analyzes viewers' perception of investigative journalism before and after COVID-19 and examines the direction of investigative journalism using big data. Building on previous research framed as a social science model, it investigates the relationships among words related to TV current affairs programs and investigative journalism before and after the emergence of COVID-19. We visualized changes in viewers' perception of investigative journalism by analyzing text data collected through Textom, using TV current affairs programs and investigative journalism as keywords. Data were collected from 2017 to June 2022 and refined for analysis. We visualized degree centrality using Ucinet 6.0 and NetDraw, and clustered keywords by number and frequency using CONCOR analysis. The study found a clear change in viewer perception before and after the pandemic. As an implication, big data analysis was conducted with investigative journalism as the main keyword, and a direction for investigative journalism was presented based on the analysis. Furthermore, based on previous research, we suggest effective approaches for post-pandemic investigative journalism to better engage viewers.
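The frequency, co-occurrence, and centrality steps this abstract describes can be sketched in a few lines; the following is a minimal Python illustration with made-up keyword lists, not the study's actual Textom/Ucinet pipeline:

```python
from collections import Counter
from itertools import combinations

# Toy documents standing in for refined text data; the keywords here are
# hypothetical, not the study's actual Textom output.
docs = [
    ["investigative", "journalism", "pandemic"],
    ["journalism", "viewer", "perception"],
    ["pandemic", "viewer", "journalism"],
]

# Count pairwise keyword co-occurrence within each document.
cooc = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1

# Degree centrality: how many distinct keywords each keyword co-occurs with.
degree = Counter()
for a, b in cooc:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common())
```

Degree centrality here simply counts how many distinct keywords each keyword co-occurs with; tools such as Ucinet and NetDraw compute the same quantity (among others) on the full co-occurrence network.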

Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.57-78
    • /
    • 2020
  • With the development of the Internet, consumers can easily check product information through E-Commerce. Product reviews used in the purchasing process are based on user experience, allowing consumers to act as producers of information as well as consumers of it. This can increase the efficiency of purchasing decisions from the consumer's perspective, and from the seller's point of view it can help with product development and strengthen competitiveness. However, it takes a great deal of time and effort for consumers to read the vast number of product reviews offered by E-Commerce sites and understand the overall assessment, and the assessment dimensions they consider important, for the products they want to compare. This is because product reviews are unstructured information, and it is difficult to grasp their sentiment and assessment dimensions at a glance. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products along each dimension, such as performance, weight, delivery, speed, and design. Therefore, in this paper, we propose a method to automatically generate multi-dimensional product assessment scores from the reviews of products to be compared. The presented method consists largely of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are created from reviews of a large-category product group. By combining word embedding and association analysis, the dimension classification model compensates for the limitation of word-embedding approaches in existing studies, which consider only the distance between words within sentences.
The sentiment analysis model is built as a CNN trained on learning data tagged positive or negative at the phrase unit, for accurate polarity detection. The individual product scoring phase then applies the pre-prepared models to phrase-unit reviews. Phrases judged to describe a specific assessment dimension are grouped by dimension, and multi-dimensional assessment scores are obtained by aggregating their polarity in proportion to the reviews so organized. In the experiments of this paper, approximately 260,000 reviews of the large-category product group were collected to build the dimension classification model and the sentiment analysis model. In addition, reviews of the laptops of companies S and L sold on E-Commerce were collected and used as experimental data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis indicating the frequency between words and dimensions. As a result of combining word embedding and association analysis, the accuracy of the model increased by 13.7%. The sentiment analysis model analyzed assessments more closely when trained on phrase units rather than sentences; its accuracy was 29.4% higher than that of the sentence-based model. Through this study, both sellers and consumers can expect efficient decision-making in purchasing and product development, given that they can make multi-dimensional comparisons of products. In addition, text reviews, which are unstructured data, were transformed into objective values such as frequencies and morphemes and analyzed using word embedding and association analysis together, improving the objectivity of the multi-dimensional analysis.
This will be an attractive analysis model in that it not only enables more effective service deployment in the evolving and fiercely competitive E-Commerce market, but also satisfies both sellers and customers.
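The scoring phase described above, where phrase-level dimension tags and polarities are aggregated into per-dimension scores, can be sketched roughly as follows. The phrase tags are hypothetical stand-ins for the outputs of the paper's dimension classification and CNN sentiment models:

```python
from collections import defaultdict

# Hypothetical phrase-level outputs: each phrase tagged with a predicted
# assessment dimension and a polarity (+1 positive, -1 negative); the
# dimensions mirror the laptop example in the abstract.
phrases = [
    ("performance", 1), ("performance", 1), ("performance", -1),
    ("weight", 1), ("delivery", -1), ("delivery", -1),
]

def dimension_scores(tagged):
    """Aggregate phrase polarities into a positive-share score per dimension."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for dim, polarity in tagged:
        total[dim] += 1
        if polarity > 0:
            pos[dim] += 1
    return {dim: pos[dim] / total[dim] for dim in total}

scores = dimension_scores(phrases)
print(scores)
```

The aggregation rule (share of positive phrases per dimension) is one simple choice; the paper aggregates by the proportion of phrases assigned to each dimension, which this sketch approximates.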

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing volume of content is becoming increasingly important as more and more content is generated. Amid this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate its performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be applied in practice. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks (the top 30 items by publication frequency from May 30, 2017 to May 21, 2018) are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, as many score functions as there are stocks are trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we assess its predictive power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite several constraints on the research. Looking at the prediction performance for each stock, only 3 stocks (LG ELECTRONICS, KiaMtr, and Mando) show performance far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limitations remain to be addressed; most notably, the especially poor performance on a few stocks indicates the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with related stocks.
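The per-stock score functions and the hit-ratio evaluation can be illustrated with a toy sketch; the linear score functions, stock names, and entities below are hypothetical stand-ins for the trained neural tensor network:

```python
# Hypothetical vocabulary of extracted entities and per-stock weights; the
# linear score functions stand in for the trained neural tensor network.
vocab = ["battery", "display", "airbag"]
weights = {
    "ElectronicsCo": [0.9, 0.8, 0.1],
    "AutoCo":        [0.1, 0.2, 0.9],
}

def one_hot(entity):
    return [1.0 if w == entity else 0.0 for w in vocab]

def score(stock, vec):
    return sum(w * x for w, x in zip(weights[stock], vec))

def predict(entity):
    """Assign an entity to the stock whose score function rates it highest."""
    vec = one_hot(entity)
    return max(weights, key=lambda s: score(s, vec))

def hit_ratio(pairs):
    """Fraction of (entity, stock) pairs whose prediction matches the label."""
    return sum(predict(e) == s for e, s in pairs) / len(pairs)

test_set = [("battery", "ElectronicsCo"), ("airbag", "AutoCo")]
ratio = hit_ratio(test_set)
print(ratio)
```

The study's actual score functions are bilinear tensor forms learned per stock; the evaluation logic (argmax over score functions, then hit ratio over the testing set) is the same shape as this sketch.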

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.53-77
    • /
    • 2012
  • This study analyses the differences in contents and tones of argument among three major Korean newspapers: the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when they cover sensitive issues and topics. It could be problematic if readers consume the news without being aware of a paper's tone of argument, because contents and tones of argument can easily influence readers. Thus it is very desirable to have a tool that can inform readers of a newspaper's tone of argument. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers (Culture, Politics, International, Editorial-opinion, Eco-business, and National issues) and attempt to identify differences and similarities among the papers. The basic unit of text mining analysis is a paragraph of a news article. This study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean integrated news database system, which preserves news articles of the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo and makes them open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered for specific issues: the International section with the keyword 'Nuclear weapon of North Korea', the National issues section with the keyword '4-major-river', and the Politics section with the keyword 'Tonghap-Jinbo Dang'. All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected.
All of the collected data were handled and edited into paragraphs. We removed stop-words using the Lucene Korean Module. We calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each paper, we examined the list of the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, closely examining their relationships and showing a detailed network map of the keywords. We used NodeXL software to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how each newspaper's tone of argument differs from the others'. Then, to analyze tones of argument, all paragraphs were divided into two types, positive tone and negative tone. To identify and classify the tones of all collected paragraphs and articles, a supervised learning technique was used: the Naïve Bayes classifier provided in the MALLET package classified all paragraphs in the articles. After classification, precision, recall, and F-value were used to evaluate the classification results. Based on the results of this study, three subjects (Culture, Eco-business, and Politics) showed differences in contents and tones of argument among the three newspapers. In addition, for National issues, tones of argument on the 4-major-rivers project differed from paper to paper. The three newspapers appear to have their own specific tones of argument in those sections, and the keyword networks showed different shapes from one another in the same period and section.
This means that the frequently appearing keywords in the articles differ, and their contents are composed of different keywords. The positive-negative classification also showed the possibility of classifying newspapers' tones of argument relative to one another. These results indicate that the approach in this study is promising as a new tool to identify the differing tones of argument of newspapers.
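The step from keyword co-occurrence counts to the cosine coefficient matrix used as PFNet input can be sketched as follows; the keyword vectors are toy values, not drawn from the newspaper corpus:

```python
import math

# Toy co-occurrence vectors for three keywords (rows of a co-occurrence
# matrix); the keywords and counts are illustrative, not from the corpus.
cooc = {
    "nuclear": [3, 1, 0],
    "weapon":  [2, 1, 0],
    "river":   [0, 0, 4],
}

def cosine(u, v):
    """Cosine coefficient between two co-occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Cosine coefficient matrix, the input handed to PFNet in the study.
keys = sorted(cooc)
matrix = {(a, b): cosine(cooc[a], cooc[b]) for a in keys for b in keys}
print(round(matrix[("nuclear", "weapon")], 3))
```

PFNet then prunes this similarity matrix down to the most salient links, which is what the keyword network maps visualize.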

Comparative Analysis of Low Fertility Response Policies (Focusing on Unstructured Data on Parental Leave and Child Allowance) (저출산 대응 정책 비교분석 (육아휴직과 아동수당의 비정형 데이터 중심으로))

  • Eun-Young Keum;Do-Hee Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.769-778
    • /
    • 2023
  • This study compared and analyzed parental leave and child allowance, two major policies among the responses to the current serious low fertility problem, using unstructured data, and on this basis sought future directions and implications for related policies. The collection keywords were "low fertility + parental leave" and "low fertility + child allowance", and data analysis proceeded in the following order: text frequency analysis, centrality analysis, network visualization, and CONCOR analysis. As a result of the analysis, first, parental leave was found to be a realistic and practical policy response to low fertility, as the data showed more diverse and systematic discussions than for child allowance. Second, for child allowance, the data showed a high level of information about and interest in cash grant benefit systems, including child allowance, but no other distinctive features or active discussion. As future improvements, both policies need to build on the existing systems: parental leave requires improvements in the working environment and in its blind spots so that the system can be expanded, while child allowance requires a move away from the uniform, one-size-fits-all form of payment, and an expansion of the eligible age range was proposed.

Diagnosis and Evaluation of Humanities Therapy: The Phonetic Analysis of Speech Rates and Fundamental Frequency According to Preferred Sensation Type (인문치료의 진단 및 평가: 감각유형에 따른 말속도와 기본주파수의 실험음성학적 분석)

  • Lee, Chan-Jong;Heo, Yun-Ju
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.4
    • /
    • pp.231-237
    • /
    • 2011
  • The purpose of this study is to examine the correlation between preferred sensation type and speech sounds, especially F0 and speech rate. Data on sensation types and speech sounds were collected from 36 undergraduate and graduate students (17 male, 19 female). Subjects were asked to read a given text (400 syllables), describe a drawing, and answer some questions. We measured the speakers' F0 and speech rates. The results show that type V (Visual) correlates with speech rate when type D (Digital) is excluded, and type A (Auditory) correlates with speech rate when type D is included. Furthermore, the analysis of the mean values of V, A, and K (Visual, Auditory, Kinesthetic) indicates that type V is characterized by faster speech rates and higher F0 in all parts except the interview, and the same holds for the analysis of V, A, K, and D (Visual, Auditory, Kinesthetic, Digital) in all parts. In conclusion, this study showed that preferred sensation type correlates with F0 and speech rate. Based on these results, F0 and speech rate can be used to analyze sensation types for individualized education as well as consultation. In addition, this study is significant in that it lays a foundation for research on the correlation between preferred sensation type and speech sounds.
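The core statistic in a correlation study like this is Pearson's r between speech rate and F0; a minimal sketch with illustrative per-speaker values (not the study's measurements) looks like:

```python
import math

# Illustrative per-speaker values, not the study's measurements.
rates = [5.1, 5.8, 6.2, 6.9]          # speech rate (syllables/sec)
f0s = [180.0, 195.0, 205.0, 220.0]    # mean F0 (Hz)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(rates, f0s)
print(round(r, 3))
```

With real data one would also report a significance test on r; this sketch shows only the coefficient itself.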

Developing an Intelligent System for the Analysis of Signs Of Disaster (인적재난사고사례기반의 새로운 재난전조정보 등급판정 연구)

  • Lee, Young Jai
    • Journal of Korean Society of societal Security
    • /
    • v.4 no.2
    • /
    • pp.29-40
    • /
    • 2011
  • The objective of this paper is to develop an intelligent decision support system that can advise on disaster countermeasures and the severity of incidents on the basis of collected and analyzed signs of disasters. Concepts from ontology, text mining, and case-based reasoning are adapted to design the system. Its functions include a term-document matrix, frequency normalization, confidence, association rules, and criteria for judgment. Qualitative data collected from the signs of new incidents are processed by these functions and finally compared and reasoned against similar past disaster cases. The system provides the disaster manager with an assessment of how dangerous the new signs of disaster are, together with a small set of countermeasures. It will help decision-makers judge the danger posed by signs of disaster and carry out specific countermeasures in advance, so that the disaster can be prevented.
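Two of the listed functions, the term-document matrix and association-rule confidence, can be sketched as below; the incident terms are hypothetical placeholders, not the system's actual case data:

```python
# Hypothetical incident descriptions reduced to term sets; real input would
# be refined text from collected signs of disaster.
docs = [
    {"crack", "bridge", "vibration"},
    {"crack", "bridge"},
    {"vibration", "noise"},
]

# Binary term-document matrix: one row per term, one column per document.
terms = sorted(set().union(*docs))
tdm = {t: [1 if t in d else 0 for d in docs] for t in terms}

def confidence(antecedent, consequent):
    """Association-rule confidence: P(consequent | antecedent) over documents."""
    both = sum(1 for d in docs if antecedent in d and consequent in d)
    ante = sum(1 for d in docs if antecedent in d)
    return both / ante if ante else 0.0

print(confidence("crack", "bridge"))
```

Rules with high confidence link new signs to past cases; the system's grading criteria would then rank how dangerous the new signs are.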


Differences in Metacognitive Awareness of Reading Strategy Use in English Test-typed Text Reading between Gifted English Language Learners and General Middle School Learners (영어 평가 지문 읽기에서 영어 영재 학생과 일반 중학생의 메타인지 읽기전략 사용 차이에 대한 연구)

  • Bang, Jyun
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.1
    • /
    • pp.345-355
    • /
    • 2020
  • The purpose of this study was to explore the differences in metacognitive awareness of reading strategies that gifted English language learners (GELLs) and general middle school learners (GMSLs) used while reading English test-typed texts. 74 GELLs in a gifted program in C city and 90 GMSLs in the southern part of C city participated in this study. The MARSI questionnaire was administered to the GELLs and GMSLs at the end of the semester. Frequency analysis and t-tests were used to examine the differences in metacognitive awareness of reading strategy use between GELLs and GMSLs when reading the English test-typed texts. Based on the analysis, the study found that GELLs were likely to use metacognitive reading strategies more frequently than GMSLs. GELLs also tended to use more global and problem-solving strategies than GMSLs. However, there was no significant difference in support strategy use between the two groups. In conclusion, the study suggests pedagogical implications for effective English reading by GELLs and GMSLs.
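The group comparison rests on the independent-samples t statistic; a minimal pooled-variance sketch with illustrative strategy-use scores (not the study's data) is:

```python
import math

# Illustrative strategy-use scores, not the study's data.
gifted = [4.2, 4.0, 4.5, 4.3]
general = [3.6, 3.8, 3.5, 3.7]

def t_statistic(a, b):
    """Independent-samples t statistic, pooled variance (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = t_statistic(gifted, general)
print(round(t, 2))
```

The statistic is then compared against the t distribution with na + nb - 2 degrees of freedom to decide significance, which this sketch omits.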

Analysis of Digital Divide in Transportation Section (교통부문 디지털 격차 현황 분석)

  • Ah-hae Cho;Jihun Seo;Jungwoo Cho;Sunghoon Kim;Youngho Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.4
    • /
    • pp.145-166
    • /
    • 2023
  • The COVID-19 pandemic has led to a widespread shift toward non-face-to-face, uncrewed services in various sectors of society. Despite this, research on the digital divide has focused predominantly on analyzing general factors, with a notable absence of studies addressing the digital divide in transportation. Therefore, this study examined the current digital divide in the transportation sector through a survey-based approach. First, a nationwide survey was conducted among adult men and women to assess their digital device usage. Groups vulnerable to digital disparities were identified based on factors such as age, education, and income. Second, a comparative analysis examined the usage patterns of transportation-related mobile applications in the vulnerable and non-vulnerable groups using chi-squared tests. The findings show that the vulnerable group exhibited lower awareness of and preference for mobile applications, as well as a significantly lower frequency of application usage than the non-vulnerable group. Finally, a comparison of proficiency in using transportation-sector mobile applications showed that the vulnerable group demonstrated significantly lower proficiency across all aspects of the application usage procedure. These survey results provide a valuable foundation for future policy formulation to reduce the digital divide in the transportation sector. By highlighting the current state of digital disparities, the research contributes to developing evidence-based strategies to enhance inclusivity and equal access to digital services in transportation.
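The chi-squared comparison of usage patterns between the two groups can be sketched for a 2x2 contingency table; the counts below are illustrative only, not the survey's results:

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    obs = [[a, b], [c, d]]
    # Sum of (observed - expected)^2 / expected over the four cells.
    return sum(
        (obs[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
        for i in range(2)
        for j in range(2)
    )

# Rows: vulnerable / non-vulnerable; columns: uses transport apps / does not.
stat = chi_squared_2x2([[30, 70], [60, 40]])
print(round(stat, 2))
```

The statistic is compared against the chi-squared distribution with 1 degree of freedom; in practice a library routine such as SciPy's contingency-table test would also apply a continuity correction.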

Web Site Keyword Selection Method by Considering Semantic Similarity Based on Word2Vec (Word2Vec 기반의 의미적 유사도를 고려한 웹사이트 키워드 선택 기법)

  • Lee, Donghun;Kim, Kwanho
    • The Journal of Society for e-Business Studies
    • /
    • v.23 no.2
    • /
    • pp.83-96
    • /
    • 2018
  • Extracting keywords that represent documents is very important because it enables automated services such as document search, classification, and recommendation, as well as quick transmission of document information. However, when extracting keywords based on the frequency of words appearing in a web site's documents, or with graph algorithms based on word co-occurrence, two problems arise: the web page structure may contain many words unrelated to the topic, and the limited performance of Korean tokenizers makes it difficult to extract semantically meaningful keywords. In this paper, we propose a method that selects candidate keywords based on semantic similarity, addressing both the failure to extract semantic keywords and the poor accuracy of Korean tokenizer analysis. Finally, we extract the final semantic keywords through a filtering process that removes inconsistent keywords. Experimental results on real web pages of small businesses show that the performance of the proposed method improves by 34.52% over a statistical-similarity-based keyword selection technique. This confirms that keyword extraction performance improves when semantic similarity between words is considered and inconsistent keywords are removed.
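The semantic filtering idea, keeping only candidates whose embedding lies close to the page topic, can be sketched with toy vectors standing in for real Word2Vec embeddings; the words, vectors, and threshold are all illustrative:

```python
import math

# Toy 3-d vectors standing in for Word2Vec embeddings; "login" plays the
# role of a boilerplate word unrelated to the page topic.
embeddings = {
    "restaurant": [0.9, 0.1, 0.0],
    "menu":       [0.8, 0.2, 0.1],
    "login":      [0.0, 0.1, 0.9],
}
topic = [1.0, 0.0, 0.0]   # hypothetical topic vector for the web site

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def select_keywords(candidates, topic_vec, threshold=0.5):
    """Keep candidates whose embedding is semantically close to the topic."""
    return [w for w in candidates if cosine(embeddings[w], topic_vec) > threshold]

selected = select_keywords(embeddings, topic)
print(selected)
```

In practice the embeddings would come from a Word2Vec model trained on the site's documents, and the topic vector might be an average of high-confidence seed keywords; the inconsistency filter the paper describes would then prune the survivors further.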