
Design Evaluation Model Based on Consumer Values: Three-step Approach from Product Attributes, Perceived Attributes, to Consumer Values (소비자 가치기반 디자인 평가 모형: 제품 속성, 인지 속성, 소비자 가치의 3단계 접근)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.57-76 / 2017
  • Recently, consumer needs have been diversifying as information technologies evolve rapidly, and many IT devices, such as smartphones and tablet PCs, are being launched following this trend. While IT devices competed on technical advances a few years ago, the situation has changed: with little remaining difference in functionality, companies now try to differentiate IT devices through appearance design. Consumers likewise treat design as an increasingly important factor when choosing a smartphone, and smartphones have become fashion items that reveal their owners' characteristics and personality. As design and appearance grow in importance, it becomes necessary to examine the consumer values created by the design of IT devices, to clarify the mechanism of consumers' design evaluation, and to develop a design evaluation model based on that mechanism. Because the influence of design keeps growing, many design-related studies have been carried out, and they can be classified into three main streams. The first focuses on the role of design from the perspective of marketing and communication. The second seeks effective and appealing designs from the perspective of industrial design. The third examines the consumer values created by a product design, that is, what consumers perceive or feel when they look at and handle the product. These studies deal with consumer values to some extent, but they either omit product attributes or fail to cover the whole process from product attributes to consumer values. In this study, we develop a holistic design evaluation model based on consumer values, using a three-step approach from product attributes through perceived attributes to consumer values. Product attributes are the real, physical characteristics of each smartphone: bezel, length, width, thickness, weight, and curvature. Perceived attributes are derived from consumers' perception of the product attributes; we consider perceived size of device, perceived size of display, perceived thickness, perceived weight, perceived bezel (top-bottom / left-right side), perceived curvature of edge, perceived curvature of back side, gap of each part, perceived gloss, and perceived screen ratio. These are factorized into six clusters named 'Size,' 'Slimness,' 'No-Frame,' 'Roundness,' 'Screen Ratio,' and 'Looseness.' We conducted qualitative research to identify consumer values, which fall into two categories: look values and feel values. In terms of look values, we identified 'Silhouette,' 'Neatness,' 'Attractiveness,' 'Polishing,' 'Innovativeness,' 'Professionalism,' 'Intellectualness,' 'Individuality,' and 'Distinctiveness'; in terms of feel values, we identified 'Stability,' 'Comfortableness,' 'Grip,' 'Solidity,' 'Non-fragility,' and 'Smoothness.' These are factorized into five key values: 'Sleek Value,' 'Professional Value,' 'Unique Value,' 'Comfortable Value,' and 'Solid Value.' Finally, we developed the holistic design evaluation model by analyzing the relationships from product attributes through perceived attributes to consumer values. This study makes several theoretical and practical contributions. First, we identified consumer values relevant to design evaluation and the implicit chain from objective, physical characteristics to subjective, mental evaluation; that is, the model explains the mechanism of design evaluation in consumers' minds. Second, we suggest a general design evaluation process from product attributes and perceived attributes to consumer values, a methodology adaptable not only to smartphones but also to other IT products. Practically, this model can support decision-making when companies initiate new product development, and it can help product designers focus their limited resources. Moreover, if the model is combined with machine learning over consumers' purchasing data, most-preferred values, sales data, and so on, it can evolve into an intelligent design decision support system.
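To make the three-step chain concrete, here is a minimal sketch, not the authors' model, of how perceived-attribute ratings might be factorized into latent clusters and then related to one consumer value. The data, the number of attributes, and the use of scikit-learn's FactorAnalysis plus a linear regression are all illustrative assumptions.

```python
# Hypothetical sketch of the abstract's pipeline: perceived attributes ->
# latent factors -> one consumer value. Not the authors' actual analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_respondents = 200
# Hypothetical 1-7 Likert ratings on 10 perceived attributes
# (perceived size, thickness, weight, bezel, curvature, ...).
perceived = rng.integers(1, 8, size=(n_respondents, 10)).astype(float)
# Hypothetical rating of one consumer value (e.g., 'Sleek Value').
sleek_value = rng.integers(1, 8, size=n_respondents).astype(float)

# Step 2: factorize perceived attributes into latent clusters
# (the paper reports six: Size, Slimness, No-Frame, Roundness, ...).
fa = FactorAnalysis(n_components=6, random_state=0)
factor_scores = fa.fit_transform(perceived)

# Step 3: relate the latent perceived-attribute factors to the value.
reg = LinearRegression().fit(factor_scores, sleek_value)
print("R^2:", reg.score(factor_scores, sleek_value))
print("factor weights:", reg.coef_.round(2))
```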

A Study on the Development of a Web-based STS Instruction Model for the Scientifically Gifted Students - Centered on Biology Education - (과학영재교육을 위한 웹기반 STS수업모형 개발-생물교육을 중심으로-)

  • Lim, Gil-Sun;Jeong, Wan-Ho
    • Journal of The Korean Association For Science Education / v.24 no.5 / pp.851-868 / 2004
  • The main purpose of this study is to develop a web-based STS biology instruction program (WB-STS) for scientifically gifted students. The specific research questions were as follows: 1. How can the WB-STS for biology education be developed, and what are its primary components? 2. Does the developed WB-STS have proper validity for biology education? To answer these questions, several procedures were applied. First, in order to develop the WB-STS for scientifically gifted students, the main characteristics of NCISE, Renzulli's Enrichment Triad Model, and the Iowa Chautauqua program were analyzed systematically, and the principles and general process for constructing the WB-STS were examined. Additionally, the needs of students and the goals of biology education were identified thoroughly, and all these ideas were embodied in an agenda for constructing the WB-STS. Second, to analyze the validity and utility of the developed WB-STS, a questionnaire was developed and submitted to seven specialists and a group of twenty students who would later participate in the experiment. The main results of the study are summarized below. First, the WB-STS appears to have been successfully constructed based on Renzulli's Enrichment Triad Model and the Iowa Chautauqua program. Its main features are a learner-centered approach and constructive learning, and it is composed of five steps: Scientific Theme Selection → Exploration → Concept & Principle Check → Finding Solution → Action. Second, the seven specialists and the group of students assessed the developed WB-STS's validity and utility with a questionnaire, and the results were satisfactory. Students showed high interest in the WB-STS and evaluated it positively.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation is time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, whose aim is to select better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, it can last up to ten minutes. It would be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms were employed. By doing so, the initial population contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms and is supplemented with approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm compared with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step; hence, the verified parameter values from Ref. [2], such as the probability of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size. A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% were measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
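The hybrid initialization the abstract describes can be illustrated with a toy sketch. Everything below is a hypothetical reconstruction, not the authors' code: the hologram size, the FFT-based efficiency fitness, and the placeholder nn_seed() standing in for the trained network's output are all assumptions; only the mutation probability (0.001) and seeding idea come from the text.

```python
# Toy sketch of the hybrid GA: part of the initial population is seeded
# by a (placeholder) neural network instead of being purely random.
import numpy as np

rng = np.random.default_rng(0)
N, POP, SEEDED = 32, 30, 10              # hologram size, population, NN-seeded count
target = np.zeros((N, N))
target[12:20, 12:20] = 1.0               # desired far-field region

def fitness(holo):
    # Binary phase 0/pi -> complex field; far field via FFT.
    field = np.fft.fftshift(np.fft.fft2(np.exp(1j * np.pi * holo)))
    inten = np.abs(field) ** 2
    return (inten * target).sum() / inten.sum()   # ~diffraction efficiency

def nn_seed():
    # Placeholder for a trained ANN's approximately-desired hologram.
    return (rng.random((N, N)) < 0.5).astype(int)

pop = [nn_seed() for _ in range(SEEDED)] + \
      [rng.integers(0, 2, (N, N)) for _ in range(POP - SEEDED)]

for _ in range(200):                     # classical GA loop
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = []
    for _ in range(POP - len(parents)):
        a, b = rng.choice(len(parents), size=2, replace=False)
        mask = rng.integers(0, 2, (N, N)).astype(bool)   # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        flip = rng.random((N, N)) < 0.001                # mutation p = 0.001
        children.append(np.where(flip, 1 - child, child))
    pop = parents + children

pop.sort(key=fitness, reverse=True)
print("best efficiency:", round(fitness(pop[0]), 3))
```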


Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research can be classified into approaches using structured data and approaches using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually applied technical and fundamental analysis. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can extract meaning by quantifying text, an unstructured data type that carries a large amount of information, have developed quickly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock's price using news about the target company. However, according to previous research, not only does news about a target company affect its stock price; news about related companies can affect it as well. Finding highly relevant companies is not easy, though, because of market-wide effects and random signals. Thus, existing studies have identified relevant companies primarily through pre-determined international industry classification standards. Yet recent research shows that the sectors of the Global Industry Classification Standard differ in internal homogeneity, so forecasting stock prices by taking all sector members together, without isolating the truly relevant companies, can hurt predictive performance. To overcome this limitation, we are the first to use random matrix theory together with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable because statistical efficiency is reduced; a simple correlation analysis in the financial market therefore does not reflect the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously: each kernel is assigned to predict stock prices from features of the financial news of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for relevant companies, looking in the wrong way can lower the AI's predictive performance. (3) The proposed approach with random matrix theory performs better than previous studies when cluster analysis is based on the true correlation, with market-wide effects and random signals removed. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce a sound methodology. This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending existing research that integrated artificial intelligence with complex-system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue; it is not only important to study artificial intelligence algorithms but also to theoretically justify the input values. Third, we confirmed that firms grouped under the Global Industry Classification Standard (GICS) may have low relevance to one another and suggest that it is necessary to define relevance theoretically rather than simply reading it off the GICS.
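The random-matrix filtering step is standard enough to sketch. Below is a minimal illustration on synthetic returns, not the paper's data or code: eigenvalues of the correlation matrix inside the Marchenko-Pastur band are treated as random, the largest eigenvalue is treated as the market-wide mode, and only the remaining modes are kept as "true" correlation structure for clustering.

```python
# Sketch of RMT filtering of a return-correlation matrix (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 50                                  # trading days, number of firms
returns = rng.standard_normal((T, N))
returns += 0.3 * rng.standard_normal((T, 1))            # market-wide factor
returns[:, :10] += 0.5 * rng.standard_normal((T, 1))    # one 'true' sector

C = np.corrcoef(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)

q = N / T
lam_max = (1 + np.sqrt(q)) ** 2                 # Marchenko-Pastur upper edge
informative = eigval > lam_max                  # above the random band
informative[np.argmax(eigval)] = False          # drop the market mode too

C_filtered = np.zeros_like(C)
for i in np.flatnonzero(informative):
    C_filtered += eigval[i] * np.outer(eigvec[:, i], eigvec[:, i])

print("modes kept:", int(informative.sum()))
# C_filtered can now feed cluster analysis to find truly related firms.
```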

A Study on the Relation between Matteo Ricci and Daesoon Thought: A Phenomenological Interpretation of Ricci in Daesoon Thought (마테오 리치와 대순사상의 관계성에 대한 연구 - 대순사상의 기독교 종장에 대한 종교현상학적 해석 -)

  • Ahn, Shin
    • Journal of the Daesoon Academy of Sciences / v.36 / pp.117-152 / 2020
  • In Daesoon Thought, Matteo Ricci is highly regarded as a Jongjang, a 'religious leader' (of Christianity). This paper deals with the life and philosophical/theological thought of Matteo Ricci as homo religiosus from the perspective of the phenomenology of religion. Examining his historical background and biographical sketch, I analyze Ricci's understanding of God, humanity, and salvation and re-evaluate his relationship with Daesoon Thought. Matteo Ricci, born in Italy, became a Jesuit missionary to China and transmitted various products of Western civilization. Adopting the pro-cultural approach of the Jesuit mission, he applied it to Chinese culture by learning the Chinese language and regarding Chinese people as his friends; this was a sympathetic way to transmit Western religion and culture on Chinese soil. He suggested eight reasons to view the future of China with optimism and taught Chinese people his Christian message through indirect means of understanding and persuasion. In China, Jesuit missionaries called the Christian God 'Tianzhu' (Cheonju in Sino-Korean), meaning Lord of Heaven. Ricci identified the Confucian notion of 'Shangdi' (Sangje in Sino-Korean), meaning Supreme Emperor (or God), with Tianzhu. While translating Confucian scriptures, he found the common ground between Confucianism and Christianity in the monotheism of ancient Confucianism. He criticized the concepts of God in Buddhism and Daoism and justified the Christian doctrine of God by way of a Confucian understanding of deity. Ricci's understanding of humanity was based on his Christian faith in creation, and he criticized the Buddhist concept of transmigration. He proposed Christian ethics and a doctrine of salvation through discourse on the afterlife, in particular the concepts of heaven and hell. Concerning the relationship between Daesoon Thought and Ricci, the following aspects should be examined: 1) Ricci's contribution to the cultural exchanges between East and West, 2) his peaceful approach to mission based on dialogue and persuasion, 3) the various activities Ricci conducted as a Christian leader, and 4) his belief in miraculous healings. His influence on Korea is likewise explored. Ricci's ultimate aim was to communicate with Asian people and unify East and West under a single worldview by emphasizing the similarities between the Christian and Confucian concepts of God.

Exploring Changes in Science PCK Characteristics through a Family Resemblance Approach (가족유사성 접근을 통한 과학 PCK 변화 탐색)

  • Kwak, Youngsun
    • Journal of the Korean Society of Earth Science Education / v.15 no.2 / pp.235-248 / 2022
  • With changes in the future educational environment, such as the rapid decline of the school-age population and the expansion of students' curricular choice, changes are also required in PCK, the professional expertise of science teachers. In other words, the categories constituting the existing 'consensus-PCK' and the characteristics of 'science PCK' are not fixed, so further categories and characteristics can be added. The purpose of this study is to explore the potential area of science PCK required to cope with changes in the future educational environment, in the form of 'Family Resemblance Science PCK' (hereafter, Family Resemblance-PCK), through Wittgenstein's family resemblance approach. For this purpose, in-depth interviews were conducted with three focus groups, whose participants discussed how the science PCK required of science teachers in future schools in 2030-2045 will change with shifts in society and the educational environment. Qualitative analysis was performed on the in-depth interviews, and semantic network analysis was applied to the interview text to identify the characteristics that differentiate Family Resemblance-PCK from the existing consensus-PCK. The results examine, by PCK area, the characteristics of Family Resemblance-PCK newly demanded by the changing role expectations of science teachers. The semantic network analysis found that Family Resemblance-PCK expands its boundaries outward from the existing consensus-PCK, its starting point, with new PCK elements added. Clusters such as [AI-Convergence Knowledge-Contents-Digital], [Community-Network-Human Resources-Relationships], [Technology-Exploration-Virtual Reality-Research], and [Self-Directed Learning-Collaboration-Community] form distinct network groupings, and future science teacher expertise is expected to form and strengthen around these PCK areas. Based on the results, changes in the professionalism of science teachers in future schools and corresponding countermeasures are proposed.

Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education / v.62 no.3 / pp.435-455 / 2023
  • This study focused on developing an Explainable Artificial Intelligence (XAI) model to discern and analyze papers with significant impact in the field of mathematics education. To achieve this, meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network for mathematics education, built by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. A Random Forest machine learning model was employed to evaluate the impact of individual papers within this network, and SHAP, an XAI technique, was used to analyze the reasons behind the model's assessment of impactful papers. The key features identified for determining impactful papers were 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal'; it became evident that papers, authors, and journals all play significant roles in the evaluation of individual papers. When comparing domestic and international mathematics education research, variations in these patterns were observed; notably, 'co-authorship network PageRank' carried more weight in domestic research. The XAI model proposed in this study serves as a tool for determining the impact of papers using AI, providing researchers with strategic direction when writing papers: expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements enhancing a paper's impact. Based on these findings, researchers can understand clearly how their work is perceived and evaluated in academia and identify the key factors influencing those evaluations. This study offers a novel approach to evaluating the impact of mathematics education papers using an explainable AI model, a process that has traditionally consumed significant time and resources. The approach not only presents a new paradigm applicable to evaluations in academic fields beyond mathematics education but is also expected to substantially enhance the efficiency and effectiveness of research activities.
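The Random Forest + SHAP combination the abstract names is easy to sketch. The following is a hypothetical illustration, not the authors' pipeline: the feature names mirror those reported above, but the data, label, and model settings are invented for demonstration.

```python
# Sketch: a Random Forest scores paper impact; SHAP explains which
# network features drive each score. All data here are synthetic.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "paper_pagerank":        rng.random(n),
    "citations_per_paper_d": rng.normal(0, 1, n),   # change in citations/paper
    "total_citations":       rng.poisson(20, n).astype(float),
    "author_h_index_d":      rng.normal(0, 1, n),   # change in author h-index
    "journal_cites_per_pap": rng.random(n) * 10,
})
# Hypothetical impact label, driven mostly by PageRank and citations.
y = 3 * X["paper_pagerank"] + 0.1 * X["total_citations"] + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean |SHAP| per feature ~ a global ranking of impact drivers.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:22s} {imp:.3f}")
```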

Strategic Issues in Managing Complexity in NPD Projects (신제품개발 과정의 복잡성에 대한 주요 연구과제)

  • Kim, Jongbae
    • Asia Marketing Journal / v.7 no.3 / pp.53-76 / 2005
  • With rapid technological and market change, new product development (NPD) complexity is a significant issue that organizations continually face in their development projects. Numerous factors cause development projects to become increasingly costly and complex, and a product is more likely to be successfully developed and marketed when the complexity inherent in NPD projects is clearly understood and carefully managed. Building on previous studies, this study examines the nature and importance of complexity in developing new products and then identifies several issues in managing it, including the definition of complexity, the consequences of complexity, and methods for managing complexity in NPD projects. To achieve high performance in managing complexity in development projects, issues such as the following need to be addressed. A. Complexity inherent in NPD projects is multi-faceted and multidimensional. What factors need to be considered in defining and/or measuring the complexity of a development project? For example, is it sufficient to define complexity only from a technological perspective, or is it more desirable to consider the entire array of complexity sources that NPD teams with different functions (e.g., marketing, R&D, manufacturing) face in the development process? Moreover, is it sufficient to measure complexity only once during a development project, or is it more useful to trace complexity changes over the entire development life cycle? B. The complexity inherent in a project can have negative as well as positive influences on NPD performance. Which complexity impacts are usually negative and which are positive? Project complexity can also affect the entire organization, and any complexity is better assessed in a broader and longer perspective. What are some ways in which the long-term impact of complexity on an organization can be assessed and managed? C. Several approaches for managing complexity can be derived from previous studies. What are the weaknesses and strengths of each approach? Is there a desirable hierarchy or order among these approaches when more than one is used? Do the outcomes differ by industry and product type (incremental or radical)? Answers to these and other questions can help organizations effectively manage the complexity inherent in most development projects. Complexity is worthy of additional attention from researchers and practitioners alike. Large-scale empirical investigations, jointly conducted by researchers and practitioners, will help gain useful insights into understanding and managing complexity. Organizations that can accurately identify, assess, and manage the complexity inherent in their projects are likely to gain important competitive advantages.


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest in and efforts toward analyzing and using transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology, and to keep pace there have been many efforts on fraud detection methods and advanced application systems that improve the accuracy and ease of fraud detection. As a case study, this work aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception policy exists to complement auction-based trade in the wholesale market: most agricultural products are traded by auction, but specific products are designated auction-exception products when the total volume is relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the products. However, this policy creates problems for the fairness and transparency of transactions, which calls for fraud detection. To generate fraud detection rules, we analyzed real transaction data from the market from 2008 to 2010, amounting to more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data has unique characteristics such as frequent changes in supply volumes and turbulent time-dependent price changes. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules were generated using an outlier detection approach, under the assumption that outlier transactions are more likely to be fraudulent than normal ones. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items; the quarterly average unit prices of items for specific wholesalers are also used. The reliability of the generated rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and normalized Z-values are applied: the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability under an approximately normal distribution of unit prices. A modified Z-value of the unit price is used rather than the original one, because for auction-exception agricultural products the number of wholesalers is small, so the Z-values would otherwise be influenced by the outlier fraud transactions themselves. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is importing transaction databases; the next important functions set up the fraud detection parameters. By changing the fraud detection parameters, system users can control the number of potential fraud transactions. The execution functions produce fraud detection results based on these parameters, and the potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction-exception agricultural products, and many research topics remain. First, the scope of the analysis was limited by data availability; more data on transactions, wholesalers, and producers is needed to detect fraud more accurately. Next, the scope of fraud detection should be extended to fishery products. There are also many possibilities for applying other data mining techniques, for example a time series approach. Finally, although outlier transactions are detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
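The Self-Eliminated Z-score idea (standardizing each price against all the other transactions of the item, so an outlier cannot mask itself in a small sample) translates directly into a leave-one-out computation. The sketch below is a Python illustration with invented prices, not the authors' Delphi system; the flagging threshold of 3 is an assumption.

```python
# Sketch of the Self-Eliminated (leave-one-out) Z-score with toy data.
import numpy as np

def self_eliminated_z(prices):
    prices = np.asarray(prices, dtype=float)
    z = np.empty(prices.size)
    for i in range(prices.size):
        others = np.delete(prices, i)        # exclude the checked price
        z[i] = (prices[i] - others.mean()) / others.std(ddof=1)
    return z

# Hypothetical unit prices of one auction-exception item; the last
# one is suspiciously high.
unit_prices = [1000, 1050, 980, 1020, 990, 1010, 2500]
z = self_eliminated_z(unit_prices)
print(z.round(2))
# |z| above a chosen threshold (e.g., 3) flags a potential fraud case.
print("flagged indices:", np.flatnonzero(np.abs(z) > 3))
```

Because the suspect price is excluded from its own mean and standard deviation, it cannot inflate the spread it is measured against; an ordinary Z-score over all seven prices would dilute the same outlier.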

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.53-77 / 2012
  • This study analyzes the differences in content and tone of argument among three major Korean newspapers: the Kyunghyang Shinmun, the Hankyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when covering sensitive issues and topics. This can be problematic if readers consume the news without being aware of the tone of argument, because content and tone can easily influence readers; it is therefore desirable to have a tool that can inform readers of a newspaper's tone of argument. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers (Culture, Politics, International, Editorial-opinion, Eco-business, and National issues) and attempt to identify differences and similarities among the papers. The basic unit of analysis is a paragraph of a news article. The study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean Integrated News Database System, which preserves articles of the Kyunghyang Shinmun, the Hankyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered for specific issues: the International section with the keyword 'Nuclear weapon of North Korea', the National issues section with the keyword '4-major-river', and the Politics section with the keyword 'Tonghap-Jinbo Dang'. All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected. All collected data were edited into paragraphs, and stop-words were removed using the Lucene Korean Module. We calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, the cosine coefficient matrix was used as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each, we examined the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, closely examining the relationships in a detailed network map; NodeXL was used to visualize the PFNet. After drawing the networks, we compared the results with the classification results. Classification was first performed to identify how the tone of argument of one newspaper differs from the others: all paragraphs were divided into two tone types, positive and negative, and a supervised learning technique was used to classify them. The Naïve Bayesian classifier provided in the MALLET package classified all paragraphs in the articles, and precision, recall, and F-value were used to evaluate the classification results. Based on the results, three subjects (Culture, Eco-business, and Politics) showed differences in content and tone of argument among the three newspapers. In addition, for the National issues subject, the tones of argument on the 4-major-rivers project differed from each other; the three newspapers appear to have their own specific tones in those sections. The keyword networks also showed different shapes for the same period and the same section, meaning that the frequently appearing keywords differ and the contents are composed of different keywords. Finally, the positive-negative classification showed that it is possible to classify a newspaper's tone of argument relative to the others. These results indicate that the approach of this study is a promising new tool for identifying the differing tones of argument of newspapers.
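The co-occurrence-to-cosine step that feeds the PFNet can be illustrated with a toy example. The paragraphs and keywords below are hypothetical (the original work used Korean news text), and the PFNet pruning and NodeXL visualization steps are only noted in comments.

```python
# Toy sketch: per-paragraph keyword pairs -> co-occurrence matrix ->
# cosine coefficient matrix (the input to PFNet in the study).
from itertools import combinations
import numpy as np

paragraphs = [
    ["river", "project", "budget", "environment"],
    ["river", "environment", "protest"],
    ["budget", "project", "government"],
]
vocab = sorted({w for p in paragraphs for w in p})
idx = {w: i for i, w in enumerate(vocab)}

cooc = np.zeros((len(vocab), len(vocab)))
for p in paragraphs:
    for a, b in combinations(sorted(set(p)), 2):   # pairs within a paragraph
        cooc[idx[a], idx[b]] += 1
        cooc[idx[b], idx[a]] += 1

# Cosine coefficient between keyword co-occurrence profiles.
norms = np.linalg.norm(cooc, axis=1, keepdims=True)
norms[norms == 0] = 1.0                            # guard isolated keywords
cosine = (cooc @ cooc.T) / (norms * norms.T)
print(vocab)
print(cosine.round(2))
# In the study's pipeline, this cosine matrix is pruned by PFNet and
# visualized with NodeXL; tones are then classified with Naive Bayes.
```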