
Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the growing penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually being lowered and data analysis technology is spreading. As a result, big data analysis is expected to be performed by the requesters of the analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, much attention is focused on using text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document set. Thus, it is essential to analyze the entire document set at once to identify the topic of each document. This requirement makes the analysis time-consuming when topic modeling is applied to a large number of documents. In addition, it causes a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without combining all of the documents to be analyzed. However, despite its many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming that the global topics are the ideal answer, the difference between a local topic and the corresponding global topic needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other studies dealing with topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First of all, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) that consists of representative documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to those of topic modeling on the entire document set. We also propose a reasonable method for comparing the results of the two approaches.
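
The divide-and-conquer idea above can be sketched in a few lines. Below is a minimal illustration using gensim's LdaModel, assuming topics are compared via cosine similarity of their word-probability vectors over a shared vocabulary; the mapping rule and function names are illustrative, not the authors' exact procedure.

```python
# Minimal sketch: fit LDA per document set, then map local topics to
# topics of the reduced global set (RGS) by cosine similarity.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def fit_lda(texts, num_topics):
    # texts: list of token lists for one document set (a local set or the RGS)
    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    lda = LdaModel(corpus, num_topics=num_topics, id2word=dictionary)
    return lda, dictionary

def topic_vectors(lda, dictionary, shared_vocab):
    # Represent each topic as a dense word-probability vector over a
    # shared vocabulary so topics from different models are comparable.
    mat = np.zeros((lda.num_topics, len(shared_vocab)))
    for k in range(lda.num_topics):
        for word, prob in lda.show_topic(k, topn=len(dictionary)):
            if word in shared_vocab:
                mat[k, shared_vocab[word]] = prob
    return mat

def map_local_to_global(local_mat, rgs_mat):
    # Assign each local topic to its most similar RGS (global) topic.
    return cosine_similarity(local_mat, rgs_mat).argmax(axis=1)
```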

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier similar computing concepts such as network computing, utility computing, server-based computing, and grid computing. Research into cloud computing is therefore highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand research trends in cloud computing more comprehensively. In this study, we collect bibliographic information and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords by utilizing social network analysis measures. Through the analysis, we identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize the dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in two-dimensional space, with the frequency of keywords (X-axis) and the rate of increase in the degree centrality of keywords (Y-axis) as the two dimensions. Based on the values of the two dimensions, the two-dimensional space of a research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase of degree centrality is defined as a mature technology area; the area where both keyword frequency and the rate of increase of degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase of degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords related to main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing increased gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research. The early stage of cloud computing was a period focused on understanding and investigating cloud computing as an emergent technology and linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
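
The research trend map described above reduces to two computations: per-keyword degree centrality from a yearly co-occurrence network, and a quadrant rule over keyword frequency and centrality growth. A minimal sketch with networkx follows; the thresholds and the edge construction are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch of the four-quadrant research trend map.
import networkx as nx

def keyword_centrality(cooccurrence_edges):
    # Build a keyword co-occurrence network for one year and return
    # degree centrality per keyword.
    g = nx.Graph()
    g.add_edges_from(cooccurrence_edges)  # (keyword_a, keyword_b) pairs
    return nx.degree_centrality(g)

def trend_quadrant(freq, centrality_growth, freq_cut, growth_cut):
    # Classify a keyword by its frequency (X) and the rate of increase
    # in its degree centrality (Y).
    if freq >= freq_cut and centrality_growth < growth_cut:
        return "maturation"
    if freq >= freq_cut and centrality_growth >= growth_cut:
        return "growth"
    if freq < freq_cut and centrality_growth >= growth_cut:
        return "promising"
    return "decline"
```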

Development of an Accident Prediction Model for Enlisted Men through an Integrated Approach to Data Mining and Text Mining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin; Kim, Suhwan; Shin, Kyungshik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.1-17 / 2015
  • In this paper, we report what we have observed with regard to a prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise its soldiers. Despite their efforts, many commanders have failed to prevent accidents involving their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents. However, preventing accidents is hard, so a proper method must be found. Our motivation for this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings. This requires considerable time and effort, and the results differ significantly depending on the capabilities of the commanders. In this paper, we seek to predict accidents with objective data that can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper applies data mining to identify soldiers' interests and predict accidents using internal and external (SNS) data. We propose a combination of topic analysis and the decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the SNS data of enlisted men. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. In order to analyze the SNS data, we require tools such as text mining and topic analysis; we used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding soldiers' interests is composed of three main phases: collection, topic analysis, and conversion of the topic analysis results into scores to be used as independent variables. In the first phase, we collect enlisted men's SNS data by commander ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from them. For simplicity, 5 topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, using these 5 topics, we quantify them as personal scores. After quantifying the topics, we add these results to the independent variables, which comprise 15 internal data items. Then, we build two decision trees. The first tree uses the internal data only; the second tree uses the external data (SNS) as well as the internal data. After that, we compare the misclassification results from SAS Enterprise Miner. The first model's misclassification rate is 12.1%, while the second model's misclassification rate is 7.8%; that is, the method predicts accidents with an accuracy of approximately 92%. The gap between the two models is 4.3 percentage points. Finally, we test whether the difference between them is meaningful using the McNemar test; the difference is statistically significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small amount of enlisted men's data. Additionally, various independent variables used in the decision tree model are treated as categorical variables instead of continuous variables, so some information is lost. In spite of extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military. This study is expected to contribute to the prevention of accidents in the military through scientific analysis of enlisted men and their proper management.
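
The two-model comparison above can be reproduced in outline with scikit-learn and statsmodels: train one decision tree on internal data only and one on internal data plus the five topic scores, then apply McNemar's test to the paired predictions. The column and file names below are hypothetical; the original analysis used SAS Enterprise Miner.

```python
# Minimal sketch: internal-only tree vs. internal+SNS-topics tree,
# compared with McNemar's test on paired correct/incorrect outcomes.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from statsmodels.stats.contingency_tables import mcnemar

df = pd.read_csv("soldiers.csv")  # hypothetical data file
internal_cols = [c for c in df.columns if c.startswith("internal_")]
topic_cols = ["vacation", "friends", "stress", "training", "sports"]

X1 = df[internal_cols]               # internal data only
X2 = df[internal_cols + topic_cols]  # internal data + SNS topic scores
y = df["accident"]                   # dependent variable

X1_tr, X1_te, X2_tr, X2_te, y_tr, y_te = train_test_split(
    X1, X2, y, test_size=0.3, random_state=42)

m1 = DecisionTreeClassifier(random_state=42).fit(X1_tr, y_tr)
m2 = DecisionTreeClassifier(random_state=42).fit(X2_tr, y_tr)

# McNemar's test on the 2x2 table of paired correct/incorrect predictions.
correct1 = m1.predict(X1_te) == y_te.values
correct2 = m2.predict(X2_te) == y_te.values
print(mcnemar(pd.crosstab(correct1, correct2), exact=False))
```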

Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. Sentiment analysis is in fact one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are easy to collect openly and directly affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys. Whether a website's posts are positive or negative is reflected in sales, so companies try to identify this information. However, many reviews on a website are not always good and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, whereas recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review data set. First, for text classification related to sentiment analysis, we adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to understand why these models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem. Furthermore, when LSTM is attached to CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved by training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each individual model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
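
A minimal Keras sketch of the integrated architecture described above, with convolution and pooling layers feeding an LSTM, follows. The hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: CNN front-end (n-gram features) feeding an LSTM,
# trained on the IMDB review data set for binary polarity.
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

vocab_size, maxlen = 10000, 200
(x_tr, y_tr), (x_te, y_te) = imdb.load_data(num_words=vocab_size)
x_tr = pad_sequences(x_tr, maxlen=maxlen)
x_te = pad_sequences(x_te, maxlen=maxlen)

model = Sequential([
    Embedding(vocab_size, 128),        # word embeddings
    Conv1D(64, 5, activation="relu"),  # local n-gram feature extraction
    MaxPooling1D(4),                   # shorten the sequence for the LSTM
    LSTM(64),                          # temporal dependencies over CNN features
    Dense(1, activation="sigmoid"),    # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=3, batch_size=64, validation_data=(x_te, y_te))
```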

An Empirical Study on Motivation Factors and Reward Structure for User's Creative Contents Generation: Focusing on the Mediating Effect of Commitment (창의적인 UCC 제작에 영향을 미치는 동기 및 보상 체계에 대한 연구: 몰입에 매개 효과를 중심으로)

  • Kim, Jin-Woo; Yang, Seung-Hwa; Lim, Seong-Taek; Lee, In-Seong
    • Asia Pacific Journal of Information Systems / v.20 no.1 / pp.141-170 / 2010
  • User created content (UCC) is created and shared by ordinary users online. From the user's perspective, the increase in UCC has expanded alternative means of communication, while from the business perspective UCC has formed an environment in which an abundant amount of new content can be produced. Despite this outward quantitative growth, however, many aspects of UCC do not meet the expectations of general users in terms of quality, as can be observed in pirated and user-copied content. The purpose of this research is to investigate effective methods for fostering the production of creative user-generated content. This study proposes two core elements believed to enhance content creativity, namely reward and motivation, as well as a mediating factor, users' commitment, which is expected to bridge motivation and content creativity. Based on this perspective, this research takes an in-depth look at issues related to constructing the dimensions of reward and motivation in UCC services for creative content production, which are identified in three phases. First, three dimensions of rewards are proposed: the task dimension, the social dimension, and the organizational dimension. Task dimension rewards are related to the inherent characteristics of a task, such as writing blog articles and posting photos. Four concrete ways of providing task-related rewards in UCC environments are suggested in this study: skill variety, task significance, task identity, and autonomy. Social dimension rewards are related to the connected relationships among users. The organizational dimension consists of monetary payoff and recognition from others. Second, two types of motivation are suggested to be affected by the diverse reward schemes: intrinsic motivation and extrinsic motivation. Intrinsic motivation occurs when people create new UCC content for its own sake, whereas extrinsic motivation occurs when people create new content for other purposes such as fame and money. Third, commitment is suggested to work as an important mediating variable between motivation and content creativity. We believe commitment is especially important in online environments because it has been found to exert a stronger impact on Internet users than other relevant factors do. Two types of commitment are suggested in this study: emotional commitment and continuity commitment. Finally, content creativity is proposed as the final dependent variable. We provide a systematic method to measure the creativity of UCC content based on prior studies in creativity measurement; the method includes expert evaluation of blog pages posted by Internet users. In order to test the theoretical model of our study, 133 active blog users were recruited to participate in a group discussion as well as a survey. They were asked to fill out a questionnaire on their commitment, motivation, and rewards for creating UCC content. At the same time, their creativity was measured by independent experts using the Torrance Tests of Creative Thinking. Finally, two independent raters visited the study participants' blog pages and evaluated their content creativity using the Creative Products Semantic Scale. All the data were compiled and analyzed through structural equation modeling. We first conducted a confirmatory factor analysis to validate the measurement model of our research. The measures used in our study satisfied the requirements of reliability, convergent validity, and discriminant validity. Given that our measurement model was valid and reliable, we proceeded to conduct a structural model analysis. The results indicated that all the variables in our model had sufficient explanatory power in terms of R-square values. The study results identified several important reward schemes. First of all, skill variety, task significance, task identity, and autonomy were all found to have significant influences on the intrinsic motivation for creating UCC content. The relationship with other users was found to have a strong influence upon both intrinsic and extrinsic motivation. Finally, the opportunity to gain recognition for their UCC work was found to have a significant impact on the extrinsic motivation of UCC users. However, contrary to our expectation, monetary compensation was found not to have a significant impact on extrinsic motivation. It was also found that commitment was an important mediating factor between motivation and content creativity in the UCC environment: a fully mediated model was found to have the highest explanatory power compared to the no-mediation and partially mediated models. This paper ends with implications of the study results. First, from the theoretical perspective, this study proposes and empirically validates commitment as an important mediating factor between motivation and content creativity, a result that reflects the characteristics of the online environment in which UCC creation activities occur voluntarily. Second, from the practical perspective, this study proposes several concrete reward factors that are germane to the UCC environment and estimates their effectiveness for content creativity. In addition to the quantitative results on the relative importance of the reward factors, this study also proposes concrete ways to provide the rewards in the UCC environment based on FGI data collected after the participants finished answering the survey questions. Finally, from the methodological perspective, this study suggests and implements a way to measure UCC content creativity independently of the content generators' creativity, which can be used by future research on UCC creativity. In sum, this study proposes and validates important reward features and their relations to motivation, commitment, and content creativity in the UCC environment, which is believed to be one of the most important factors for the success of UCC and Web 2.0. As such, this study can provide significant theoretical as well as practical bases for fostering creativity in UCC content.
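
The fully mediated structure described above (rewards -> motivation -> commitment -> creativity) can be sketched as a structural equation model, for example with the semopy package. The variable names and the simplified regression-only specification are illustrative assumptions, not the authors' measurement model.

```python
# Minimal sketch of the mediation structure with semopy.
import pandas as pd
import semopy

# Regression-only specification; the absence of direct
# motivation -> creativity paths encodes full mediation.
spec = """
intrinsic_motivation ~ skill_variety + task_significance + task_identity + autonomy
extrinsic_motivation ~ social_relationship + recognition + monetary_payoff
commitment ~ intrinsic_motivation + extrinsic_motivation
creativity ~ commitment
"""

data = pd.read_csv("ucc_survey.csv")  # hypothetical survey data file
model = semopy.Model(spec)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```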

A Study on the Relationship between the Degree of Awareness of Low Carbon Green Growth and Organizational Commitment: Focused on Traditional Retailers (전통시장 상인들의 저탄소 녹색성장에 대한 인식과 조직몰입의 관계에 대한 연구)

  • Yang, Hoe-Chang; Kim, Sung-Il; Park, Young-Ho; Lee, Shang-Nam
    • Journal of Distribution Science / v.9 no.3 / pp.37-46 / 2011
  • Since the Korean retail industry was opened to big conglomerates and foreign retail companies, local traditional markets have faced serious problems. To sustain the survival of local traditional markets, the Korean government established various remedial policies, and many scholars have published articles suggesting solutions to the problem. Unfortunately, the results have not been satisfactory. The purpose of this study is to find another way to help the Korean traditional retail market from the viewpoint of the Green Growth Policy, an initiative designed to promote environmentally balanced economic growth in Korea. In order to survive and maintain sustainable growth, it is incumbent upon retailers in traditional markets to understand the concept of the Green Growth Policy. A survey was conducted to test the degree of awareness of the Green Growth Policy and to determine the relationship between the degree of awareness and the degree of organizational commitment among retailers in local traditional markets. Interestingly, we detected some distinguishing features in the traditional market retailers' demographic characteristics (e.g., between the elderly and the young, and between low and high levels of education). We utilized analysis of variance (ANOVA) to compare the differences across retailers' demographic characteristics; the results were as follows. Overall, awareness of the Green Growth Policy, the degree of trust in the government's policy, levels of self-efficacy, and levels of organizational commitment were higher among older traditional market retailers than among younger ones. Specifically, the degree of trust in government policies (F = 9.964, p < .05), levels of self-efficacy (F = 5.532, p < .05), and levels of organizational commitment (F = 5.697, p < .05) were statistically significant. Moreover, in the portion of the study that addressed differences between education levels, all the variables showed higher averages among the more highly educated traditional market retailers. Specifically, awareness of the Green Growth Policy (F = 8.564, p < .005) and levels of self-efficacy (F = 6.754, p < .005) were statistically significant. These results reveal that the traditional market retailers' demographic characteristics should be considered important factors in realizing the policy. The results of the study showed the following: 1) The degree of awareness of the government's Green Growth Policy was statistically significantly related to traditional market retailers' organizational commitment. 2) The degree of trust in the government's policy significantly moderated the relationship between awareness of the government's Green Growth Policy and the traditional market retailers' organizational commitment; that is, retailers who are aware of the policy show more organizational commitment when they have higher levels of trust in the government's policy. 3) Traditional market retailers' self-efficacy fully mediated the relationship between awareness of the government's Green Growth Policy and their organizational commitment. The results suggest that the government should take an interest in showing traditional market retailers how to enhance their traditional markets. Implications and future research directions are also discussed.
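
The group comparisons above are one-way ANOVAs. A minimal sketch with scipy follows; the column and file names are hypothetical.

```python
# Minimal sketch: one-way ANOVA of organizational commitment across
# age groups of traditional market retailers.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("retailers.csv")  # hypothetical survey data file
groups = [g["commitment"].values for _, g in df.groupby("age_group")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # e.g., significant if p < .05
```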


Effects of Customers' Relationship Networks on Organizational Performance: Focusing on Facebook Fan Page (고객 간 관계 네트워크가 조직성과에 미치는 영향: 페이스북 기업 팬페이지를 중심으로)

  • Jeon, Su-Hyeon; Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.57-79 / 2016
  • The number of users of social network services (SNS), one of the major social media channels, is steadily increasing. In line with this social trend, more companies are taking an interest in this networking platform and starting to invest in it. The SNS has received much attention as a tool for spreading and expanding the messages that a company wants to deliver to its customers and has been recognized as an important channel for relationship marketing. Today's radically changing media environment makes it possible for companies to approach their customers in various ways. In particular, the rapidly developing social network service provides an environment in which customers can talk freely about products, and it also works for companies as a channel for delivering customized information to customers. To succeed in the online environment, companies need to build not only the relationship between the company and its customers but also the relationships among customers. In response to the online environment shaped by continuous technological development, companies have continually devised novel marketing strategies. In particular, as one-to-one marketing becomes available, it is ever more important for companies to maintain relationship marketing with their customers. Among the many SNS platforms, Facebook, which many companies use as a communication channel, provides a fan page service for each company to support its business. A Facebook fan page is a platform on which events, information, and announcements can be shared with customers using text, videos, and pictures. Companies open their own fan pages in order to publicize their companies and businesses; such pages function as company websites and also have the characteristics of brand communities, like blogs. As Facebook has become a major communication medium with customers, companies recognize its importance as an effective marketing channel, but they still need to investigate its effect on their business performance. Although a Facebook fan page has infinite potential, including a community function among users that other platforms lack, prior work has stopped short of treating companies' Facebook fan pages as communities and analyzing them as such. This study explores the relationships among customers through the network of Facebook fan page users. Previous studies on companies' Facebook fan pages focused on finding effective operational directions by analyzing each company's usage patterns. In this study, by contrast, we derive structural network variables by which customer commitment can be measured, applying social network analysis methodology, and empirically investigate the influence of the structural characteristics of the network on companies' business performance. Through each company's Facebook fan page, we extract the network of users who engaged in communication with the company: a one-mode undirected binary network in which users are the nodes and their relationships, in terms of marketing activities, are the links. From this network, we derive structural variables that can explain the commitment of the customers who pressed "like," made comments, and shared each company's Facebook marketing messages, by calculating density, global clustering coefficient, mean geodesic distance, and diameter. Using companies' historical performance measures, such as net income and Tobin's Q, as outcome variables, this study investigates the influence on companies' business performance. For this purpose, we collected network data for 54 KOSPI-listed companies that had posted more than 100 articles on their Facebook fan pages during the data collection period and derived the network indicators for each company. The performance-related indicators were calculated based on values posted on the DART website of the Financial Supervisory Service. From the academic perspective, this study suggests a new approach, via social network analysis methodology, for researchers who study the business use of social media channels. From the practical perspective, this study proposes more substantive marketing performance measurements for companies conducting marketing activities through social media, and it is expected to provide a foundation for establishing smart business strategies using network indicators.
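
The four network indicators named above are standard measures, computable with networkx once the one-mode user network is built. The edge construction below is an illustrative assumption.

```python
# Minimal sketch of the structural indicators: density, global
# clustering coefficient, mean geodesic distance, and diameter.
import networkx as nx

# Hypothetical interaction pairs: two users are linked when they engaged
# with the same marketing post (like, comment, or share).
user_pairs = [("u1", "u2"), ("u2", "u3"), ("u1", "u3"), ("u3", "u4")]

g = nx.Graph()
g.add_edges_from(user_pairs)  # one-mode undirected binary network

density = nx.density(g)
clustering = nx.transitivity(g)  # global clustering coefficient
# Geodesic measures require a connected graph; use the largest component.
giant = g.subgraph(max(nx.connected_components(g), key=len))
mean_geodesic = nx.average_shortest_path_length(giant)
diameter = nx.diameter(giant)
print(density, clustering, mean_geodesic, diameter)
```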

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji; Kim, Yoosin; Kim, Namgyu; Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.95-110 / 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in a growing need to collect, store, search, analyze, and visualize such data. This kind of data cannot be handled appropriately by the traditional methodologies usually used for analyzing structured data because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data, such as text files and log files, using various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature on unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneering researchers and business practitioners. Opinion mining, or sentiment analysis, refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not have been solved by existing traditional approaches. One of the most representative attempts using opinion mining techniques is recent research proposing an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published in various media is obviously a traditional example of unstructured text data: every day, a large volume of new content is created, digitized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information. In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies, including ours, have utilized a sentiment dictionary to elicit sentiment polarity or sentiment values from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values; sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, sentences, and whole documents. However, most traditional approaches share a common limitation in that they do not consider the flexibility of sentiment polarity: the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis, and can even be contradictory in nature. This flexibility of sentiment polarity motivated our study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we present an intelligent investment decision-support model based on opinion mining that performs the scraping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we apply a domain-specific sentiment dictionary instead of a general-purpose one to classify each piece of news as either positive or negative. For performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model; to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
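
Dictionary-based polarity classification, as described above, amounts to looking up each token's sentiment value and aggregating. A minimal sketch follows; the dictionary entries, tokenization, and majority-vote aggregation are illustrative assumptions, not the authors' model.

```python
# Minimal sketch: score each news article with a domain-specific
# sentiment dictionary, then aggregate the day's labels into a
# predicted index direction.
domain_dictionary = {"surge": 1.0, "rally": 0.8, "plunge": -1.0, "slump": -0.8}

def article_polarity(tokens):
    # Sum dictionary values over the article's tokens.
    score = sum(domain_dictionary.get(t, 0.0) for t in tokens)
    return "positive" if score > 0 else "negative"

def predict_direction(daily_articles):
    # Majority vote over the day's classified articles.
    labels = [article_polarity(a) for a in daily_articles]
    return "up" if labels.count("positive") >= labels.count("negative") else "down"

print(predict_direction([["stocks", "surge"], ["market", "slump"], ["rally"]]))  # up
```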

Development of an Information Extraction System from Multi-Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung; Kim, Mintae; Kim, Wooju; Shin, Dongwook; Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) Collect relevant documents from Wikipedia, Naver Encyclopedia, and Naver News for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for extracting information and derive a confidence score. 3) Based on the predicate feature, extract the information from the proper sentences and derive the overall confidence of the information extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is that we developed a sequence tagging model based on bi-directional LSTM-CRF using the predicate feature of the query, and with it we built a robust model that can maintain high recall performance even on the various types of unstructured documents collected from multiple sources. The problem of information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types. The proposed methodology proved to extract information effectively from various types of unstructured documents compared to the baseline model, whereas previous research suffered from poor performance when extracting information from document types different from the training data. In addition, this study can prevent unnecessary information extraction attempts on documents that do not include the answer information, through the process of predicting the information extraction suitability of documents and sentences before the information extraction step. It is meaningful that we provide a method by which precision can be maintained even in a real web environment. The information extraction problem for knowledge base expansion has the characteristic that it cannot be guaranteed that a document includes the correct answer, because it targets unstructured documents on the real web. When question answering is performed on the real web, previous machine reading comprehension studies have the limitation of low precision because they frequently attempt to extract an answer even from documents in which there is no correct answer. The policy of predicting the suitability of document and sentence information extraction is meaningful in that it contributes to maintaining extraction performance even in a real web environment. The limitations of this study and future research directions are as follows. First, there is a problem related to data preprocessing. In this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and information extraction can be performed improperly when morphological analysis is not performed properly. To enhance the performance of the information extraction results, it is necessary to develop an advanced morphological analyzer. Second, there is the problem of entity ambiguity: the information extraction system of this study cannot distinguish between different entities that share the same name. If several people with the same name appear in the news, the system may not extract information about the intended query. Future research needs to take measures to identify the person with the same name. Third, there is the problem of the evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and we developed an evaluation data set of 800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News) by judging whether a correct answer is included or not. To ensure the external validity of the study, it is desirable to use more queries to determine the performance of the system; however, this is a costly activity that must be done manually. Future research needs to evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction systems handling queries over multi-source web documents, to build an environment in which results can be evaluated more objectively.
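
A bi-directional LSTM-CRF tagger of the kind described above can be sketched with PyTorch and the pytorch-crf package. The dimensions and tag set are illustrative assumptions, and the predicate feature used by the authors is omitted for brevity.

```python
# Minimal sketch of a BiLSTM-CRF sequence tagger for answer-span tagging.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2, bidirectional=True,
                            batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_tags)  # per-token tag emissions
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions = self.fc(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def predict(self, tokens, mask):
        emissions = self.fc(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequences

model = BiLSTMCRF(vocab_size=5000, num_tags=3)  # e.g., B/I/O answer tags
```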

Analysis of Metadata Standards of Records Management for Metadata Interoperability: From the Viewpoint of the Task Model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun; Sugimoto, Shigeo
    • The Korean Journal of Archival Studies / no.32 / pp.127-176 / 2012
  • Metadata is well recognized as one of the foundational factors in archiving and the long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g., ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important when selecting appropriate metadata standards in order to design metadata schemas that meet the requirements of a particular archival system, and interoperability of metadata with other systems should be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and we clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle, leaving more detailed analysis for future study. This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe the properties of a resource in accordance with the purposes of description, e.g., finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards. There are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle. This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards. It is important to categorize the elements in order to limit the semantic scope of mapping among elements and to decrease the number of element combinations for mapping. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. The 5W1H categories are generally used for describing events, e.g., in news articles. Since performing a task on a resource causes an event, and metadata elements are used in that event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, plus an attribute set extracted from the DPC decision flow. Then, we perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each of the 5W1H categories that typically appear in the definitions of elements, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as person and organization, which denote an agent that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category. We conclude that the Task Model provides a new viewpoint on metadata schemas and is useful for understanding the features of metadata standards for records management and archives. The 5W1H model, which is defined based on the Task Model, provides a core set of categories for semantically classifying metadata elements from the viewpoint of an event caused by a task.
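
The term-based categorization described above can be sketched directly: scan each element definition for category terms and allow multiple categories per element. The term lists below are illustrative assumptions, not the authors' full term sets.

```python
# Minimal sketch of 5W1H categorization of metadata element definitions.
CATEGORY_TERMS = {
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"content", "title", "format", "subject"},
    "Why":   {"purpose", "function", "reason"},
    "When":  {"date", "time", "event"},
    "Where": {"location", "place", "repository"},
    "How":   {"method", "process", "procedure"},
}

def categorize(definition: str) -> set:
    # An element may fall into one or more 5W1H categories.
    words = set(definition.lower().split())
    return {cat for cat, terms in CATEGORY_TERMS.items() if words & terms}

print(categorize("The person or organization responsible for the creation date"))
# -> {'Who', 'When'}
```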