• Title/Summary/Keyword: Intelligent Interface


Building a Korean Sentiment Lexicon Using Collective Intelligence (집단지성을 이용한 한글 감성어 사전 구축)

  • An, Jungkook; Kim, Hee-Woong
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.49-67 / 2015
  • The recent emergence of big data and social media has led us into an era of explosive data growth. Social networking services are widely used around the world and have become major communication tools for all ages. Over the last decade, as online social networking sites have become increasingly popular, companies have focused on advanced social media analysis for their marketing strategies. Beyond analysis itself, companies are concerned about the propagation of negative opinions on social networking sites such as Facebook and Twitter, as well as on e-commerce sites. Online word of mouth (WOM) such as product ratings, reviews, and recommendations is very influential, and negative opinions have a significant impact on product sales. This trend has drawn researchers' attention to natural language processing techniques such as sentiment analysis. Sentiment analysis, also referred to as opinion mining, is the process of identifying the polarity of subjective information, and it has been applied in various research and practical fields. However, obstacles arise when the Korean language (Hangul) is processed, because it is an agglutinative language whose rich morphology poses problems. Korean natural language processing resources such as sentiment lexicons are therefore scarce, which has imposed significant limitations on researchers and practitioners considering sentiment analysis. Our study builds a Korean sentiment lexicon using collective intelligence and provides an API (Application Programming Interface) service that opens and shares the lexicon data with the public (www.openhangul.com). For pre-processing, we created a Korean lexicon database of over 517,178 words and classified them into sentiment and non-sentiment words. To classify them, we first identified stop words, which are quite likely to play a negative role in sentiment analysis, and excluded them from sentiment scoring. In general, sentiment words are nouns, adjectives, verbs, and adverbs, as they carry sentiment such as positive, neutral, or negative; non-sentiment words are interjections, determiners, numerals, postpositions, and the like, as they generally carry no sentiment. To build a reliable sentiment lexicon, we adopted the concept of collective intelligence as a model for crowdsourcing, and the concept of folksonomy was implemented in the taxonomy process to support collective intelligence. To compensate for an inherent weakness of folksonomy, we adopted a majority rule by building a voting system. Participants, as voters, were offered three voting options (positive, negative, and neutral), and the voting was conducted on one of the largest social networking sites for college students in Korea. More than 35,000 votes have been cast by Korean college students, and we keep the voting system open by maintaining the project as a perpetual study. Moreover, any change in the sentiment score of a word is an important observation, because it enables us to track temporal changes in Korean as a natural language. Lastly, our study offers a RESTful, JSON-based API service through a web platform to support users such as researchers, companies, and developers. Finally, our study makes important contributions to both research and practice. In terms of research, our Korean sentiment lexicon serves as a resource for Korean natural language processing. In terms of practice, practitioners such as managers and marketers can implement sentiment analysis effectively using the lexicon we built. Moreover, our study sheds new light on the value of folksonomy combined with collective intelligence, and we expect it to give a new direction and a new start to the development of Korean natural language processing.
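As a minimal illustration of the majority-rule aggregation described above, the following Python sketch assigns a polarity label to a word from crowd votes; the vote format and tie handling are assumptions for illustration, not the paper's specification.

```python
from collections import Counter

def aggregate_votes(votes):
    """Assign a polarity to a word by majority rule.

    votes: iterable of "positive" / "negative" / "neutral" strings.
    Returns the winning label, or "undecided" on a tie or no votes.
    """
    counts = Counter(votes).most_common()
    if not counts:
        return "undecided"
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "undecided"  # tie between the two leading options
    return counts[0][0]

# Toy sample; the actual system collected 35,000+ crowd votes.
print(aggregate_votes(["positive", "positive", "neutral"]))  # -> positive
```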

A Study on the Design of Case-based Reasoning Office Knowledge Recommender System for Office Professionals (사례기반추론을 이용한 사무지식 추천시스템)

  • Kim, Myong-Ok; Na, Jung-Ah
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.131-146 / 2011
  • It is becoming more essential than ever for office professionals to become competent in information collection and problem solving in today's global business society. In particular, office professionals not only assist with simple chores but are also expected to make decisions as quickly and efficiently as possible in problematic situations that can end in either profit or loss for their company. Since office professionals rely heavily on tacit knowledge to solve problems in everyday business situations, it is truly helpful and efficient to refer to similar business cases from the past and to share or reuse such previous business knowledge for better performance. Case-based reasoning (CBR) is a problem-solving method that utilizes previous similar cases to solve new problems. Through CBR, the case closest to the current business situation can be searched and retrieved from the case or knowledge base and referred to for a new solution, which reduces the time and resources needed and increases the probability of success. The main purpose of this study is to design a system called COKRS (Case-based reasoning Office Knowledge Recommender System) and to develop a prototype of it. COKRS manages cases and their metadata, accepts keywords from the user, searches the casebase for the past case most similar to the input keywords, and communicates with users to collect feedback on the quality of the case provided, continuously applying this feedback to update the values in the similarity table. Core concepts such as the system architecture, the definition of a case, the meta database, and the similarity table are introduced, and an algorithm to retrieve all similar cases from past work history is also proposed. In this research, a case is best defined as a work experience in office administration. However, defining a case in office administration was not easy in practice. We surveyed ten office professionals to get an idea of how to define such a case and found that, in most cases, any type of office work is recorded digitally and/or non-digitally. Therefore, we defined a record or document as a case for COKRS. The similarity table was composed of items from a job analysis of office professionals conducted in a previous study, and the values between items were initially set based on the researchers' experience and a literature review. The results of this study could also be utilized in other areas of business for knowledge sharing wherever it is necessary and beneficial to share and learn from past experience. We expect this research to serve as a reference for researchers and developers working on, or interested in, CBR-based office knowledge recommender systems. A focus group interview (FGI) was conducted with ten administrative assistants carefully selected from various areas of business. They were given a chance to try out COKRS in an actual work setting and to make suggestions for future improvement. The FGI identified the user interface for saving and searching cases by keyword as the most positive aspect of COKRS, and identified transforming tacit knowledge and know-how into recorded documents more efficiently as the most urgently needed improvement. The focus group also mentioned that it is essential to secure sufficient support, encouragement, and reward from the company and to promote a positive attitude and atmosphere toward knowledge sharing for everybody's benefit.
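The following Python sketch illustrates similarity-table-based case retrieval in the spirit of COKRS; the task categories, table values, and case fields are illustrative assumptions rather than the system's actual job-analysis items.

```python
# Pairwise similarity between task categories (symmetric lookup table).
SIMILARITY = {
    ("scheduling", "scheduling"): 1.0,
    ("scheduling", "travel"): 0.6,
    ("travel", "travel"): 1.0,
}

def sim(a, b):
    """Look up category similarity in either key order; default 0."""
    return SIMILARITY.get((a, b), SIMILARITY.get((b, a), 0.0))

def retrieve(query_category, casebase):
    """Return past cases ranked by similarity to the query category."""
    return sorted(casebase,
                  key=lambda c: sim(query_category, c["category"]),
                  reverse=True)

cases = [{"category": "travel", "doc": "overseas trip itinerary"},
         {"category": "scheduling", "doc": "board meeting arrangement memo"}]
print(retrieve("scheduling", cases)[0]["doc"])  # most similar case first
```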

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung; Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.129-143 / 2012
  • Recently, the rapid progress of standardized web technologies and the proliferation of web users worldwide have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their business, and they also collect, store, and manage web documents published on the web. Along with this increase in the information documents that must be managed, the need for a solution that locates information documents accurately among huge information sources has grown, and the search engine solution market is expanding accordingly. The most important functionality a search engine provides is locating accurate information documents within huge information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the information retrieved as true answers actually is; recall is a measure of completeness, that is, what percentage of the true answers is retrieved. The two measures can be weighted differently according to the applied domain: if information such as patent documents and research papers must be searched exhaustively, it is better to increase recall, whereas when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method has the virtue of locating all matching web documents quickly, even when many search words are entered. However, it has a fundamental limitation: it does not consider the search intention of the user and thereby retrieves irrelevant results along with relevant ones, so additional time and effort are needed to sort the relevant results out of everything returned. That is, the keyword search method can increase recall, but it is difficult to locate the web documents a user actually wants because it provides no means of understanding the user's intention and reflecting it in the search process. Thus, this research suggests a new method that combines an ontology-based search solution with the core search functionalities provided by existing search engine solutions. The method enables a search engine to provide optimal results by inferring the search intention of the user. To that end, we build an ontology containing the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords entered by a user, so that the user's search intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, where we experiment with searching for relevant documents associated with a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that enables a user to interact with the search system effectively. In future research, we will validate the performance of our prototype by comparing it with other search engine solutions and will extend the applied domain to other information search settings such as portals.
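A minimal sketch of the ontology-driven query expansion described above; the toy synonym map stands in for the full domain ontology the paper builds for the patent field.

```python
# Illustrative stand-in for an ontology's synonym relations.
ONTOLOGY_SYNONYMS = {
    "car": {"automobile", "vehicle"},
    "battery": {"cell", "accumulator"},
}

def expand_query(keywords):
    """Expand user keywords with ontology synonyms before keyword search,
    so documents phrased differently from the query are still retrieved."""
    expanded = set(keywords)
    for kw in keywords:
        expanded |= ONTOLOGY_SYNONYMS.get(kw, set())
    return expanded

print(expand_query(["car", "battery"]))
# -> {'car', 'automobile', 'vehicle', 'battery', 'cell', 'accumulator'}
```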

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems / v.15 no.4 / pp.1-22 / 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration both within and across enterprises. To make Web services operational for service-oriented computing, it is important that Web service repositories not only be well structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is growing concomitantly. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed: most existing solutions are either too rudimentary to be useful or too domain-dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specifications in WSDL documents. We believe this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features: (1) it minimizes the requirement of prior knowledge from both service consumers and publishers; (2) it avoids exploiting domain-dependent ontologies; and (3) it is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network, and we have empirically evaluated the approach and tool using real Web service descriptions drawn from operational Web service registries. We report preliminary results demonstrating the efficacy of the proposed approach.
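To illustrate the general idea of grouping services by the vocabulary of their WSDL descriptions, the sketch below clusters TF-IDF vectors of service terms. The paper uses an unsupervised artificial neural network; plain k-means stands in here to keep the sketch short, and the service texts are invented.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Imaginary term strings extracted from four WSDL documents.
service_texts = [
    "getWeather forecast temperature city",
    "getStockQuote price ticker symbol",
    "getCityTemperature weather humidity",
    "tradeStock buy sell ticker",
]

X = TfidfVectorizer().fit_transform(service_texts)       # term vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # services with similar vocabularies share a cluster id
```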


A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun; Jeong, Kyo Sung; Kim, Soo Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.39-58 / 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has come into the spotlight. Data visualization gives people an effective and clear understanding of analysis results; it also serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the data processing and analysis parts, and implementing such a loosely coupled architecture requires design patterns such as MVC (Model-View-Controller), which is designed to minimize coupling between the UI part and the data processing part. Big data, in turn, can be classified into structured and unstructured data, and structured data is relatively easy to visualize compared to unstructured data. Even so, as people increasingly utilize and analyze unstructured data, they usually build a visualization system separately for each project to overcome the limitations of traditional visualization systems built for structured data. Furthermore, for text data, which makes up a huge part of unstructured data, visualization is even more difficult. This results from the complexity of the technologies for analyzing text data, such as linguistic analysis, text mining, and social network analysis, and from the fact that those technologies are not standardized. This situation makes it difficult to reuse the visualization system of one project in other projects, and we assume the reason is the lack of a commonality design that considers extending the visualization system to other systems. In our research, we suggest a common information model for visualizing text data and propose a comprehensive, reusable framework, TexVizu, for visualizing text data. First, we survey representative research in the text visualization area, identify common elements for text visualization and common patterns among its various cases, and review and analyze these elements and patterns from three viewpoints: structural, interactive, and semantic. We then design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies the structural elements of text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements that are extracted by linguistic analysis of the text data and represented as tags classifying entity types such as person, place or location, time, and event. Next, we extract and select common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These are the common, key requirements for designing a framework in which the visualization system remains loosely coupled from the data processing and analysis system. Finally, we designed the common text visualization framework TexVizu, which is reusable and extensible across visualization projects by collaborating with various Text Data Loaders and Analytical Text Data Visualizers via common interfaces such as ITextDataLoader and IATDProvider. TexVizu also comprises an Analytical Text Data Model, Analytical Text Data Storage, and an Analytical Text Data Controller. In this framework, the external components are the specifications of the interfaces required to collaborate with it. As an experiment, we adopted this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
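A minimal sketch of the loosely coupled contracts the framework names (ITextDataLoader, IATDProvider); the method signatures are assumptions for illustration, since the paper defines the actual interfaces.

```python
from abc import ABC, abstractmethod

class ITextDataLoader(ABC):
    """External component contract: fetch raw text documents."""

    @abstractmethod
    def load(self, source: str) -> list[str]:
        """Return raw text documents from a source (file, API, crawl)."""

class IATDProvider(ABC):
    """External component contract: supply Analytical Text Data."""

    @abstractmethod
    def analyze(self, documents: list[str]) -> dict:
        """Return an analytical text data model built from the documents."""

# A visualization controller depends only on these interfaces, so loaders
# and analyzers can be swapped per project without touching the UI code.
```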

The Brand Personality Effect: Communicating Brand Personality on Twitter and its Influence on Online Community Engagement (브랜드 개성 효과: 트위터 상의 브랜드 개성 전달이 온라인 커뮤니티 참여에 미치는 영향)

  • Cruz, Ruth Angelie B.; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.67-101 / 2014
  • The use of new technology greatly shapes the marketing strategies companies use to engage their consumers. Among these new technologies, social media is used to reach an organization's audience online, and one of the most popular social media channels to date is the microblogging platform Twitter. With 500 million tweets sent daily on average, the platform is a rich source of data for researchers and a lucrative marketing medium for companies. Nonetheless, one challenge for companies developing an effective Twitter campaign is the limited theoretical and empirical evidence on the proper organizational use of Twitter, despite its potential advantages for a firm's external communications. The current study aims to provide empirical evidence on how firms can utilize Twitter effectively in their marketing communications, drawing on the association between brand personality and brand engagement that several branding researchers propose. The study extends Aaker's earlier empirical work on brand personality by applying the Brand Personality Scale to explore whether Twitter brand communities convey distinctive brand personalities online and how these personalities influence the communities' level and intensity of consumer engagement and sentiment quality. The moderating effect of product involvement on consumer engagement is also measured. By collecting data over eight weeks through the publicly available Twitter application programming interface (API) from 23 accounts of Twitter-verified business-to-consumer (B2C) brands, we test the paper's hypotheses using computerized content analysis and opinion mining. The study is the first to compare Twitter marketing across organizations using the brand personality concept. It demonstrates a potential basis for Twitter strategies and discusses their benefits, thereby providing a framework of analysis for Twitter practice and strategic direction for companies developing their use of Twitter to communicate with their followers. This study has four specific research objectives. The first is to examine the applicability of brand personality dimensions used in marketing research to online brand communities on Twitter. The second is to establish the role of congruence between offline and online brand personalities in building a successful social media brand community. The third is to test the moderating effect of product involvement on the effect of brand personality on brand community engagement. The last is to investigate the sentiment quality of consumer messages to firms that succeed in communicating their brands' personalities on Twitter.
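As a rough illustration of dictionary-based content analysis for Aaker's five brand personality dimensions, the sketch below counts dimension-related words in tweets; the seed words are invented placeholders, not the validated Brand Personality Scale items, and the scoring scheme is an assumption.

```python
# Illustrative seed words per Aaker dimension (placeholders only).
DIMENSION_LEXICON = {
    "sincerity":      {"honest", "wholesome", "friendly", "genuine"},
    "excitement":     {"daring", "exciting", "cool", "trendy"},
    "competence":     {"reliable", "smart", "successful", "efficient"},
    "sophistication": {"elegant", "charming", "glamorous", "premium"},
    "ruggedness":     {"tough", "outdoorsy", "rugged", "strong"},
}

def score_tweets(tweets):
    """Count dimension-word hits across a brand account's tweets."""
    scores = {dim: 0 for dim in DIMENSION_LEXICON}
    for tweet in tweets:
        words = set(tweet.lower().split())
        for dim, lexicon in DIMENSION_LEXICON.items():
            scores[dim] += len(words & lexicon)
    return scores

print(score_tweets(["Our new lineup is daring and cool",
                    "Reliable service, efficient delivery"]))
```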

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik; Jeong, Ye-Won
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.121-145 / 2010
  • University timetabling, which depends on each university's educational environment, is an NP-hard problem: the amount of computation required to find a solution increases exponentially with problem size. For many years, there have been many studies on university timetabling, driven by the need to generate timetables automatically for students' convenience and effective lessons, and to allocate subjects, lecturers, and classrooms effectively. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, the course timetable for liberal arts is scheduled by the office of academic affairs, while the course timetable for major subjects is scheduled by each department. We found several problems in our analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do this routine and repetitive timetabling work manually. Second, many classes are concentrated in a few time slots, which decreases the effectiveness of students' classes. Third, some major subjects may overlap required liberal arts subjects in the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are taught by the same lecturers every year, and most lecturers prefer the same time slots as in the previous year, which means it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a two-phase university timetabling support system. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning. In the second phase, the department schedules the timetable through an interactive user interface under the timetabling criteria, based on a rule-based approach. This study provides illustrations from Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria are three hard constraints related to lecturers, classes, and classrooms. The extrinsic criteria are four criteria related to the number of lesson hours per lecturer, the prohibition of lecture allocation to specific day-hours for committee members, the number of subjects in the same day-hour, and the use of common classrooms. The number-of-lesson-hours criterion covers three cases: the minimum number of lesson hours per week, the maximum number of lesson hours per week, and the maximum number of lesson hours per day. The extrinsic criteria are all hard constraints except for the minimum number of lesson hours per week, which is treated as a soft constraint. In addition, we proposed two indices: one for measuring the similarity between the subjects of the current semester and those of previous timetables, and one for evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, the subject name and its lecturer, between the current semester and a previous semester. The distribution degree index, based on information entropy, indicates how evenly subjects are spread across the timetable. To show the viability of this study, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity over the most similar cases of all departments was estimated at 41.72%, which suggests that a timetable template generated from the most similar case will be helpful. Sensitivity analysis shows that the distribution degree increases if we set 'the number of subjects in the same day-hour' to more than 90%.
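A hedged sketch of an entropy-based distribution index for a timetable: the more evenly classes spread over day-hour slots, the closer the index is to 1. Normalizing by the log of the number of slots is an assumption, since the abstract does not spell out the exact formula.

```python
import math

def distribution_degree(slot_counts):
    """slot_counts: number of classes placed in each day-hour slot.
    Returns a value in [0, 1]; 1.0 means perfectly even spread."""
    total = sum(slot_counts)
    if total == 0 or len(slot_counts) < 2:
        return 0.0
    probs = [c / total for c in slot_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(slot_counts))  # normalize to [0, 1]

print(distribution_degree([3, 3, 3, 3]))   # evenly spread  -> 1.0
print(distribution_degree([12, 0, 0, 0]))  # concentrated   -> 0.0
```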

Design and Implementation of IoT based Low cost, Effective Learning Mechanism for Empowering STEM Education in India

  • Simmi Chawla; Parul Tomar; Sapna Gambhir
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.163-169 / 2024
  • India is a developing nation and has come a long way in modernizing its economy, reducing poverty, and raising living standards for a large segment of its residents, and STEM (Science, Technology, Engineering, and Mathematics) education plays an important role in this. STEM is an educational curriculum that emphasizes the subjects of science, technology, engineering, and mathematics. In the traditional education scenario, these subjects are taught independently, but the educational philosophy of STEM teaches them together in project-based lessons, helping students in their holistic development. Youth unemployment is a major concern due to the lack of adequate skills; there is a huge skill gap behind jobless engineers, and the question arises: how can we prepare engineers for a better tomorrow? Industry 4.0, the fourth industrial revolution, is the intelligent networking of machines and processes in industry through ICT, based on the use of cyber-physical systems and the Internet of Things (IoT). The industrial revolution influences not only production but also the educational system. IoT in academics is a new revolution in Internet technology that has introduced "smartness" into the entire IT infrastructure. To improve the socio-economic status of India, students must be equipped with 21st-century digital skills, and universities and colleges must provide individual learning kits that can help students enhance their productivity and learning outcomes. The major goal of this paper is to present a low-cost, effective learning mechanism for STEM implementation using the Raspberry Pi 3+ model (a single-board computer) and Node-RED, an open-source visual programming tool developed by IBM for wiring hardware devices together. These tools are broadly used to provide hands-on experience with IoT fundamentals during teaching and learning. This paper elaborates the appropriateness and practicality of these concepts via an example: implementing a user interface (UI) and dashboard in Node-RED, where the dashboard palette is used for demonstration with switch, slider, and gauge nodes, and the Raspberry Pi palette is used to connect to the GPIO pins on the Raspberry Pi board. An LED is connected to a GPIO pin configured as an output. The experiment shows that the Node-RED dashboard can be accessed on the Raspberry Pi and via a smartphone as well, and the results are presented in detail. Conversely, inadequate programming skills among students are the biggest challenge, because without good programming skills there would be no pioneers in engineering, robotics, and other areas. Coding plays an important role in increasing knowledge on a wide scale and in encouraging students' interest in programming. Python, an open-source language and one of the most in-demand languages in industry for data science and algorithms, is used here: a small experiment is also done with an LED by writing source code in Python. These tiny experiments are really helpful in encouraging students and offering a playful way to learn these advanced technologies. The cost estimation for each learning kit provided to students for hands-on experiments is presented in tabular form, and some popular open-source tools for experimenting with IoT technology are described. Students can enrich their knowledge by doing many experiments with this freely available software and this low-cost hardware, in labs or with the learning kits provided to them.
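A minimal sketch of the Python LED experiment described above, using the widely used RPi.GPIO library on a Raspberry Pi; the pin number (BCM 18) and blink timing are illustrative assumptions.

```python
import time
import RPi.GPIO as GPIO

LED_PIN = 18                      # BCM numbering; LED + resistor wired here

GPIO.setmode(GPIO.BCM)            # refer to pins by Broadcom channel number
GPIO.setup(LED_PIN, GPIO.OUT)     # configure the pin as an output

try:
    for _ in range(10):           # blink the LED ten times
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # release the GPIO channels on exit
```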

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin; Kwon, Do Young; Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released on the Internet in real time. For that reason, many researchers and marketers regard social media content as a source of information for business analytics and business insights, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they focus on determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining of social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different way to gain access, such as an open API, search tools, a DB-to-DB interface, or content purchasing. The second phase is pre-processing, which generates useful material for meaningful analysis; unless garbage data is removed, social media analysis will not provide meaningful and useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply status, and favorites. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis usually relate to market trend analysis, while sentiment analysis is used for reputation analysis, and there are various further applications such as stock prediction, product recommendation, and sales forecasting. The last phase is the visualization and presentation of analysis results. The focus of this phase is to explain the results and help users comprehend their meaning, so the deliverables should be made as simple, clear, and easy to understand as possible, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing, and we classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, a topic word cloud, heat maps, a valence tree map, and other visualized images, providing vivid, full-color examples built with the open-source packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume across categories in a time matrix, where the density of color marks time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly grasp the "big picture" business situation through its hierarchical structure, since a tree map can present buzz volume and sentiment in a single visualized result for a given period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how they can use such results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
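As a small illustration of the analyze-then-visualize step, the sketch below tallies daily positive and negative mentions and draws a volume-and-sentiment bar chart. The case study itself uses R packages (tm, KoNLP, ggplot2, plyr); this Python/matplotlib version is only a stand-in with made-up data.

```python
from collections import defaultdict
import matplotlib.pyplot as plt

# (date, polarity) pairs produced by an upstream sentiment analysis step.
posts = [
    ("2014-01-01", "pos"), ("2014-01-01", "neg"), ("2014-01-02", "pos"),
    ("2014-01-02", "pos"), ("2014-01-03", "neg"),
]

daily = defaultdict(lambda: {"pos": 0, "neg": 0})
for date, polarity in posts:
    daily[date][polarity] += 1     # tally mentions per day and polarity

dates = sorted(daily)
plt.bar(dates, [daily[d]["pos"] for d in dates], label="positive")
plt.bar(dates, [-daily[d]["neg"] for d in dates], label="negative")
plt.legend()
plt.title("Daily buzz volume by sentiment")
plt.show()
```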

A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi; Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.177-198 / 2018
  • Recently, as the character of water resources has shifted toward that of a public good, it has become necessary to acquire and utilize a framework for measuring and managing the value and performance of water resource technologies as economic property. To date, the valuation of water technology has been carried out through feasibility studies or technology assessments based on net present value (NPV) or the benefit-to-cost (B/C) ratio; however, no valuation model has yet been systematized to objectively assess the economic value of technology-based business so that research outcomes can be diffused and fed back. Therefore, K-water (a government-supported public company in Korea) saw the need to establish a technology valuation framework suited to the technical characteristics of the water resources field and to verify it with an exemplary case. The valuation approach applied in this study can be used as a tool to measure and manage the value and achievements that a technology, as a public-interest good, contributes to society. By calculating the value the subject technology contributes to society as a whole as a public resource, K-water can use it as basic information for publicizing the benefits achieved or the necessity of cost input, and thereby secure legitimacy for large-scale R&D spending given the characteristics of public technology. Hence, K-water, a public corporation in Korea dealing with the public good of water resources, will be able to establish a commercialization strategy for business operation and prepare a basis for calculating the performance of its R&D investment. In this study, K-water developed a web-based technology valuation model for public-interest water resource technologies, based on a technology valuation system suited to the characteristics of technologies in the water resources field. In particular, by adapting the valuation methodology of the Institute of Advanced Industrial Science and Technology (AIST) in Japan to match expense items to expense accounts based on the related benefit items, we propose the so-called 'K-water proprietary model', which combines the cost-benefit approach with free cash flow (FCF); we then build a pipeline into the K-water research performance management system and verify the model with a practical case of a technology related to "desalination". We analyze the embedded design logic and evaluation process of the web-based valuation system, which reflects the characteristics of water resource technology, along with the reference information and database (DB) logic each model uses to calculate public-interest-based and profit-based technology values in the integrated technology management system. We also review the hybrid evaluation module, which turns the qualitative evaluation indices reflecting the unique characteristics of water resources into quantitative indices, and the visualized user interface (UI) of the actual web-based valuation; both are added to the existing web-based technology valuation systems of other fields for calculating business value based on financial data. K-water's technology valuation model distinguishes between public-interest and profit-oriented water technologies. First, the evaluation modules in the profit-oriented valuation model are designed around the profitability of the technology; for example, K-water's technology inventory holds a number of profit-oriented technologies such as water treatment membranes. On the other hand, the public-interest valuation model is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. To examine the appropriateness of the cost-benefit-based public utility valuation model presented in this study (i.e., the K-water-specific technology valuation model), we applied it to practical cases, calculating a benefit-to-cost analysis for a water resource technology with a 20-year lifetime. In future work, we will additionally verify the K-water public-utility valuation model for each business model, reflecting various business environment characteristics.
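A hedged sketch of the cost-benefit arithmetic underlying such a valuation: discount 20 years of benefit and cost streams, then report the NPV and the benefit-to-cost (B/C) ratio. The cash flows and discount rate are invented for illustration, not taken from the paper.

```python
def present_value(cash_flows, rate):
    """Discount yearly cash flows (years 1..n) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

YEARS = 20                         # the 20-year technology lifetime above
benefits = [120.0] * YEARS         # illustrative yearly public benefit
costs = [80.0] * YEARS             # illustrative yearly cost input
rate = 0.045                       # assumed social discount rate

pv_b = present_value(benefits, rate)
pv_c = present_value(costs, rate)
print(f"NPV = {pv_b - pv_c:.1f}, B/C = {pv_b / pv_c:.2f}")  # B/C > 1 passes
```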