• Title/Summary/Keyword: Decision Making and Information Source

Empirical Study on Factors Influencing the Value of Mobile Advertising: From the Perspective of Information Value (정보 가치 관점에서 바라본 모바일 광고 가치의 설명 요인에 관한 실증적 연구)

  • Park, Chul-Woo;Ahn, Joong-Ho;Jahng, Jung-Joo;Kim, Eun-Jin
    • Information Systems Review
    • /
    • v.8 no.2
    • /
    • pp.29-49
    • /
    • 2006
  • The introduction of the digital economy has created a new market for producing and trading information. Depending on their current context, users evaluate identical information differently. It is difficult, though important, to create and deliver information customized to individual users using factors such as time, location, and personal characteristics. Information value, therefore, can be influenced by the capability of information systems to deliver useful information, based on individual contexts, to the right user at the right time. From this point of view, we argue that mobile systems, which can sense individual contexts and deliver contextual information in real time, can improve information value more easily than other systems can. This research presents the results of an empirical test of antecedents to mobile advertising value. Although context relevance does not directly influence mobile advertising value, it plays an important role in enhancing information usefulness, which has a great influence on mobile advertising value. Moreover, supplying information connected with users' context overcomes the effect of irritation. Lastly, entertainment can improve mobile advertising value by satisfying users' hedonic desires, beyond the role of the advertisement as an information source supporting decision making.

Current Status of Tire Recycling in Taiwan

  • Shanshin Ton;Taipau Chia;Lee, Ming-Huang;Chien, Yeh-Chung;Shu, Hung-Yee
    • Proceedings of the IEEK Conference
    • /
    • 2001.10a
    • /
    • pp.230-235
    • /
    • 2001
  • There are more than 15 million cars and motorcycles in Taiwan. According to statistics from the Environmental Protection Administration, the resulting scrap tires amount to nearly 110 thousand tons each year. Tire recycling programs in Taiwan were first conducted in 1989 and executed by the ROC Scrap Tire Foundation. However, the efficiency of the tire recycling industry still needs to be improved to minimize the environmental problems and fire hazards caused by scrap tire storage. Ten major tire-recycling factories are surveyed in this study. The investigation covers the sources of scrap tires, the shredding process, the market for products, the management of waste disposal, and the difficulties faced by these sectors. Because the recycling factories use a variety of shredding machines, there are three kinds of final products: powder, granules, and chips. The wastes, wires and fibers, produced by the shredding process are the major problem for all the factories, and the percentage of wire and fiber removed from the rubber still needs to be increased. The best approaches found in this study to increase the efficiency of scrap tire recycling are proposed; they include improving the magnetic separation and fiber/rubber separation systems and minimizing waste disposal. A categorized standard for the processing outputs is suggested as a reference for the decision-making of the tire-recycling factories.

A Study on Deduction of Standard API Sharing Data Elements for Policy Study Information Sharing (정책연구정보 공유를 위한 표준 API 공유 데이터 요소 도출에 관한 연구)

  • Park, Yang-Ha
    • Journal of the Korean BIBLIA Society for Library and Information Science
    • /
    • v.32 no.1
    • /
    • pp.391-413
    • /
    • 2021
  • Policy study information is an essential source of information at every step of the decision-making process used to plan, execute, and assess national policy. The policy studies of a national policy research center, from study design to the assessment of their practical effect, are managed through a thorough process to secure their effectiveness and efficiency. However, the information directly exposed to practitioners or to the public who need actual policy study information is the resource published in the form of a policy study report, the final result. NKIS, operated by the National Research Council for Economics, Humanities and Social Sciences under the Office for Government Policy Coordination, Prime Minister's Secretariat, is a public information service that provides integrated management of study reports from cooperative studies among institutes, along with policy outcomes from 27 national policy research centers. This study aims to introduce the current status of the operation and information management of NKIS, examine the management characteristics of the policy study information resources of national policy research centers, and derive the elements that need to be considered in a standard API for sharing data with external services.
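
As a rough illustration of what a standardized sharing data element might look like in such an API, the sketch below models a policy-study record as a small Python dataclass and serializes it to JSON. The field names (identifier, title, institute, report_url, and so on) are hypothetical placeholders for illustration, not the elements actually derived in the study.

```python
# A minimal sketch of a hypothetical standardized sharing record for a
# policy-study information API. Field names are illustrative only.
from dataclasses import dataclass, asdict, field
from typing import List
import json


@dataclass
class PolicyStudyRecord:
    identifier: str            # persistent ID assigned by the providing institute
    title: str                 # report title
    institute: str             # national policy research center
    year: int                  # publication year
    subjects: List[str] = field(default_factory=list)  # subject keywords
    report_url: str = ""       # landing page or full-text URL


record = PolicyStudyRecord(
    identifier="NKIS-EXAMPLE-0001",
    title="Example policy study report",
    institute="Example Research Institute",
    year=2021,
    subjects=["policy study", "information sharing"],
    report_url="https://example.org/report/0001",
)

# The API could exchange such records as JSON payloads.
print(json.dumps(asdict(record), ensure_ascii=False, indent=2))
```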

The Big Data Analytics Regarding the Cadastral Resurvey News Articles

  • Joo, Yong-Jin;Kim, Duck-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.6
    • /
    • pp.651-659
    • /
    • 2014
  • With the popularization of the big data environment, big data have been highlighted as a key information strategy for establishing the national spatial data infrastructure needed for scientific land policy and the extension of the creative economy. Especially interesting from our point of view is that cadastral information is a core national information source, forming the basis of the spatial information involved in people's daily lives, including the production and consumption of information related to real estate. The purpose of our paper is to suggest a scheme for big data analytics of news articles on the cadastral resurvey project, in order to approach cadastral information in terms of spatial data integration. As the specific research method, the tm (text mining) package from R was used to read various formats of news reports as texts, and nouns were extracted using the KoNLP package. That is, we searched the main keywords regarding the cadastral resurvey, performed compound-noun extraction and data mining analysis, and presented a visualization of the results. In addition, news reports related to the cadastral resurvey between 2012 and 2014 were collected from newspapers, and nouns were extracted from the collected data for the data mining analysis of cadastral information. Furthermore, the approval rating, reliability, and suggested improvements to rules were examined through correlation analyses among the extracted compound nouns. As a result of the correlation analysis among the most frequently used of the extracted nouns, five groups of data consisting of 133 keywords were generated. The most frequently appearing words were "cadastral resurvey," "civil complaint," "dispute," "cadastral survey," "lawsuit," "settlement," "mediation," "discrepant land," and "parcel." In conclusion, the cadastral resurvey performed in some local governments has been proceeding smoothly with positive results. On the other hand, disputes from landowners over parcel surveying for the cadastral resurvey have been provoking a stream of civil complaints. Through such keyword analysis, the types of public opinion and civil complaints related to the cadastral resurvey project can be identified and prevented through pre-emptive responses, such as a direct call centre for cadastral surveying, electronic civil services, and customer counseling, and high-quality services for cadastral information can be provided. This study, therefore, provides a stepping stone toward big data analytics that can comprehensively examine and visualize a variety of news reports and opinions on the promotion of the cadastral resurvey project. Henceforth, this will contribute to establishing the foundation for a framework of information utilization, enabling fast and accurate scientific decision making.
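
The pipeline described above (reading news articles as text, extracting nouns, and counting keyword frequencies) can be sketched in Python. The original study used R's tm and KoNLP packages, so the konlpy-based version below is only an analogous illustration, and the sample sentences are placeholders rather than the study's corpus.

```python
# An analogous Python sketch of the noun-extraction and keyword-frequency step;
# the paper itself used R's tm and KoNLP packages. Requires konlpy (and a JVM).
from collections import Counter

from konlpy.tag import Okt  # Korean morphological analyzer

# Placeholder article texts; in the study these were news reports on the
# cadastral resurvey collected between 2012 and 2014.
articles = [
    "지적재조사 사업과 관련한 민원과 분쟁이 이어지고 있다.",
    "지적재조사 측량 결과를 둘러싼 소송과 조정 사례가 보고되었다.",
]

okt = Okt()
nouns = [n for text in articles for n in okt.nouns(text) if len(n) > 1]

# Keyword frequencies, the basis for the co-occurrence/correlation analysis.
for word, count in Counter(nouns).most_common(10):
    print(word, count)
```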

The Impact of Nuclear Power Generation on Wholesale Electricity Market Price (원자력발전이 전력가격에 미치는 영향 분석)

  • Jung, Sukwan;Lim, Nara;Won, DooHwan
    • Environmental and Resource Economics Review
    • /
    • v.24 no.4
    • /
    • pp.629-655
    • /
    • 2015
  • Nuclear power generation is a major power source, accounting for more than 30% of domestic electricity generation, and the electricity market needs to secure the stability of its base load. This study analyzes the relationship between nuclear power generation and the wholesale electricity price (SMP: System Marginal Price) in Korea. For this, we applied the ARDL (Autoregressive Distributed Lag) approach and the Granger causality test. We found that, in terms of total effects, nuclear power supply had a positive relationship with SMP, while nuclear capacity had a negative relationship with SMP. There is a unidirectional Granger causality from nuclear power supply to SMP, while the reverse does not hold. Nuclear power is closely related to SMP and provides useful information for decision making.
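
As a rough illustration of the two techniques named above, the Python sketch below fits an ARDL model and runs a Granger causality test with statsmodels on synthetic series; the actual study used Korean SMP and nuclear supply data, so the data, lag orders, and variable names here are assumptions.

```python
# Illustrative ARDL fit and Granger causality test on synthetic series; the
# study used actual Korean SMP and nuclear supply data, so everything below
# (data, lag orders, variable names) is an assumption for demonstration only.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 120  # e.g., ten years of monthly observations (assumed)

nuclear_supply = rng.normal(size=n)
# Let SMP depend weakly on lagged nuclear supply plus noise (a synthetic link).
smp = 0.3 * np.roll(nuclear_supply, 1) + rng.normal(size=n)

# Drop the first row, where np.roll wraps around.
data = pd.DataFrame({"smp": smp, "nuclear_supply": nuclear_supply}).iloc[1:]

# ARDL: SMP explained by its own lags and lags of nuclear supply.
ardl_res = ARDL(data["smp"], lags=2, exog=data[["nuclear_supply"]], order=2).fit()
print(ardl_res.params)

# Granger causality: does nuclear supply (2nd column) help predict SMP (1st)?
grangercausalitytests(data[["smp", "nuclear_supply"]], maxlag=4)
```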

Bayesian Network-based Probabilistic Management of Software Metrics for Refactoring (리팩토링을 위한 소프트웨어 메트릭의 베이지안 네트워크 기반 확률적 관리)

  • Choi, Seunghee;Lee, Goo Yeon
    • Journal of KIISE
    • /
    • v.43 no.12
    • /
    • pp.1334-1341
    • /
    • 2016
  • In recent years, the importance of managing software defects in the implementation stage has emerged because of the rapid development and wide-ranging usage of intelligent smart devices. Although quite a few studies have been conducted on prediction models for software defects, their outcomes have not been widely shared. This paper proposes an efficient probabilistic management model for software metrics based on a Bayesian network, to overcome the limits of binary defect prediction models. We expect the proposed model, which configures the Bayesian network by taking advantage of various software metrics, to help identify improvements for refactoring. Once the source code has been improved through refactoring, the related measured metric values will also change. The proposed model presents probability values reflecting the effects after defect removal, which can be achieved by improving metrics through refactoring. By replacing conclusive binary predictions with probability values, the model secures flexibility in decision making.
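
To make the idea of probabilistic (rather than binary) defect assessment concrete, here is a small sketch using pgmpy. The network structure, metric names, and probability tables are invented for illustration and are not the model configured in the paper.

```python
# A toy Bayesian network relating two (hypothetical) software metrics to defect
# proneness; the structure and CPDs are invented, not the paper's actual model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Complexity", "Defect"), ("Coupling", "Defect")])

cpd_complexity = TabularCPD("Complexity", 2, [[0.7], [0.3]])  # 0=low, 1=high
cpd_coupling = TabularCPD("Coupling", 2, [[0.6], [0.4]])
cpd_defect = TabularCPD(
    "Defect", 2,
    # P(Defect | Complexity, Coupling), one column per parent-state combination
    [[0.95, 0.80, 0.75, 0.40],   # Defect = 0
     [0.05, 0.20, 0.25, 0.60]],  # Defect = 1
    evidence=["Complexity", "Coupling"], evidence_card=[2, 2],
)
model.add_cpds(cpd_complexity, cpd_coupling, cpd_defect)
assert model.check_model()

# After refactoring lowers complexity, the defect probability (not a binary
# verdict) can be re-evaluated with the new metric values as evidence.
infer = VariableElimination(model)
print(infer.query(["Defect"], evidence={"Complexity": 0, "Coupling": 1}))
```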

An Efficient Incremental View Maintenance in Data Warehouses (데이타 웨어하우스에서 효과적인 점진적 뷰 관리)

  • Lee, Ki-Yong;Kim, Myoung-Ho
    • Journal of KIISE:Databases
    • /
    • v.27 no.2
    • /
    • pp.175-184
    • /
    • 2000
  • A data warehouse is an integrated and summarized collection of data that can efficiently support the decision-making process. The summarized data in the data warehouse are often stored in materialized views, and these materialized views need to be updated when source data change. Since propagating updates to the views may impose a significant overhead, it is very important to update the warehouse views efficiently. Although various strategies have been proposed to maintain views in the past, they typically require too many accesses to the data sources when the changes of multiple data sources have to be reflected in the view. In this paper we propose an efficient view update strategy that uses a relatively small number of accesses to the data sources. We also show the performance advantage of our method over other existing methods through experiments using TPC-D data and queries.
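
The core idea of incremental view maintenance, applying only the deltas of the source relations to a materialized summary instead of recomputing it, can be sketched as follows. The schema (a sales view keeping SUM and COUNT per region) and the delta format are assumptions for illustration, not the paper's actual algorithm.

```python
# A minimal sketch of incremental maintenance for a materialized aggregate view
# (SUM and COUNT of sales per region). The schema and delta format are assumed;
# the paper's actual strategy minimizes accesses across multiple data sources.
from collections import defaultdict

# Materialized view: region -> [sum_of_amount, row_count]
view = defaultdict(lambda: [0.0, 0])


def apply_delta(view, inserts, deletes):
    """Propagate inserted/deleted source rows (region, amount) into the view."""
    for region, amount in inserts:
        view[region][0] += amount
        view[region][1] += 1
    for region, amount in deletes:
        view[region][0] -= amount
        view[region][1] -= 1
        if view[region][1] == 0:  # drop groups that no longer have any rows
            del view[region]
    return view


apply_delta(view, inserts=[("east", 100.0), ("west", 40.0)], deletes=[])
apply_delta(view, inserts=[("east", 25.0)], deletes=[("west", 40.0)])
print(dict(view))  # {'east': [125.0, 2]}
```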

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in the area of data mining is the size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storages or databases; transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size, while others are lost as soon as they are created because, for various reasons, they are never saved. How to use such large data sets, and how to use data streams efficiently, are challenging questions in the study of data mining. Stream data is a data set that is accumulated continuously into data storage from a data source, and its size in many cases becomes increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time; these characteristics of stream data make it difficult and expensive to store all the stream data sets accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of rule sets over time. A rule set is mined from each data set in the stream, and this rule set is accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space than the traditional method, which saves the whole data set. Another advantage is that the accumulated rule set is used as a prediction model: prompt responses to user requests are possible at any time, as the rule set is always ready to be used for decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on the theory of ensemble approaches, a combination of many different models can produce a better-performing prediction model, and the consolidated rule set covers all the data while the traditional sampling approach covers only part of the whole data set. This study uses stock market data, a heterogeneous data set whose characteristics vary over time. The indexes in stock market data can fluctuate in different situations whenever there is an event influencing the stock market index, so the variance of the values of each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous data set, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with the method suggested in this study. The first approach induces a rule set from the most recent data set to predict the new data set; the second induces a rule set from all the data accumulated from the beginning every time a new data set has to be predicted. We found that neither of these performs as well as the accumulated rule set method. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model only with the more important rules, and the second uses all the rules, assigning weights to them based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. One limitation is that its application here is bound to stock market data; a more dynamic real-time stream data set is desirable for applying this method. Another open problem is that, as the number of rules increases over time, special rules such as redundant or conflicting rules must be managed efficiently.
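
A toy sketch of the accumulation idea: each window of the stream yields a few rules, the rules are merged into a master rule set, and predictions are made by performance-weighted voting. The rule representation and weighting scheme below are simplified assumptions, not the paper's exact procedure.

```python
# A simplified sketch of accumulating per-window rules into a master rule set
# and predicting by performance-weighted voting; the rule format and weights
# are illustrative assumptions, not the paper's exact method.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, float]], bool]  # fires on a feature record
    prediction: str                                # e.g., "up" or "down"
    weight: float = 1.0                            # updated from past performance


master_rules: List[Rule] = []


def accumulate(new_rules: List[Rule]) -> None:
    """Merge rules mined from the latest stream window into the master set."""
    master_rules.extend(new_rules)


def predict(record: Dict[str, float]) -> str:
    """Weighted vote over all accumulated rules whose condition fires."""
    votes: Dict[str, float] = {}
    for rule in master_rules:
        if rule.condition(record):
            votes[rule.prediction] = votes.get(rule.prediction, 0.0) + rule.weight
    return max(votes, key=votes.get) if votes else "unknown"


# Rules mined from two successive windows of a (hypothetical) stock stream.
accumulate([Rule("r1", lambda x: x["momentum"] > 0, "up", weight=0.8)])
accumulate([Rule("r2", lambda x: x["volume"] > 1.5, "down", weight=0.6)])

print(predict({"momentum": 0.4, "volume": 2.0}))  # weighted vote -> "up"
```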

A Deep Learning Based Recommender System Using Visual Information (시각 정보를 활용한 딥러닝 기반 추천 시스템)

  • Moon, Hyunsil;Lim, Jinhyuk;Kim, Doyeon;Cho, Yoonho
    • Knowledge Management Research
    • /
    • v.21 no.3
    • /
    • pp.27-44
    • /
    • 2020
  • In order to solve the user's information overload problem, recommender systems infer users' preferences and suggest items that match them. Collaborative filtering (CF), the most successful recommendation algorithm, has continued to improve in performance and has been applied to various business domains. Visual information, such as book covers, can influence consumers' purchase decisions, yet CF-based recommender systems have rarely considered visual information. In this study, we propose VizNCS, a CF-based deep learning model that uses visual information as additional information. VizNCS consists of two phases. In the first phase, we build convolutional neural networks (CNN) to extract visual features from image data. In the second phase, we supply the visual features to the NCF model, which, among deep learning-based recommender systems, is known to be easy to extend with other information. In the performance comparison experiments, VizNCS showed higher performance than the vanilla NCF. We also conducted an additional experiment to see whether visual information has different effects depending on the product category, which enables us to identify which categories were affected and which were not. We expect VizNCS to improve recommender system performance and to expand the recommender system's data sources to visual information.
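
The two-phase idea can be sketched with PyTorch: a CNN backbone produces an image feature vector, which is concatenated with user and item embeddings inside an NCF-style MLP. The backbone choice (ResNet-18) and layer sizes are assumptions; this is not the authors' VizNCS implementation.

```python
# A sketch of an NCF-style model extended with CNN visual features (phase 1:
# feature extraction, phase 2: recommendation). Backbone and layer sizes are
# assumptions; this is not the authors' VizNCS implementation.
import torch
import torch.nn as nn
from torchvision import models


class VisualNCF(nn.Module):
    def __init__(self, n_users, n_items, emb_dim=32, visual_dim=512):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim * 2 + visual_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, user_ids, item_ids, visual_feats):
        x = torch.cat(
            [self.user_emb(user_ids), self.item_emb(item_ids), visual_feats], dim=1
        )
        return torch.sigmoid(self.mlp(x)).squeeze(1)  # predicted preference


# Phase 1: extract 512-d visual features with a ResNet-18 backbone
# (use ResNet18_Weights.DEFAULT in practice to get pretrained features).
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()          # drop the classification head
backbone.eval()
with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)          # placeholder item images
    visual_feats = backbone(images)               # shape: (4, 512)

# Phase 2: score user-item pairs with the visual features as side information.
model = VisualNCF(n_users=100, n_items=4)
scores = model(torch.tensor([0, 1, 2, 3]), torch.tensor([0, 1, 2, 3]), visual_feats)
print(scores.shape)  # torch.Size([4])
```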

A Study on Dataset Generation Method for Korean Language Information Extraction from Generative Large Language Model and Prompt Engineering (생성형 대규모 언어 모델과 프롬프트 엔지니어링을 통한 한국어 텍스트 기반 정보 추출 데이터셋 구축 방법)

  • Jeong Young Sang;Ji Seung Hyun;Kwon Da Rong Sae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.11
    • /
    • pp.481-492
    • /
    • 2023
  • This study explores how to build a Korean dataset for extracting information from text using generative large language models. In modern society, mixed information circulates rapidly, and effectively categorizing and extracting it is crucial to the decision-making process. However, there is still a lack of Korean datasets for training. To overcome this, this study attempts to extract information through text-based zero-shot learning with a generative large language model in order to build a purpose-specific Korean dataset. The language model is instructed to output the desired result through prompt engineering in the form "system"-"instruction"-"source input"-"output format", and the dataset is built by exploiting the in-context learning characteristics of the language model through the input sentences. We validate our approach by comparing the generated dataset with an existing benchmark dataset, and achieve 25.47% higher performance than the KLUE-RoBERTa-large model on the relation information extraction task. The results of this study are expected to contribute to AI research by showing the feasibility of extracting knowledge elements from Korean text. Furthermore, this methodology can be utilized for various fields and purposes, and has potential for building various Korean datasets.
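
The prompt structure described above ("system"-"instruction"-"source input"-"output format") can be sketched as below. The wording of each part, the example sentence, and the JSON output schema are illustrative assumptions, not the study's actual prompts.

```python
# A sketch of assembling a zero-shot extraction prompt in the
# "system"-"instruction"-"source input"-"output format" structure described
# above; the wording and the JSON output schema are illustrative assumptions.
import json


def build_messages(source_text: str) -> list:
    system = "You are an information extraction assistant for Korean text."
    instruction = (
        "Extract every (subject, relation, object) triple from the source input."
    )
    output_format = (
        'Return JSON only: {"triples": [{"subject": ..., "relation": ..., "object": ...}]}'
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"{instruction}\n\n"
                                    f"Source input:\n{source_text}\n\n"
                                    f"Output format:\n{output_format}"},
    ]


messages = build_messages("세종대왕은 1443년에 훈민정음을 창제하였다.")
print(json.dumps(messages, ensure_ascii=False, indent=2))

# These messages could then be sent to any chat-style large language model, and
# the returned JSON parsed into labeled examples for the Korean dataset.
```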