• Title/Summary/Keyword: Users Knowledge


A Study on the Intention to Use ChatGPT Focusing on the Moderating Effect of the MZ Generation (MZ세대의 조절효과를 중심으로 한 ChatGPT의 사용의도에 관한 연구)

  • Yang-bum Jung;Jungmin Park;Hyoung-Yong Lee
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.111-127
    • /
    • 2023
  • This study examines user perceptions of ChatGPT. Its goal is to analyze, using variables from the TRA (Theory of Reasoned Action), how users' policy expectations and innovativeness affect ChatGPT technology acceptance and intention to use. The effects of policy expectations and user innovativeness on intention to use, mediated by usefulness and hedonic motivation, and the effects of subjective norms on usefulness and intention to use, were analyzed separately for the MZ generation and the non-MZ generation. The study also verified whether age differences, interacting with policy expectations, moderate the effect on usefulness. An online survey of 300 ChatGPT users was conducted, and statistical analysis was performed with PLS (Partial Least Squares) structural equation modeling and the SPSS package. The results confirm that the higher an early user's innovativeness, the higher the intention to use ChatGPT. In addition, the moderating-effect analysis comparing the MZ and non-MZ generations showed that policy expectations had a negative effect on the perceived usefulness of ChatGPT.
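The moderation test described above can be illustrated with an interaction-term regression. The sketch below is a minimal stand-in using OLS with toy data and illustrative variable names; the study itself used PLS structural equation modeling, not this simplified model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data; variable names are illustrative, not the study's measurement items.
df = pd.DataFrame({
    "policy_expectation": [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.8, 3.1],
    "usefulness":         [3.5, 4.0, 3.0, 4.2, 3.8, 2.9, 4.6, 3.3],
    "mz_generation":      [1, 1, 0, 1, 0, 0, 1, 0],  # 1 = MZ, 0 = non-MZ
})

# "a * b" in the formula expands to both main effects plus their interaction;
# a significant interaction coefficient indicates that generation moderates
# the effect of policy expectations on perceived usefulness.
model = smf.ols("usefulness ~ policy_expectation * mz_generation", data=df).fit()
print(model.summary())
```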

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.159-185
    • /
    • 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends for suggestions, search for information on the Web, or seek advice from travel agents; however, those options have their limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet may be vast, but it can force one to invest a lot of time reading and filtering it. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet, exploring the wisdom of the crowd through the vast numbers of images shared by people on social media sites. Furthermore, trip planning problems are usually formulated as 'Tourist Trip Design Problems' and solved using various search algorithms with heuristics. Recommendation systems based on various techniques have also been set up to cope with the overwhelming tourism information available on the internet. Prediction models of recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. To enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace. In the shared workspace, the crowd can recommend as many POIs to as many requesters as they can, and they can also vote on the POIs recommended by other people when they find them interesting. In CYTRIP, anyone can make a contribution by recommending POIs to requesters based on the requesters' specified preferences. CYTRIP takes the recommended POIs as input to build a multi-day trip itinerary, taking into account the user's preferences, the various time constraints, and the locations. The input then becomes a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3). A sequence of actions formulated in a domain file is used to achieve the goals of the planning problem, which are the recommended POIs to be visited. The multi-day trip planning problem is highly constrained: sometimes it is not feasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. To cope with an unachievable goal that could leave the other goals without a solution, CYTRIP selects a set of feasible POIs prior to the planning process. The planning problem is created for the selected POIs and fed into the planner. The solution returned by the planner is then parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built using PHP on the CodeIgniter Web Framework. To evaluate the proposed system, an online experiment was conducted. Its results show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary tailored to users' preferences and bound by their constraints, such as location or time constraints. The contributors also found CYTRIP a useful tool for collecting POIs from the crowd and planning a multi-day trip.
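The feasibility filtering step described above can be sketched as a simple selection over the crowd-recommended POIs. The greedy, vote-ordered heuristic and the data layout below are assumptions for illustration only; CYTRIP itself formulates the actual problem in PDDL3 and hands it to a planner.

```python
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    visit_minutes: int
    votes: int          # crowd votes from the shared workspace

def select_feasible(pois: list[POI], budget_minutes: int) -> list[POI]:
    """Greedily keep the most-voted POIs that still fit the time budget."""
    selected, used = [], 0
    for poi in sorted(pois, key=lambda p: p.votes, reverse=True):
        if used + poi.visit_minutes <= budget_minutes:
            selected.append(poi)
            used += poi.visit_minutes
    return selected

pois = [POI("museum", 120, 9), POI("park", 60, 7), POI("tower", 90, 4)]
print([p.name for p in select_feasible(pois, budget_minutes=200)])
# -> ['museum', 'park']  (tower dropped: it would exceed the time budget)
```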

Accession of Korea to the Nagoya Protocol and its Economic Impact Analysis on Korean Bioindustry Companies (우리나라의 나고야의정서의 가입이 바이오산업에 미치는 경제적 영향 분석)

  • Park, Yong-Ha;Kim, Joon Sun;Choi, Hyun-Ah
    • Journal of Environmental Policy
    • /
    • v.11 no.4
    • /
    • pp.39-57
    • /
    • 2012
  • This study analyzes the economic impact on Korean bioindustry companies of Korea's accession to the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (hereinafter 'the Protocol') once it enters into force. The cost analysis is based on the size of the bioindustry market, the dependency ratio on foreign genetic resources, and the ABS (Access and Benefit Sharing) ratio used as the royalty ratio. Korean bioindustry companies would have had to pay approximately 1.3-6.0 billion won in additional ABS costs for using foreign genetic resources if the Protocol had entered into force in 2009, and this cost is estimated at around 13.6-63.9 billion won in 2015. All ABS costs amount to less than about 0.01% of the total Korean bioindustry volume of the target years, which suggests that joining the Protocol will not significantly impact the bioindustry market in Korea. If the Protocol enters into force, genetic resource users will have to bear PIC (Prior Informed Consent) and MAT (Mutually Agreed Terms) costs before accessing genetic resources outside their own country, regardless of that country's accession status. These ABS costs and the terms on provided genetic resources will be determined by agreement between genetic resource users and providers. As genetic resource providers, Korean bioindustry companies will gain advantages in technology transfer agreements, royalties, licensing agreements, and taxes on profits from patents, including those involving traditional knowledge. They are also expected to gain various socio-economic benefits as providers, for example in patent litigation and regulatory proceedings. Considering together the advantages and disadvantages that Korean bioindustry companies will face, the socio-economic impact of the Nagoya Protocol on them is negligible regardless of Korea's accession status.
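The cost structure described above multiplies the market size by the foreign-resource dependency ratio and the ABS (royalty) ratio. The sketch below illustrates only that arithmetic; the parameter values are hypothetical placeholders, not the study's actual figures.

```python
def abs_cost(market_size_krw: float,
             foreign_dependency: float,
             abs_ratio: float) -> float:
    """Extra ABS cost a bioindustry sector would bear under the Protocol."""
    return market_size_krw * foreign_dependency * abs_ratio

# Hypothetical example: a 5-trillion-won market, 50% dependent on foreign
# genetic resources, with a 0.1% ABS royalty ratio.
cost = abs_cost(5e12, 0.50, 0.001)
print(f"estimated ABS cost: {cost:,.0f} won")  # 2,500,000,000 won (2.5 billion)
```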


A Comparative Case Study on the Adaptation Process of Advanced Information Technology: A Grounded Theory Approach for the Appropriation Process (신기술 사용 과정에 관한 비교 사례 연구: 기술 전유 과정의 근거이론적 접근)

  • Choi, Hee-Jae;Lee, Zoon-Ky
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.99-124
    • /
    • 2009
  • Many firms in Korea have adopted and used advanced information technology in an effort to boost efficiency. The process of adapting to the new technology, at the same time, can vary from one firm to another. As such, this research focuses on several relevant factors, especially the role of social interaction as a key variable influencing the technology adaptation process and its outcomes. Thus far, how a firm goes through the adaptation process to a new technology has not been fully explored. Previous studies on the changes undergone by a firm or an organization due to information technology have been pursued from various theoretical points of view, evolving from technological and institutional views to integrated socio-technical views. The technology adaptation process has been understood as something that evolves over time and has been regarded as cycles between misalignments and alignments, gradually approaching a stable aligned state. The adaptation process of a new technology was defined as an "appropriation" process by Poole and DeSanctis (1994). They suggested that this process is not automatically determined by the technology design itself; rather, people actively select how technology structures should be used, and adoption practices accordingly vary. However, the concepts of the appropriation process in these studies are not precise, and the suggested propositions are not clear enough to apply in practice. Furthermore, these studies do not substantially suggest which factors change during the appropriation process or what should be done to bring about effective outcomes. Therefore, the research objective of this study is to find the causes of differences in how advanced information technology is used and adopted across organizations. The study also explores how a firm's interaction with social as well as technological factors results in different organizational changes. The detailed objectives of this study are as follows. First, this paper focuses on the appropriation process of advanced information technology over the long run, looking into the reasons for diverse types of usage. Second, it categorizes the phases of the appropriation process and clarifies what changes occur and how they evolve during each phase. Third, it suggests guidelines for determining which strategies are needed at the individual, group, and organizational levels. To this end, a substantive grounded theory that can be applied to organizational practice was developed from a longitudinal comparative case study. The technology appropriation process was explored based on Structuration Theory (Giddens, 1984; Orlikowski and Robey, 1991) and Adaptive Structuration Theory (Poole and DeSanctis, 1994), which exemplify socio-technical views on organizational change through technology. Data were obtained from interviews, observations of medical treatment tasks, and questionnaires administered to group members who use the technology. Data coding was executed in three steps following the grounded theory approach. First, concepts and categories were developed from the interview and observation data in open coding. Next, in axial coding, categories were related to their subcategories along the lines of their properties and dimensions through the paradigm model. Finally, the grounded theory of the appropriation process was developed through the conditional/consequential matrix in selective coding. In this study, eight hypotheses about the adaptation process have been clearly articulated. We also found that the appropriation process evolves through three phases, namely "direct appropriation," "cooperate with related structures," and "interpret and make judgments." As appropriation moves to higher phases, users exhibit more varied types of instrumental use and attitudes. Moreover, previous structures such as "knowledge and experience," "belief that other members know and accept the use of technology," "horizontal communication," and "embodiment of the opinion collection process" evolve to higher degrees in their property dimensions. Furthermore, users continuously create new spirits and structures while removing some of the previous ones. Thus, from a longitudinal view, faithful and unfaithful appropriation methods appear recursively, but faithful appropriation gradually takes over. In other words, the concepts of spirits and structures change during the adaptation process over time for the purpose of alignment between the task and the other structures. These findings call for a revised or extended model of structural adaptation in the IS (Information Systems) literature, now that the vague adaptation process of previous studies has been clarified through this in-depth qualitative study, which identifies each phase with accuracy. In addition, based on these results, guidelines can be set up to help determine which strategies are needed at the individual, group, and organizational levels for effective technology appropriation. In practice, managers can focus on changes of spirits and the elevation of structural dimensions to achieve effective technology use.

The Study on Risk Factors Analysis and Improvement of VDT Syndrome in Nuclear Medicine (핵의학과 Video Display Terminals Syndrome 유해 요인 조사 및 개선에 관한 연구)

  • Kim, Jung-Soo;Kim, Seung-Jeong;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo;Han, In-Im;Joo, Yung-Soo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.61-66
    • /
    • 2010
  • Purpose: Departments of nuclear medicine have recently taken an interest in Video Display Terminal (VDT) syndrome, which includes musculoskeletal disorders, ophthalmologic disorders, problems attributed to electromagnetic waves, and stress disorders occurring among VDT workers, as the number of users grows and terminals are supplied rapidly and in large quantities. This study surveys the actual conditions related to VDT syndrome in the Department of Nuclear Medicine at Seoul National University Hospital (SNUH), identifies problems, and draws up a plan for future improvement. Its long-term aim is to establish awareness of VDT syndrome and to prevent it. Materials and Methods: The Department of Nuclear Medicine at SNUH comprises the principal division, the pediatric division, and the PET center. We assessed risk factors by visiting each division directly. The assessment used the "Checklist for VDT Work" of the Wonjin Working Environment Health Laboratory, which covers the conditions of VDT work, work tables, chairs, keyboards, and monitors, as well as working posture, health management, and other aspects of the working environment. The analysis results were verified by the Department of Occupational and Environmental Medicine, Hallym University Sacred Heart Hospital. Results: The analysis showed that VDT conditions in the Department of Nuclear Medicine at SNUH were generally good. Recently acquired work tables were ergonomically suitable for users, but 15% of the existing work tables fell below the standard. Chairs were generally suitable, although 5% had become superannuated and lost their optimal function. Keyboards met 98% of the standard. All monitors allowed screen-angle adjustment, but 38% did not allow position adjustment. Regarding working posture, 10% of workers maintained a fixed posture for long periods, and some checklist items did not meet the standard. Health management also needed improvement. Other working conditions, such as lighting, temperature, noise, and ventilation, showed some problems but satisfied the recommended values. Conclusion: VDT syndrome can occur continuously, is economically expensive to remedy, has various latent causes, and can arise without one's knowledge, so a systematic management system is needed. In nuclear medicine, VDT syndrome can be substantially mitigated and prevented through constant interest and effort: ergonomic improvement of the working environment, improvement of working procedures, regular exercise, and steady stretching. Keeping workers in good physical and mental condition in a comfortable working environment is expected to increase operational efficiency and the satisfaction of internal clients.
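A checklist-based survey like the one above reduces to per-category pass/fail tallies. The following minimal sketch, with illustrative category names and counts rather than the survey's raw data, shows how compliance rates such as the percentages reported here can be computed.

```python
# Illustrative checklist tallies (not the survey's actual item counts).
checklist_results = {
    "work tables": {"pass": 17, "fail": 3},   # e.g. ~15% below standard
    "chairs":      {"pass": 19, "fail": 1},
    "keyboards":   {"pass": 49, "fail": 1},
    "monitors":    {"pass": 31, "fail": 19},  # e.g. no position adjustment
}

for category, counts in checklist_results.items():
    total = counts["pass"] + counts["fail"]
    rate = 100 * counts["pass"] / total
    print(f"{category:12s} compliance: {rate:5.1f}% ({counts['fail']} items to fix)")
```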


Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages have been widely selected for developing modern information systems. The use of concepts relating to object-oriented (OO, in short) programming has reduced the effort of reusing pre-existing code, and OO concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. Unified Modeling Language, or UML, has become a de facto standard for information system designers, since the language provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, the conceptual model from UML gathers the attributes and methods needed to guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. The representation of part-whole relationships is natural in real-world domains, since many physical objects are perceived in terms of part-whole relationships; in addition, even abstract concepts such as roles are easily identified by part-whole perception. A representation of part-whole in UML thus seems reasonable and useful. However, it should be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregate or a composite association. Research efforts to develop such procedural knowledge are meaningful and timely, in that misleading perceptions of part-whole relationships are hard to filter out in initial conceptual modeling and thus lead to deterioration of system usability. The current method of identifying and classifying part-whole relationships relies mainly on linguistic expressions. This simple approach is rooted in the idea that a phrase representing has-a constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composite association of the part-whole relationship; otherwise, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not yet accumulated sufficient results to solve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the results of conceptual modeling, this study expands the scope to the modeling process itself. Traditional theories on the evaluation of part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations. Such qualification is still important, but the lack of a practical alternative may reduce the appropriateness of posterior inspection for modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
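The aggregation/composition distinction the paper classifies can be illustrated directly in code. In the minimal sketch below (illustrative classes, not from the study), the part in an aggregate association lives independently of the whole, while the parts in a composite association are owned by the whole and share its lifecycle.

```python
class Engine:
    """A part that can exist independently of any car."""
    def __init__(self, horsepower: int):
        self.horsepower = horsepower

class Wheel:
    """A part created and owned by a particular car."""
    def __init__(self, position: str):
        self.position = position

class Car:
    def __init__(self, engine: Engine):
        # Aggregate association: the engine is created elsewhere and merely
        # referenced, so it can outlive the car or be shared.
        self.engine = engine
        # Composite association: the wheels are created inside the whole,
        # and their lifecycle is bound to the car's lifecycle.
        self.wheels = [Wheel(p) for p in ("FL", "FR", "RL", "RR")]

engine = Engine(150)
car = Car(engine)
del car                     # the composed wheels disappear with the car...
print(engine.horsepower)    # ...but the aggregated engine lives on: 150
```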

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been carried out. Much social media on the Internet generates unstructured or semi-structured data every second, often written in the natural human languages we use in daily life. Many words in human languages have multiple meanings or senses, and as a result it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can produce incorrect results far from users' intentions. Even though a lot of progress has been made over the last years in enhancing the performance of search engines to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It tests the effectiveness of the method, based on the Naïve Bayes model, one of the supervised learning algorithms, using the Korean Standard Unabridged Dictionary and the Sejong Corpus. The Korean Standard Unabridged Dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both parts of speech and senses. For the experiments, the Korean Standard Unabridged Dictionary and the Sejong Corpus were tested both combined and as separate entities, using cross validation. Only nouns, the target subjects of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean Standard Unabridged Dictionary because it was tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given an extracted term vector and the sense vector model built during pre-processing, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean Standard Unabridged Dictionary and the Sejong Corpus: the experiments show that better precision and recall are obtained with the merged corpus. This suggests the method can practically enhance the performance of internet search engines and help us understand the meaning of a sentence more accurately in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of all senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
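The core idea above, building sense vectors from dictionary example sentences and classifying with Naïve Bayes, can be sketched compactly. The example below uses toy English data and naive whitespace tokenization purely for illustration; the study worked on Korean nouns with a morphological analyzer and far larger resources.

```python
from collections import Counter
import math

# Hypothetical training data: sense id -> dictionary example sentences.
examples = {
    "bank#1": ["deposit money at the bank", "the bank raised interest rates"],
    "bank#2": ["fishing on the river bank", "the bank of the stream eroded"],
}

# Per-sense term counts play the role of the paper's "sense vectors".
sense_counts = {s: Counter(w for sent in sents for w in sent.split())
                for s, sents in examples.items()}
vocab = {w for counts in sense_counts.values() for w in counts}

def classify(sentence: str) -> str:
    """Pick the sense maximizing the Naive Bayes log-score (add-one smoothing)."""
    best_sense, best_score = None, float("-inf")
    for sense, counts in sense_counts.items():
        total = sum(counts.values())
        score = 0.0  # a uniform prior over senses is assumed
        for w in sentence.split():
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(classify("interest rates at the bank"))  # -> bank#1
```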

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive increase of electronic commerce provides many advantageous purchase opportunities to customers. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists to them; thus, the product recommender system in an online shopping store has become known as one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause disappointment and wasted time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences precisely. The research data were collected from a real-world online shopping store dealing in products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions; 3,167 transactions remained after deleting records with null values. We transformed the categorical variables into dummy variables and excluded outlier data. The proposed model consists of two steps. The first step predicts customers who are highly likely to purchase products in the online shopping store. In this step, we first use logistic regression, decision trees, and artificial neural networks to predict, for each product group, the customers with a high likelihood of purchase. We performed these data mining techniques using SAS E-Miner software. We partitioned the datasets into two sets, modeling and validation, for the logistic regression and decision trees, and into three sets, training, test, and validation, for the artificial neural network model; the validation dataset was the same for all experiments. We then combined the results of each predictor using multi-model ensemble techniques, namely bagging and bumping. Bagging is the abbreviation of "Bootstrap Aggregation": it combines the outputs of several machine learning techniques to raise the performance and stability of prediction or classification, and is a special form of the averaging method. Bumping is the abbreviation of "Bootstrap Umbrella of Model Parameters": it keeps only the model with the lowest error value. The results show that bumping outperforms bagging and the other predictors except for the "Poster" product group, for which the artificial neural network model performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. Thirty-one association rules were extracted according to the lift, support, and confidence measures, with the minimum transaction frequency to support associations set to 5%, the maximum number of items in an association set to 4, and the minimum confidence for rule generation set to 10%. Extracted association rules with a lift value below 1 were excluded, and after removing duplicate rules, fifteen association rules remained. Among these, eleven rules capture associations between products within the "Office Supplies" product group, one rule captures an association between the "Office Supplies" and "Fashion" product groups, and the other three rules capture associations between the "Office Supplies" and "Home Decoration" product groups. Finally, the proposed product recommender system provides a list of recommendations to the appropriate customers. We tested the usability of the proposed system with a prototype and real-world transaction and profile data. To this end, we constructed the prototype system using ASP, JavaScript, and Microsoft Access. In addition, we surveyed user satisfaction with the recommended product list from the proposed system versus randomly selected product lists. The participants in the survey were 173 users of MSN Messenger, Daum Café, and P2P services. We evaluated user satisfaction on a five-point Likert scale and performed a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product list. The results also suggest that the proposed system may be useful in real-world online shopping stores.
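The two ensemble schemes named above can be sketched side by side. The following is a minimal illustration with synthetic data and decision trees as the base learner; the study combined logistic regression, decision trees, and neural networks in SAS E-Miner rather than this Python stand-in.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)

# Fit one shallow tree per bootstrap replicate of the training data.
models = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))
    models.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

# Bagging: average the predicted probabilities of all bootstrap models.
bagged = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0) > 0.5

# Bumping: keep only the single bootstrap model with the lowest error on
# the original (non-resampled) training set.
best = min(models, key=lambda m: np.mean(m.predict(X) != y))

print("bagging accuracy:", np.mean(bagged == y))
print("bumping accuracy:", np.mean(best.predict(X) == y))
```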

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in the area of data mining is the size of the data, as most datasets have huge volumes these days. Streams of data are normally accumulated into data storage or databases. Transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some datasets are simply buried unused inside huge data storage due to their size; others are lost as soon as they are created because, for many reasons, they are not saved. How to use such large data, and how to use data on a stream efficiently, are challenging questions in the study of data mining. Stream data is a dataset that is accumulated into data storage from a data source continuously, and its size in many cases becomes increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time. These unique characteristics of stream data make it difficult and expensive to store all the stream datasets accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from a dataset in the stream, and this rule set is accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space compared to the traditional method, which saves the whole dataset. Another advantage is that the accumulated rule set is used as a prediction model: a prompt response to requests from users is possible at any time, as the rule set is always ready to be used to make decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, a combination of many different models can produce a prediction model with better performance. The consolidated rule set actually covers all the data, while the traditional sampling approach covers only part of the whole dataset. This study uses stock market data, which form a heterogeneous dataset whose characteristics vary over time. The indexes in stock market data can fluctuate in different situations whenever an event influences the stock market index; therefore, the variance of the values in each variable is large compared to that of a homogeneous dataset. Prediction with a heterogeneous dataset is naturally much more difficult than with a homogeneous one, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the suggested method. The first approach induces a rule set from the most recent data to predict a new dataset; the second induces a rule set from all the data accumulated from the beginning every time a new dataset has to be predicted. We found that neither of these is as good in performance as the accumulated rule set method. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model using only the more important rule sets, while the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. This method has the limitation that its application was bounded to stock market data; a more dynamic real-time stream dataset is desirable for further applications. Another open problem is that, as the number of rules increases over time, the method has to efficiently manage problematic rules, such as redundant or conflicting ones.
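The accumulated-rule-set idea above can be sketched as a small data structure: rules mined from each stream window are merged into a master set whose weights are updated from observed accuracy, so the model is always ready to predict. The rule representation, weighting scheme, and stubbed rule miner below are all illustrative assumptions, not the study's actual algorithm.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # e.g. lambda r: r["index_change"] > 0
    label: str                          # predicted class, e.g. "up" / "down"
    weight: float = 1.0                 # updated from observed accuracy

master_rules: List[Rule] = []

def mine_rules(window: List[dict]) -> List[Rule]:
    # Stand-in for a real rule induction algorithm run on the latest window;
    # it returns one illustrative rule as a placeholder.
    return [Rule(lambda r: r.get("index_change", 0) > 0, "up")]

def update(window: List[dict]) -> None:
    # Reweight existing rules by their hit rate on the new window, then
    # merge freshly mined rules; the raw window itself is not stored.
    for rule in master_rules:
        hits = [r for r in window if rule.condition(r)]
        if hits:
            acc = sum(r["label"] == rule.label for r in hits) / len(hits)
            rule.weight = 0.9 * rule.weight + 0.1 * acc  # smoothed update
    master_rules.extend(mine_rules(window))

def predict(record: dict) -> str:
    # Weighted vote of all matching rules in the master set.
    votes: dict = {}
    for rule in master_rules:
        if rule.condition(record):
            votes[rule.label] = votes.get(rule.label, 0.0) + rule.weight
    return max(votes, key=votes.get) if votes else "unknown"

update([{"index_change": 1.2, "label": "up"},
        {"index_change": -0.4, "label": "down"}])
print(predict({"index_change": 0.8}))  # -> "up"
```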

A Study on Advancing Strategy for National Environmental Geographic Informations - Focused on the National Environmental Assessment Map, Ecological Map and Land Cover Map - (국토환경지리정보 고도화 전략 연구 - 국토환경성평가지도, 생태자연도, 토지피복지도를 중심으로 -)

  • Lee, Chong-Soo
    • Journal of Environmental Policy
    • /
    • v.6 no.2
    • /
    • pp.97-122
    • /
    • 2007
  • In 2006, the Ministry of Environment of the Republic of Korea completed the construction of national environmental geographic information, including the National Environmental Assessment Map, the Ecological Map, the Land Cover Map, and so on. At this point, it is necessary to establish an advancement strategy for national environmental geographic information that takes its complicated characteristics into account. Therefore, this study suggests such a strategy, reflecting the results of an analysis of the given conditions and the trends in informatization. National environmental geographic information has the particular characteristic of being managed in a dispersed manner, by department or by operational unit. Because of this, the requirements of users who need policy-oriented and complex information based on national environmental geographic information are not satisfied, and information systems centered on administrative processes should be converted into systems that put decision support first. Accordingly, this study sets up "the realization of a sustainable land management system by advancing national environmental geographic information" as the vision of the advancement strategy. To accomplish this vision, the study establishes the following objectives: constructing strategic, knowledge-based geographic information; arranging the foundation for opening information to the public transparently; building expanded and integrated national environmental geographic information; embodying environmental administration based on national environmental geographic information; enhancing the efficiency of national environmental geographic information; and efficiently supporting administrative processes. The study then suggests executive plans to achieve the vision and objectives: developing a quality control program to verify the reliability of the information; building a system to integrate and provide environmental information; collecting information; readjusting laws and regimes concerning the construction, application, and management of the system; and operating the task processes, human resources, organization, and information technology. This study emphasizes providing a policy turning point for ensuring confidence in the quality of information, advancing beyond its quantitative expansion. However, this improvement strategy does not reflect all national environmental geographic information or the current status of environmental administration. Therefore, to apply the results of this study to actual environmental administration, it is necessary to regularly discuss the systematic categorization of national environmental geographic information, to interview the parties concerned, and so on.
