• Title/Summary/Keyword: If-then Rule


The game of Rock-Paper Scissors between two teams (두 팀 간에 벌이는 가위바위보게임에 관한 연구)

  • Cho, Daehyeon
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.2
    • /
    • pp.277-289
    • /
    • 2019
  • We often use a coin toss or the game of Rock-Paper-Scissors before the main game to determine which team will begin first, and Rock-Paper-Scissors can be used effectively to choose one out of two, or one out of many. The two teams may consist of different numbers of players. In this paper we consider the following rule: after each game, if the winners all belong to one team, that team wins; otherwise the winners go on to the next game, and this continues until the winning team is decided. Under this rule we derive the means and variances of the total number of games played until the winning team is decided.
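
The rule above is easy to check by simulation. Below is a minimal Monte Carlo sketch in Python, assuming standard three-gesture rock-paper-scissors, uniformly random throws, and example team sizes of 3 and 2 (none of which are specified in the abstract), that estimates the mean and variance of the number of games until only one team's players remain:

```python
import random
from statistics import mean, variance

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def play_until_one_team(team_sizes=(3, 2), rng=random.Random(0)):
    """Return the number of games played until only one team's players remain."""
    players = [0] * team_sizes[0] + [1] * team_sizes[1]  # team label per remaining player
    games = 0
    while len(set(players)) > 1:
        games += 1
        throws = [rng.choice(list(BEATS)) for _ in players]
        gestures = set(throws)
        if len(gestures) != 2:          # one or three distinct gestures: a draw, everyone replays
            continue
        a, b = gestures
        winner = a if BEATS[a] == b else b
        players = [t for t, g in zip(players, throws) if g == winner]
    return games

samples = [play_until_one_team() for _ in range(20_000)]
print(f"estimated mean = {mean(samples):.3f}, variance = {variance(samples):.3f}")
```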

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is highly crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, mainly because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, and to increase accuracy and decrease elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using context history. Then a pattern consisting of the results of reasoning with the individual rules is built for pattern learning. If at least one context property matches, say R, the pattern is regarded as right. If the pattern is new, it is added as a right pattern with the mismatched properties set to 0, freq = 1 and w(R, 1); otherwise, the frequency of the matched right pattern is increased by 1 and w(R, freq) is set. After training, if the frequency is greater than a threshold value, the right pattern is saved in the knowledge base. On the other hand, if at least one context property matches, say W, the pattern is regarded as wrong. If the pattern is new, the result is changed to the wrong answer and the pattern is added as a wrong pattern with frequency 1 and w(W, 1); otherwise, the matched wrong pattern's frequency is increased by 1 and w(W, freq) is set. After training, if the frequency is greater than a threshold level, the wrong pattern is saved in the knowledge base. Context prediction is then performed with the combined rules as follows: first, identify the current context. Second, look for a matching pattern among the right patterns. If none is found, look for a matching pattern among the wrong patterns. If still no matching pattern is found, choose the single context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea; 400 context records were collected in 2009. We then randomly selected 70% of the records as training data, and the rest were used as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and we compared the performance with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that the methodology is relatively valid and scalable. As a second round of the experiment, we compared a full model to a partial model. The full model uses both right and wrong patterns to reason about the future context, whereas the partial model performs the reasoning only with right patterns, as is generally done in the legacy alignment-prediction method. It turned out that the full model is better in terms of accuracy, while the partial model is better in terms of elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among the users. To mitigate this concern, we excluded context properties such as the date of the tour and user profile attributes such as gender and age; the outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods, in terms of prediction accuracy and service time, by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method successfully can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Our future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
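
As a rough illustration of the right/wrong pattern bookkeeping described above, the following sketch records how often each reasoned pattern turns out right or wrong during training, keeps patterns above a frequency threshold, and prefers right-pattern matches over wrong-pattern matches at prediction time. The data structures, exact-match criterion, and fallback are illustrative assumptions, not the authors' implementation (which also weights partial matches with w(R, freq) and w(W, freq)):

```python
from collections import defaultdict

def train_patterns(training_patterns, labels, threshold=2):
    """Count how often each reasoned pattern turns out right or wrong,
    then keep only patterns whose frequency exceeds the threshold."""
    freq = defaultdict(lambda: {"R": 0, "W": 0})
    for pattern, correct in zip(training_patterns, labels):
        key = tuple(pattern)               # pattern = results of the per-attribute individual rules
        freq[key]["R" if correct else "W"] += 1
    right = {p for p, c in freq.items() if c["R"] > threshold}
    wrong = {p for p, c in freq.items() if c["W"] > threshold}
    return right, wrong

def predict(current_pattern, right, wrong, fallback):
    """Prefer a matching right pattern, then a matching wrong pattern,
    otherwise fall back to the single most predictable context property."""
    key = tuple(current_pattern)
    if key in right:
        return "right-pattern match"
    if key in wrong:
        return "wrong-pattern match"
    return fallback(current_pattern)

right, wrong = train_patterns([("rainy", "weekend"), ("sunny", "weekday")], [True, False], threshold=0)
print(predict(("rainy", "weekend"), right, wrong, fallback=lambda p: "best single property"))
```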

Two-Daughter Problem and Selection Effect (두 딸 문제와 선택 효과)

  • Kim, Myeongseok
    • Korean Journal of Logic
    • /
    • v.19 no.3
    • /
    • pp.369-400
    • /
    • 2016
  • If we learn that 'Mrs Lee has two children and at least one of them is a daughter', what is our credence that both of her children are girls? Obviously it is 1/3. By assuming some other apparently obvious theses, it seems one can argue that our credence should be 1/2. Also, merely by supposing that we learn trivial information about the future, it seems one can argue that we must change our credence from 1/3 to 1/2. However, all of these arguments are fallacious and cannot be sound. When using the conditionalization rule to evaluate the confirmation of a hypothesis by evidence, or to estimate the change in credence after taking in information, there are some points to keep in mind. We must examine whether the relevant information was given through a random procedure or a biased one. If someone with full information releases to us a particular piece of partial information, an observation, a testimony, or a piece of evidence selected intentionally by him, which means the partial information was not given by chance, accidentally, or naturally, then the conditionalization rule should be employed very cautiously or in a restricted way.
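
The 1/3 credence cited above follows from conditioning on the equally likely sex orderings of two children; a short Python enumeration, assuming each child is independently and equally likely to be a girl or a boy, makes the calculation explicit:

```python
from itertools import product

# all equally likely orderings of two children's sexes
outcomes = list(product("GB", repeat=2))          # ('G','G'), ('G','B'), ('B','G'), ('B','B')

at_least_one_girl = [o for o in outcomes if "G" in o]
both_girls = [o for o in at_least_one_girl if o == ("G", "G")]

# P(both girls | at least one girl) = 1/3
print(len(both_girls) / len(at_least_one_girl))   # 0.333...
```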

A Study on Developing a Knowledge-based Database Program for Gas Facility Accident Analysis (가스시설 사고원인 해석을 위한 지식 데이터베이스 프로그램 개발)

  • Kim Min Seop;Im Cha Soon;Lee Jin Han;Park Kyo Shik;Ko Jae Wook
    • Journal of the Korean Institute of Gas
    • /
    • v.4 no.4 s.12
    • /
    • pp.65-70
    • /
    • 2000
  • We developed a database program for accident cause analysis that can help improve domestic safety culture, prevent the recurrence of gas accidents, and make accident analysis easier. The program developed in this study consists of two parts. One part uses an accident case database to which if-then rules are applied, so it finds root causes by inference from given input values. The other uses a Root Cause Analysis Map, which separates human errors from equipment failures, so that a general root cause is obtained by answering some appropriate questions.
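
A minimal sketch of the forward-chaining "if-then" inference that the first part of such a program might perform; the rule contents and observation names below are invented for illustration and are not taken from the paper:

```python
# Hypothetical if-then rules: each maps a set of observed facts to a candidate root cause.
RULES = [
    ({"gas smell", "corroded pipe"}, "leak from pipe corrosion"),
    ({"gas smell", "loose fitting"}, "leak at a loose connection"),
    ({"no gas smell", "regulator failure"}, "pressure regulator malfunction"),
]

def infer_root_causes(observations):
    """Return every root cause whose 'if' part is satisfied by the input observations."""
    return [cause for conditions, cause in RULES if conditions <= set(observations)]

print(infer_root_causes(["gas smell", "corroded pipe", "outdoor installation"]))
# -> ['leak from pipe corrosion']
```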

An Interpretable Log Anomaly System Using Bayesian Probability and Closed Sequence Pattern Mining (베이지안 확률 및 폐쇄 순차패턴 마이닝 방식을 이용한 설명가능한 로그 이상탐지 시스템)

  • Yun, Jiyoung;Shin, Gun-Yoon;Kim, Dong-Wook;Kim, Sang-Soo;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.22 no.2
    • /
    • pp.77-87
    • /
    • 2021
  • With the development of the Internet and personal computers, various and complex attacks have begun to emerge. As attacks become more complex, signature-based detection becomes difficult, which has led to research on behavior-based log anomaly detection. Recent work utilizes deep learning to learn the order of log events and shows good performance. Despite its good performance, it does not provide any explanation for its predictions. This lack of explanation makes it hard to detect contamination of the data or vulnerabilities of the model itself, and as a result users lose trust in the model. To address this problem, this work proposes an explainable log anomaly detection system. In this study, log parsing is performed first. Afterward, sequential rules are extracted using Bayesian posterior probability, yielding a rule set of the form "if condition then result, with posterior probability". If a sample matches the rule set, it is normal; otherwise, it is an anomaly. We use HDFS datasets for the experiment, achieving an F1 score of 92.7% on the test dataset.
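
A toy version of the rule-matching step described above, using made-up rules of the form (condition sequence, result, posterior probability); a log sequence that no sufficiently confident rule explains is flagged as an anomaly. The rule extraction itself (closed sequential pattern mining plus Bayesian posteriors) is not reproduced here:

```python
# Hypothetical "if condition then result, posterior probability" rules
RULES = [
    (("open", "read"), "close", 0.97),
    (("open", "write"), "close", 0.94),
]

def is_anomalous(sequence, rules=RULES, min_posterior=0.9):
    """A sequence is treated as normal if some sufficiently confident rule
    matches its prefix and predicts its final event; otherwise it is anomalous."""
    *prefix, last = sequence
    for condition, result, posterior in rules:
        if tuple(prefix) == condition and last == result and posterior >= min_posterior:
            return False
    return True

print(is_anomalous(["open", "read", "close"]))    # False: matches a rule
print(is_anomalous(["open", "read", "delete"]))   # True: no rule explains it
```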

A Study on Identification of Optimal Fuzzy Model Using Genetic Algorithm (유전알고리즘을 이용한 최적 퍼지모델의 동정에 관한연구)

  • 김기열
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.10 no.2
    • /
    • pp.138-145
    • /
    • 2000
  • An identification algorithm that finds the optimal fuzzy membership functions and rule base for a fuzzy model is proposed, and a fuzzy controller is designed to obtain more accurate position and velocity control of a wheeled mobile robot. The procedure is composed of three steps, each with its own unique process. In the first step, the elements of the output term set are increased, and the rule base is then varied according to the added elements. The adjusted system competes with the system that does not include any added elements; the adjusted system is discarded if it loses, and otherwise the control system is replaced with the adjusted system. After the regulation of the output term set and rule base is finished, the search for input membership functions is carried out under constraints, and fine tuning of the output membership functions is performed.
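
The competition step described above amounts to an accept-or-reject loop: the adjusted system, with its enlarged output term set and rule base, replaces the current system only if it performs better. A schematic sketch follows, in which the fitness function and adjustment operator are placeholders assumed for illustration rather than the paper's actual operators:

```python
def identify(initial_system, adjust, fitness, max_steps=50):
    """Repeatedly propose an adjusted system (e.g. with more output terms and rules)
    and keep it only if it beats the current system; otherwise discard it."""
    current = initial_system
    current_score = fitness(current)
    for _ in range(max_steps):
        candidate = adjust(current)          # enlarge output term set / rule base
        candidate_score = fitness(candidate)
        if candidate_score > current_score:  # the candidate wins the competition
            current, current_score = candidate, candidate_score
    return current

# toy usage: the "system" is just a rule count, and fitness peaks at 7 rules
best = identify(1, adjust=lambda n: n + 1, fitness=lambda n: -abs(n - 7))
print(best)  # 7
```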

Non-linear regression model considering all association thresholds for decision of association rule numbers (기본적인 연관평가기준 전부를 고려한 비선형 회귀모형에 의한 연관성 규칙 수의 결정)

  • Park, Hee Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.2
    • /
    • pp.267-275
    • /
    • 2013
  • Among data mining techniques, the association rule is the most recently developed technique, and it finds the relevance between two items in a large database. It is applied directly in the field because it clearly quantifies the relationship between two or more items. When we determine whether an association rule is meaningful, we utilize interestingness measures such as support, confidence, and lift. Interestingness measures are meaningful in that they provide statistical or logical grounds for pruning uninteresting rules. However, the criteria for these measures are chosen by experience, and the number of useful rules is hard to estimate: if too many rules are generated, we cannot effectively extract the useful ones. In this paper, we designed a variety of non-linear regression equations, considering all association thresholds, between the number of rules and the three interestingness measures. We then diagnosed multicollinearity and autocorrelation problems, and used analysis-of-variance results and adjusted coefficients of determination to choose the best model through numerical experiments.
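
A hedged sketch of the kind of model the paper explores: regressing the number of generated rules on the three thresholds using a nonlinear (here, log-linear) form. The functional form and the synthetic data points below are assumptions for illustration only:

```python
import numpy as np

# synthetic example: rule counts observed at various (support, confidence, lift) thresholds
thresholds = np.array([
    [0.01, 0.5, 1.0],
    [0.02, 0.5, 1.2],
    [0.05, 0.6, 1.2],
    [0.10, 0.7, 1.5],
    [0.20, 0.8, 2.0],
])
rule_counts = np.array([950, 620, 280, 90, 15])

# log-linear model: log(count) = b0 + b1*support + b2*confidence + b3*lift
X = np.column_stack([np.ones(len(thresholds)), thresholds])
coef, *_ = np.linalg.lstsq(X, np.log(rule_counts), rcond=None)

predicted = np.exp(X @ coef)
print(np.round(coef, 2))        # fitted coefficients
print(np.round(predicted))      # predicted rule counts at the same thresholds
```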

A Multi-Phase Decision Making Model for Supplier Selection Under Supply Risks (공급 리스크를 고려한 공급자 선정의 다단계 의사결정 모형)

  • Yoo, Jun-Su;Park, Yang-Byung
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.40 no.4
    • /
    • pp.112-119
    • /
    • 2017
  • Selecting suppliers in a global supply chain is a very difficult and complicated decision-making problem, particularly due to the various types of supply risk in addition to the uncertain performance of the potential suppliers. This paper proposes a multi-phase decision making model for supplier selection under supply risks in global supply chains. In the first phase, the model suggests supplier selection solutions suitable to a given decision-making condition using a rule-based expert system. The expert system consists of a knowledge base of supplier selection solutions and an "if-then" rule-based inference engine. The knowledge base contains information about options and their consistency for seven characteristics of 20 supplier selection solutions chosen from articles published in SCIE journals since 2010. In the second phase, the model computes the potential suppliers' general performance indices using the technique for order preference by similarity to ideal solution (TOPSIS), based on the scores obtained by applying the suggested solutions. In the third phase, the model computes their risk indices using TOPSIS, based on their historical and predicted scores obtained by applying a risk evaluation algorithm. The evaluation algorithm deals with seven types of supply risk that significantly affect a supplier's performance and eventually influence the buyer's production plan. In the fourth phase, the model selects Pareto-optimal suppliers based on their general performance and risk indices. An example demonstrates the implementation of the proposed model. The proposed model provides supply chain managers with a practical tool to effectively select the best suppliers while considering supply risks as well as general performance.
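
A compact TOPSIS sketch of the index computation used in the second and third phases, assuming equal weights and treating every criterion as a benefit criterion; the supplier scores below are illustrative, not the paper's data:

```python
import numpy as np

def topsis(scores, weights=None):
    """Rank alternatives by closeness to the ideal solution (all criteria treated as benefits)."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(scores.shape[1], 1 / scores.shape[1])
    # vector-normalise each criterion column, then apply the weights
    norm = scores / np.linalg.norm(scores, axis=0)
    weighted = norm * weights
    ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
    d_plus = np.linalg.norm(weighted - ideal, axis=1)
    d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)   # closeness index in [0, 1], higher is better

# three candidate suppliers scored on quality, delivery, and cost-competitiveness
print(np.round(topsis([[7, 9, 6], [8, 7, 8], [6, 8, 9]]), 3))
```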

A Study on the Rule for Creation of the Pattern Language of Christopher Alexander (크리스토퍼 알렉산더의 패턴언어 생성규칙에 관한 연구)

  • Jung, Sung-Wook;Kim, Moon-Duck
    • Korean Institute of Interior Design Journal
    • /
    • v.26 no.1
    • /
    • pp.75-82
    • /
    • 2017
  • This study reviews the process of creating patterns through Christopher Alexander's books in order to discover the fundamental rules for the creation of the pattern language. The essential ideas of the 11 rules describing the characteristics of the pattern language are organized by keyword depending on the characteristics of each rule. This study then analyzes which keywords were applied most importantly and how they developed chronologically across Alexander's books. As a result, 5 keywords - reflection of cultural difference, reflection of human desires, solving the repeated problem, function suitable for principal purpose, and network structure - are applied in his early books, in which the pattern language was theoretically developed, the patterns of traditional society were discovered, and the network structure was developed. Another 5 keywords - user participation method, new problem solving, structure preserving transformation, post-mechanization method, and central invariant structure - are applied in the books of his middle period, after completion of the pattern theory, which discover new patterns for contemporary society and apply the pattern language to time and space. In his later books, which organize the theory of the pattern language and suggest directions for using it, 5 keywords - wholeness, post-mechanization method, user participation method, new problem solving, and structure preserving transformation - are applied. Users may apply the pattern language more precisely if they consider the keywords of the early period when searching for patterns of an existing environment, the keywords of the middle period when searching for patterns of a new environment or with regard to time and space, and the keywords of the later period when considering the direction of the application of the pattern language.

Evaluation of The Neck Mass (경부종물의 진단)

  • Song, Kei-Won;Yoon, Seok-Keun;Choi, Byung-Heun
    • Journal of Yeungnam Medical Science
    • /
    • v.3 no.1
    • /
    • pp.1-11
    • /
    • 1986
  • As public awareness of the various warning signs of malignancy increases, so does the concern evoked by a self-identified mass in the head and neck area. Not all palpable masses are significantly abnormal, but any nontender mass, especially in an adult, is significant enough to warrant full further investigation and follow-up, the object of which should be to determine the possibility of malignancy and the urgency of treatment. The approach to the diagnosis of a neck mass is important in that it affects decisions regarding further evaluation, leads to the determination of the most efficacious mode of therapy, and eventually affects the prognosis. It should therefore be emphasized that the approach to the diagnosis of a neck mass should be planned, systematic, and thorough. This begins with taking a careful history, followed by a complete examination of the head and neck, especially of the nasopharynx, tongue base, pyriform sinus, palatine tonsil, and larynx. A number of laboratory and radiologic studies are then available, followed by triple endoscopy under general anesthesia and blind biopsy if needed. The most important rule to keep is that any biopsy procedure should be delayed until the last step of the diagnostic effort, and if it must be done, it should be done with a plan for radical neck dissection.
