• Title/Summary/Keyword: two order rule


FLOW SHOP SCHEDULING JOBS WITH POSITION-DEPENDENT PROCESSING TIMES

  • WANG JI-BO
    • Journal of applied mathematics & informatics
    • /
    • v.18 no.1_2
    • /
    • pp.383-391
    • /
    • 2005
  • The paper is devoted to flow shop scheduling problems in which job processing times are defined by functions of their positions in the schedule. An example is constructed to show that the classical Johnson's rule does not yield an optimal solution for two different models of two-machine flow shop scheduling to minimize makespan. To solve the makespan minimization problem in the two-machine flow shop, we suggest Johnson's rule as a heuristic algorithm, for which the worst-case bound is calculated. We find polynomial-time solutions to some special cases of the considered problems for the following optimization criteria: the weighted sum of completion times and maximum lateness. Some further extensions of the problems are also discussed.
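As context for the abstract above, here is a minimal sketch of the classical Johnson's rule for the two-machine flow shop with fixed processing times, the baseline that the paper shows to be non-optimal under position-dependent times; the job data are invented for illustration.

```python
# Johnson's rule for the classical F2||Cmax problem: jobs that are fast on
# machine 1 go first, jobs that are fast on machine 2 go last.

def johnsons_rule(jobs):
    """jobs: list of (name, p1, p2) processing times on machines 1 and 2."""
    front, back = [], []
    for name, p1, p2 in sorted(jobs, key=lambda j: min(j[1], j[2])):
        if p1 <= p2:
            front.append(name)    # fast on machine 1: schedule early
        else:
            back.insert(0, name)  # fast on machine 2: schedule late
    return front + back

def makespan(jobs, order):
    times = {name: (p1, p2) for name, p1, p2 in jobs}
    end1 = end2 = 0
    for name in order:
        p1, p2 = times[name]
        end1 += p1                   # machine 1 processes jobs back to back
        end2 = max(end1, end2) + p2  # machine 2 waits for the job and itself
    return end2

jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2), ("D", 6, 6)]
order = johnsons_rule(jobs)
print(order, makespan(jobs, order))  # ['C', 'A', 'D', 'B'] 18
```

Under position-dependent processing times, the same procedure can still be run on the base times, which is the heuristic use the abstract describes.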

Effect of Market Basket Size on the Accuracy of Association Rule Measures (장바구니 크기가 연관규칙 척도의 정확성에 미치는 영향)

  • Kim, Nam-Gyu
    • Asia pacific journal of information systems
    • /
    • v.18 no.2
    • /
    • pp.95-114
    • /
    • 2008
  • Recent interest in data mining results from the expansion of the amount of business data and the growing business need to extract valuable knowledge from the data and utilize it in the decision-making process. In particular, recent advances in association rule mining techniques enable us to acquire knowledge concerning sales patterns among individual items from voluminous transactional data. Certainly, one of the major purposes of association rule mining is to utilize the acquired knowledge in devising marketing strategies such as cross-selling, sales promotion, and shelf-space allocation. In spite of the potential applicability of association rule mining, unfortunately, it is not often the case that the marketing mix acquired from data mining leads to realized profit. The main difficulty of mining-based profit realization lies in the fact that tremendous numbers of patterns are discovered by association rule mining. Because of the many patterns, data mining experts must perform additional mining on the results of the initial mining in order to extract only actionable and profitable knowledge, which consumes much time and cost. In the literature, a number of interestingness measures have been devised for evaluating discovered patterns. Most of the measures can be calculated directly from what is known as a contingency table, which summarizes the sales frequencies of exclusive items or itemsets. A contingency table can provide brief insights into the relationship between two or more itemsets of concern. However, it is important to note that some useful information concerning sales transactions may be lost when a contingency table is constructed. For instance, information regarding the size of each market basket (i.e., the number of items in each transaction) cannot be described in a contingency table. It is natural that a larger basket tends to contain more sales patterns. Therefore, if two itemsets are sold together in a very large basket, it can be expected that the basket contains two or more patterns and that the two itemsets belong to mutually different patterns. Accordingly, frequent itemsets should be classified into two categories, inter-pattern co-occurrence and intra-pattern co-occurrence, and the effect of market basket size on the two categories should be investigated. This notion implies that any interestingness measure for association rules should consider not only the total frequency of the target itemsets but also the size of each basket. There have been many attempts to analyze interestingness measures in the literature. Most of them have conducted qualitative comparisons among measures: they proposed desirable properties of interestingness measures and then surveyed how many properties are satisfied by each measure. However, relatively little attention has been paid to evaluating how valuable the patterns discovered by each measure actually are in the real world. In this paper, we propose two notions regarding association rule measures. First, a quantitative criterion for estimating the accuracy of association rule measures is presented. According to this criterion, a measure can be considered accurate if it assigns high scores to meaningful patterns that actually exist and low scores to arbitrary patterns that co-occur by coincidence. Next, complementary measures are presented to improve the accuracy of traditional association rule measures. By adopting the factor of market basket size, the devised measures attempt to discriminate the co-occurrence of itemsets in a small basket from a co-occurrence in a large basket. Intensive computer simulations under various workloads were performed in order to analyze the accuracy of various interestingness measures, including traditional measures and the proposed measures.
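To make the contingency-table discussion above concrete, here is a minimal sketch of the standard measures computed from a 2x2 table, together with one illustrative basket-size-weighted variant; the weighting below is a generic stand-in for the paper's complementary measures, not the authors' actual definition.

```python
def measures(n11, n10, n01, n00):
    """Standard measures from a 2x2 contingency table.
    n11: transactions with both X and Y; n10: X only; n01: Y only; n00: neither."""
    n = n11 + n10 + n01 + n00
    support = n11 / n
    confidence = n11 / (n11 + n10)         # P(Y | X)
    lift = confidence / ((n11 + n01) / n)  # P(Y | X) / P(Y)
    return support, confidence, lift

def size_weighted_support(transactions, x, y):
    """Illustrative adjustment: a co-occurrence in a small basket counts as
    stronger evidence of a genuine pattern than one in a large basket."""
    weight = sum(1 / len(t) for t in transactions if x in t and y in t)
    return weight / len(transactions)

transactions = [{"bread", "butter"},
                {"bread", "butter", "milk", "eggs", "beer"},
                {"milk"},
                {"bread", "butter"}]
print(measures(n11=3, n10=0, n01=0, n00=1))                    # (0.75, 1.0, 1.33...)
print(size_weighted_support(transactions, "bread", "butter"))  # 0.3
```

Note how the plain support (0.75) treats all three co-occurrences alike, while the weighted variant discounts the one found in the five-item basket.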

A Multibit Tree Bitmap based Packet Classification (멀티 비트 트리 비트맵 기반 패킷 분류)

  • 최병철;이정태
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.3B
    • /
    • pp.339-348
    • /
    • 2004
  • Packet classification is an important factor in supporting various services such as QoS guarantees and VPNs for Internet users. Packet classification is a search process for the best matching rule in rule tables, employing multiple fields such as the source address, protocol, and port number as well as the destination address in the IP header. In this paper, we propose a hardware-based packet classification algorithm employing a tree bitmap of multi-bit tries. We divide the prefixes of the search fields and rules into multi-bit strides and perform rule search with multi-bit chunks of fixed size. The proposed scheme can reduce the number of memory accesses required for rule search by employing an indexing key over a fixed number of upper bits of the rule prefixes. We also employ marker prefixes in order to remove backtracking during rule search. In this paper, we generate two-dimensional random rule sets of source and destination addresses using routing tables provided by the IPMA Project, and compare memory usage and performance.
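As background for the abstract above, here is a minimal sketch of longest-prefix matching in a fixed-stride multibit trie, the structure that tree bitmap compresses; real tree bitmap nodes pack children and rules into bitmaps, while this sketch uses plain dictionaries to keep the stride-by-stride search visible.

```python
STRIDE = 4  # bits consumed per trie level

class Node:
    def __init__(self):
        self.children = {}  # stride-bit chunk -> Node
        self.rule = None    # rule attached at this depth, if any

def insert(root, prefix_bits, rule):
    """prefix_bits: '0'/'1' string; short prefixes are expanded to stride
    multiples (controlled prefix expansion). Insert shorter prefixes first
    so that longer ones overwrite them, preserving precedence."""
    pad = (-len(prefix_bits)) % STRIDE
    for tail in range(2 ** pad):
        bits = prefix_bits + format(tail, f"0{pad}b") if pad else prefix_bits
        node = root
        for i in range(0, len(bits), STRIDE):
            node = node.children.setdefault(bits[i:i + STRIDE], Node())
        node.rule = rule

def lookup(root, addr_bits):
    """Walk stride-sized chunks, remembering the last rule seen, which is
    the longest match when insertion respected prefix-length order."""
    node, best = root, None
    for i in range(0, len(addr_bits), STRIDE):
        node = node.children.get(addr_bits[i:i + STRIDE])
        if node is None:
            break
        if node.rule is not None:
            best = node.rule
    return best

root = Node()
insert(root, "1010", "rule-A")           # 4-bit prefix
insert(root, "10101100", "rule-B")       # 8-bit prefix
print(lookup(root, "1010110011110000"))  # rule-B (longest match)
```

Two-field classification as in the paper would run one such search per field (source and destination prefix) and combine the matching rule sets.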

Statistical design of Shewhart control chart with runs rules (런 규칙이 혼합된 슈와르트 관리도의 통계적 설계)

  • Kim, Young-Bok;Hong, Jung-Sik;Lie, Chang-Hoon
    • Journal of Korean Society for Quality Management
    • /
    • v.36 no.3
    • /
    • pp.34-44
    • /
    • 2008
  • This research proposes a design method based on the statistical characteristics of the Shewhart control chart incorporating the 2-of-2 and 2-of-3 runs rules, respectively. A Markov chain approach is employed in order to calculate the in-control and out-of-control average run lengths (ARL). Two different control limit coefficients, one for the Shewhart scheme and one for the runs rule scheme, are derived simultaneously to minimize the out-of-control average run length subject to a reasonable in-control average run length. Numerical examples show that the statistical performance of the hybrid control scheme is superior to that of the original Shewhart control chart.
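For illustration, here is a minimal sketch of the Markov chain ARL computation the abstract refers to, for a chart that signals either on one point beyond the Shewhart limit k1 or on two consecutive points beyond a warning limit k2 on the same side (2-of-2 rule); the limits and shift values are illustrative, not the paper's optimized coefficients.

```python
import numpy as np
from scipy.stats import norm

def arl(shift, k1=3.0, k2=1.5):
    """Average run length of the combined Shewhart + 2-of-2 scheme for a
    standardized statistic with the given mean shift (in sigma units)."""
    p_sig = norm.cdf(-k1 - shift) + norm.sf(k1 - shift)   # beyond the Shewhart limit
    p_up = norm.sf(k2 - shift) - norm.sf(k1 - shift)      # upper warning zone (k2, k1]
    p_dn = norm.cdf(-k2 - shift) - norm.cdf(-k1 - shift)  # lower warning zone [-k1, -k2)
    p_mid = 1.0 - p_sig - p_up - p_dn                     # central zone
    # Transient states: 0 = last point central, 1 = last point in the upper
    # warning zone, 2 = last point in the lower one. A second consecutive
    # warning on the same side, or any point beyond k1, absorbs (signals).
    Q = np.array([[p_mid, p_up, p_dn],
                  [p_mid, 0.0,  p_dn],
                  [p_mid, p_up, 0.0]])
    # ARL = first component of (I - Q)^{-1} 1, starting from the central state.
    return np.linalg.solve(np.eye(3) - Q, np.ones(3))[0]

print(f"in-control ARL:         {arl(0.0):.1f}")
print(f"ARL at a 1-sigma shift: {arl(1.0):.1f}")
```

Design then amounts to searching over (k1, k2) for the pair minimizing the out-of-control ARL while holding the in-control ARL at or above its target.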

Modeling and Validation of Semantic Constraints for ebXML Business Process Specifications (ebXML 비즈니스 프로세스 명세를 위한 의미 제약의 모델링과 검증)

  • Kim, Jong-Woo;Kim, Hyoung-Do
    • Asia pacific journal of information systems
    • /
    • v.14 no.1
    • /
    • pp.79-100
    • /
    • 2004
  • As part of the ebXML (Electronic Business using eXtensible Markup Language) framework, the BPSS (Business Process Specification Schema) has been provided to support the direct specification of the set of elements required to configure a runtime system in order to execute a set of ebXML business transactions. The BPSS is available in two stand-alone representations, a UML version and an XML version. Due to the limitations of UML notation and XML syntax, however, the current ebXML BPSS specification fails to specify formal semantic constraints completely. In this study, we propose a constraint classification scheme for the BPSS specification and describe how to formally represent those semantic constraints using OCL (Object Constraint Language). As a way to validate a Business Process Specification (BPS) against the formal semantic constraints, we suggest a rule-based approach to representing the formal constraints and demonstrate its detailed mechanism for applying the rule-based constraints to the BPS with a prototype implementation.
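As a rough illustration of the rule-based validation idea, the sketch below encodes each semantic constraint as a named predicate over a parsed BPS and reports every violated rule; the constraints and all field names (transactions, collaborations, uses, requester) are hypothetical, not taken from the BPSS schema or the paper's OCL constraints.

```python
def declared_transactions(bps):
    return {t["name"] for t in bps.get("transactions", [])}

RULES = [
    ("referenced transactions must be declared",
     lambda bps: all(ref in declared_transactions(bps)
                     for c in bps.get("collaborations", [])
                     for ref in c.get("uses", []))),
    ("every transaction needs a requesting role",
     lambda bps: all("requester" in t for t in bps.get("transactions", []))),
]

def validate(bps):
    """Return the names of all violated constraints."""
    return [name for name, check in RULES if not check(bps)]

bps = {
    "transactions": [{"name": "PlaceOrder", "requester": "Buyer"}],
    "collaborations": [{"name": "Ordering", "uses": ["PlaceOrder", "CancelOrder"]}],
}
print(validate(bps))  # ['referenced transactions must be declared']
```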

Optical bench design rule formulated by statistical design of experiment (통계적 실험 계획법을 이용한 광학 벤치 설계 규칙의 설정)

  • 박세근;이재영;이승걸
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.10a
    • /
    • pp.123-127
    • /
    • 2002
  • In order to set up design rules for a micro optical bench, the optical coupling efficiencies of two sets of test benches are calculated. Simple linear connections of incoming and outgoing optical fibers, with and without ball lenses, are designed. Positional errors that are possible in actual fabrication processes are considered in the calculations, and their tolerances are determined from 3 dB conditions. For a simple fiber-to-fiber connection, the working distance is limited to 2.7 μm and the tilt error to 5.8°. When ball lenses are located in front of each fiber, the working distance can be extended beyond 60 μm, but the positional errors show strong interactions among the position parameters and thus should be considered simultaneously in tolerance design.
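As an aside on how a 3 dB tolerance is read off a computed coupling curve: sweep one positional error and find where the efficiency falls to half of its peak. The Gaussian mode-overlap model and the 5 μm mode radius below are generic stand-ins for the test-bench simulations, not the paper's model.

```python
import numpy as np

w = 5.0                                # hypothetical fiber mode radius, um
offsets = np.linspace(0.0, 10.0, 201)  # lateral offset sweep, um
eff = np.exp(-offsets**2 / w**2)       # power coupling of two identical Gaussian modes

half = eff[0] / 2.0                    # -3 dB relative to the aligned peak
tol = offsets[np.argmax(eff < half)]   # first offset below half power
print(f"3 dB lateral tolerance ~ {tol:.2f} um")  # analytic value: w*sqrt(ln 2) ~ 4.16
```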


Constructive Methods of Fuzzy Rules for Function Approximation

  • Maeda, Michiharu;Miyajima, Hiromi
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1626-1629
    • /
    • 2002
  • This paper describes novel methods to construct fuzzy inference rules with gradient descent. The methods provide a constructive mechanism for the rule unit that applies to two parameters: the central value and the width of the membership function in the antecedent part. The first approach is to create the rule unit at the nearest position in the input space, for the central value of the membership function in the antecedent part. The second is to create the rule unit with the minimum width, for the width of the membership function in the antecedent part. Experimental results are presented in order to show that the proposed methods are effective in terms of the inference error and the number of learning iterations.
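A minimal sketch of the kind of simplified fuzzy inference being constructed: Gaussian antecedent membership functions with tunable centers c and widths b, singleton consequents w, all updated by gradient descent, followed by one rule-creation step. Centering the new unit at the worst-fit training input is a plausible reading of the first method, not a faithful reproduction of the paper's criterion.

```python
import numpy as np

def infer(x, c, b, w):
    mu = np.exp(-((x - c) ** 2) / (2 * b ** 2))  # firing strength of each rule
    return (mu * w).sum() / mu.sum(), mu

def train(X, Y, c, b, w, eta=0.05, epochs=200):
    for _ in range(epochs):
        for x, y in zip(X, Y):
            out, mu = infer(x, c, b, w)
            g = mu / mu.sum()  # normalized firing strengths
            err = out - y
            w -= eta * err * g                                       # consequent update
            c -= eta * err * g * (w - out) * (x - c) / b ** 2        # center update
            b -= eta * err * g * (w - out) * (x - c) ** 2 / b ** 3   # width update
    return c, b, w

X = np.linspace(0, 1, 21)
Y = np.sin(2 * np.pi * X)
c, b, w = np.array([0.25, 0.75]), np.full(2, 0.2), np.zeros(2)
c, b, w = train(X, Y, c, b, w)

# Constructive step: add one rule unit centered at the worst-fit input.
errs = [abs(infer(x, c, b, w)[0] - y) for x, y in zip(X, Y)]
c = np.append(c, X[int(np.argmax(errs))])
b, w = np.append(b, 0.2), np.append(w, 0.0)
c, b, w = train(X, Y, c, b, w)
print(max(abs(infer(x, c, b, w)[0] - y) for x, y in zip(X, Y)))  # reduced max error
```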


Expert System for Intelligent Control-Based Job Scheduling in FMS (FMS 에서의 지능제어형 생산계획을 위한 전문가 시스템)

  • 정현호;이창훈;서기성;우광방
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.39 no.5
    • /
    • pp.527-537
    • /
    • 1990
  • This paper describes an intelligent control-based job scheduler, named ESIJOBS, for a flexible manufacturing system. In order to construct the rulebase of this system, traditional job scheduling rules in FMS are examined and evaluated. These results, together with repeated simulations under a graphic monitoring system, are used to form the rulebase of ESIJOBS, which is composed of three parts: six part selection rules, four machine center selection rules, and twenty-one metarules. Appropriate scheduling rule sets are selected according to this rulebase and the manufacturing system status. The performance of every simulation is affected by random breakdowns of major FMS components during the run. Six criteria are used to evaluate the performance of each schedule. The two modes of ESIJOBS are simulated and compared with 24 combinational rule-set simulations. In this comparison ESIJOBS dominated the other rule-set simulations and showed the best performance, particularly on three criteria.
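To illustrate the metarule idea, here is a minimal sketch in which the scheduler inspects the current shop status and switches the active dispatching rule; the two metarules, the thresholds, and the status fields are all invented for illustration and far simpler than the twenty-one metarules of ESIJOBS.

```python
def spt(queue):  # shortest processing time first: favors throughput
    return min(queue, key=lambda job: job["proc_time"])

def edd(queue):  # earliest due date first: favors due-date performance
    return min(queue, key=lambda job: job["due"])

def select_rule(status):
    """Metarule: switch to due-date-driven dispatching under tardiness
    pressure, otherwise keep throughput high with SPT."""
    if status["mean_slack"] < 0 or status["tardy_ratio"] > 0.3:
        return edd
    return spt

queue = [{"id": 1, "proc_time": 5, "due": 20},
         {"id": 2, "proc_time": 2, "due": 8}]
status = {"mean_slack": -1.5, "tardy_ratio": 0.1}  # shop under tardiness pressure
rule = select_rule(status)
print(rule.__name__, rule(queue)["id"])  # edd 2
```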

FUZZY-FILTER-BASED APPROACH TO RESTORATION OF THE OLD MOVIES

  • Hoshi, Tomohisa;Komatsu, Takashi;Saito, Takahiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.29-34
    • /
    • 1999
  • We present a practical method for removing blotches and restoring the missing data behind them. To detect blotches, we employ a robust approach based on local analysis of spatiotemporal anisotropic brightness continuity. Our approach uses first-order spatiotemporal directional derivatives to select the smoothest direction for each examined pixel, and outputs the incorruption probability that the examined pixel is not corrupted by blotches. As the restoration filter, we employ a spatiotemporal fuzzy filter whose response is adaptively controlled according to a fuzzy rule defined by the incorruption probability. The fuzzy filter is composed of two different filters, the identity filter and the spatiotemporal directional-weighted-mean filter, and outputs an intermediate value between the original input brightness and the directional-weighted-mean brightness. We design the fuzzy rule in advance by a standard supervised learning method. Computer simulations are presented.
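The core blending behaviour described above can be sketched in a few lines: with incorruption probability p per pixel, the output slides between the original brightness (identity filter) and a directional weighted mean of neighbours. The fixed horizontal neighbour set stands in for the smoothest direction chosen by the derivative analysis, and the linear blend stands in for the learned fuzzy rule.

```python
import numpy as np

def restore(frame, p, offsets=((0, -1), (0, 1))):
    """frame: 2D brightness array; p: per-pixel incorruption probability."""
    neighbours = np.stack([np.roll(frame, s, axis=(0, 1)) for s in offsets])
    dwm = neighbours.mean(axis=0)        # directional weighted mean
    return p * frame + (1.0 - p) * dwm   # identity where p ~ 1, smoothed where p ~ 0

frame = np.array([[10., 10., 10.],
                  [10., 99., 10.],       # blotch-like outlier
                  [10., 10., 10.]])
p = np.where(frame > 50, 0.0, 1.0)       # toy detector: bright spot = corrupted
print(restore(frame, p))                 # center pixel replaced by 10.0
```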

Time-discontinuous Galerkin quadrature element methods for structural dynamics

  • Liao, Minmao;Wang, Yupeng
    • Structural Engineering and Mechanics
    • /
    • v.85 no.2
    • /
    • pp.207-216
    • /
    • 2023
  • Three time-discontinuous Galerkin quadrature element methods (TDGQEMs) are developed for structural dynamic problems. The weak-form time-discontinuous Galerkin (TDG) statements, which are capable of capturing possible displacement and/or velocity discontinuities, are employed to formulate the three types of quadrature elements, i.e., single-field, single-field/least-squares and two-field. The Gauss-Lobatto quadrature rule and the differential quadrature analog are used to turn the weak-form TDG statements into a system of algebraic equations. The stability, accuracy, and numerical dissipation and dispersion properties of the formulated elements are examined. It is found that all the elements are unconditionally stable, that the order of accuracy equals either twice the element order minus one or twice the element order, and that the high-order elements possess the desired high numerical dissipation in the high-frequency range and low numerical dissipation and dispersion in the low-frequency range. Three fundamental numerical examples are investigated to demonstrate the effectiveness and high accuracy of the elements, as compared with commonly used time integration schemes.
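For reference, here is a minimal sketch of the Gauss-Lobatto rule the elements are built on: with n nodes on [-1, 1] including both endpoints, the interior nodes are the roots of P'_{n-1} and the weights are w_i = 2 / (n(n-1) P_{n-1}(x_i)^2), giving exactness up to degree 2n - 3.

```python
import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto(n):
    """Nodes and weights of the n-point Gauss-Lobatto rule on [-1, 1]."""
    coeffs = np.zeros(n)
    coeffs[-1] = 1.0  # Legendre-series coefficients of P_{n-1}
    interior = legendre.legroots(legendre.legder(coeffs))  # roots of P'_{n-1}
    nodes = np.concatenate(([-1.0], interior, [1.0]))
    weights = 2.0 / (n * (n - 1) * legendre.legval(nodes, coeffs) ** 2)
    return nodes, weights

nodes, weights = gauss_lobatto(4)
print(weights @ nodes ** 4, 2 / 5)  # degree 4 <= 2*4 - 3 = 5, so the rule is exact
```

Including both endpoints as nodes is convenient here because the TDG jump terms live at the element time boundaries.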