• Title/Abstract/Keywords: 2 of 2 runs rule

Search results: 16 items (processing time: 0.022 s)

런 규칙이 혼합된 슈와르트 관리도의 통계적 설계 (Statistical design of Shewhart control chart with runs rules)

  • 김영복;홍정식;이창훈
    • 품질경영학회지, Vol. 36, No. 3, pp. 34-44, 2008
  • This research proposes a design method based on the statistical characteristics of the Shewhart control chart incorporating the 2 of 2 and 2 of 3 runs rules, respectively. A Markov chain approach is employed to calculate the in-control and out-of-control average run lengths (ARLs). Two different control limit coefficients, for the Shewhart scheme and the runs rule scheme, are derived simultaneously to minimize the out-of-control average run length subject to a reasonable in-control average run length. Numerical examples show that the statistical performance of the hybrid control scheme is superior to that of the original Shewhart control chart.
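The Markov-chain ARL calculation described in the abstract can be illustrated for a bare 2-of-2 rule with a single limit (a minimal sketch of the technique only; the paper's design optimizes two control-limit coefficients, which this example omits):

```python
# Sketch: ARL of a 2-of-2 runs rule via a two-state Markov chain.
# p = probability that a single plotted point falls beyond the runs-rule limit.
# Transient states: 0 = last point inside the limit, 1 = last point outside.
# First-step analysis: L0 = 1 + (1-p)*L0 + p*L1 and L1 = 1 + (1-p)*L0,
# which solves to the closed form below.
def arl_2of2(p: float) -> float:
    return (1.0 + p) / (p * p)

# Example: p = 0.05 gives an in-control ARL of (1 + 0.05) / 0.05**2 = 420.
```

Smaller p (a tighter tail probability) lengthens the in-control ARL, which is the trade-off the statistical design balances against out-of-control detection speed.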

규칙을 가진 $\bar{X}$ 관리도에 관한 통람 ($\bar{X}$ Control Chart with Runs Rules: A Review)

  • 박진영;서순근
    • 품질경영학회지, Vol. 40, No. 2, pp. 176-185, 2012
  • Since the work of Derman and Ross (1997), which considered simple main runs rules and derived the ARL (Average Run Length) using Markov chain modeling, diverse alternative main and supplementary runs rules have been proposed for the $\bar{X}$ control chart, the most popular chart for monitoring the process mean. This paper reviews and discusses the state-of-the-art research on these runs rules and classifies it according to several properties of the rules. The ARL derivation for a proposed runs rule is also illustrated.

2중 2 런규칙을 사용한 공정이상 감지방법의 경제성 분석 (Economic Analysis for Detection of Out-of-Control of Process Using 2 of 2 Runs Rules)

  • 김영복;홍정식;이창훈
    • 대한산업공학회지, Vol. 34, No. 3, pp. 308-317, 2008
  • This research investigates the economic characteristics of 2 of 2 runs rules under the Shewhart $\bar{X}$ control chart scheme. A Markov chain approach is employed to calculate the in-control average run length (ARL) and the average length of the analysis cycle. States of the process are defined according to the process conditions at sampling time, and transition probabilities are derived from the state definitions. A steady-state cost function is constructed based on the Lorenzen and Vance (1986) model. Numerical examples show that 2 of 2 runs rules are economically superior to the Shewhart $\bar{X}$ chart in many cases.

개선된 3 중 2 주 및 보조 런 규칙을 가진 X관리도의 통계적 설계 (Statistical Design of X Control Chart with Improved 2-of-3 Main and Supplementary Runs Rules)

  • 박진영;서순근
    • 품질경영학회지, Vol. 40, No. 4, pp. 467-480, 2012
  • Purpose: This paper introduces new 2-of-3 main and supplementary runs rules to increase the performance of the classical $\bar{X}$ control chart in detecting small process shifts. Methods: The proposed runs rules are compared with other competitive runs rules by numerical experiments. A nonlinear optimization problem, minimizing the out-of-control ARL at a specified shift of the process mean to determine the action and warning limits simultaneously, is formulated, and a procedure to find the two limits is illustrated with a numerical example. Results: The proposed 2-of-3 main and supplementary runs rules demonstrate improved performance over other runs rules in detecting a sudden shift of the process mean caused by simultaneous changes of mean and standard deviation. Conclusion: To increase performance in the detection of small to moderate shifts, the proposed runs rules can be used with $\bar{X}$ control charts.
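For reference, the same first-step (Markov chain) analysis applied to a plain 2-of-3 rule, where a signal is raised when two of the last three points fall beyond a single limit, yields a closed form (again a simplified single-limit sketch, not the paper's combined main-and-supplementary scheme):

```python
# Sketch: ARL of a 2-of-3 runs rule by first-step analysis on three
# transient states encoding the in/out status of the last two points:
# (in,in), (in,out), (out,in).  Solving the three linear equations
# L(in,in) = 1 + (1-p)L(in,in) + p*L(in,out)
# L(in,out) = 1 + (1-p)L(out,in)
# L(out,in) = 1 + (1-p)L(in,in)
# gives the closed form below (p = tail probability of one point).
def arl_2of3(p: float) -> float:
    return (1.0 + 2.0 * p - p * p) / (p * p * (2.0 - p))

# Example: p = 0.05 gives an ARL of about 225 samples.
```

At equal p, the 2-of-3 rule signals sooner than the 2-of-2 rule, which is why its limits must be set wider to hold the same in-control ARL.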

X Control Charts under the Second Order Autoregressive Process

  • Baik, Jai-Wook
    • 품질경영학회지, Vol. 22, No. 1, pp. 82-95, 1994
  • When independent individual measurements are taken, both $S/c_4$ and $\bar{R}/d_2$ are unbiased estimators of the process standard deviation. With dependent data, however, $\bar{R}/d_2$ is not an unbiased estimator of the process standard deviation, whereas $S/c_4$ is asymptotically unbiased. If there exists correlation in the data, positive (negative) correlation tends to increase (decrease) the ARL. The effect of using $\bar{R}/d_2$ is greater than that of $S/c_4$ when the assumption of independence is invalid. A supplementary runs rule shortens the ARL of X control charts dramatically in the presence of correlation in the data.
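The second-order autoregressive observations studied above can be simulated as follows (a hedged sketch; the AR(2) coefficients shown in the example are illustrative, not those used in the paper):

```python
import random

# Generate an AR(2) series x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t,
# with e_t ~ N(0, sigma^2).  Seeded for reproducibility.
def ar2_series(phi1: float, phi2: float, n: int,
               sigma: float = 1.0, seed: int = 0):
    rng = random.Random(seed)
    x = [0.0, 0.0]  # zero start-up values
    for _ in range(n):
        x.append(phi1 * x[-1] + phi2 * x[-2] + rng.gauss(0.0, sigma))
    return x[2:]  # drop the start-up values
```

Feeding such series to limits estimated with $\bar{R}/d_2$ versus $S/c_4$ makes the bias under dependence visible empirically.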


Classification Rule for Optimal Blocking for Nonregular Factorial Designs

  • Park, Dong-Kwon;Kim, Hyoung-Soon;Kang, Hee-Kyoung
    • Communications for Statistical Applications and Methods, Vol. 14, No. 3, pp. 483-495, 2007
  • In a general fractional factorial design, the n levels of a factor are coded by the $n^{th}$ roots of unity. Pistone and Rogantin (2007) gave a full generalization to mixed-level designs of the theory of the polynomial indicator function using this device. This article discusses optimal blocking schemes for nonregular designs. According to the hierarchical principle, minimum aberration (MA) has been used as an important criterion for selecting blocked regular fractional factorial designs. The MA criterion is mainly based on defining contrast groups, which exist only for regular designs, not for nonregular designs. Recently, Cheng et al. (2004) adapted the generalized (G)-MA criterion discussed by Tang and Deng (1999) in studying $2^p$ optimal blocking schemes for nonregular factorial designs. Their approach is based on the method of replacement, assigning the distinct level combinations in a column to $2^p$ different blocks. However, when the number of blocks is not a power of two, no method has been available. As an example, suppose we experiment over 3 days with a 12-run Plackett-Burman design: how can we arrange the 12 runs into three blocks? To solve the problem, we apply the G-MA criterion to nonregular mixed-level blocking schemes via the mixed-level indicator function and give an answer to this question.
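The 12-run Plackett-Burman design used in the example above can be written down from its standard generating row (a sketch of the design matrix only; the G-MA blocking criterion itself is not reproduced here):

```python
# Standard 12-run Plackett-Burman design: the 11 cyclic shifts of the
# generating row, plus a final row of all -1s.  The resulting 12 x 11
# matrix is an orthogonal array of strength 2: every column is balanced
# and any two columns are orthogonal.
GEN = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def pb12():
    rows = [GEN[-i:] + GEN[:-i] for i in range(11)]  # cyclic shifts
    rows.append([-1] * 11)                           # closing all-minus row
    return rows
```

Partitioning these 12 runs into three blocks of four is exactly the situation where the power-of-two replacement method above does not apply.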

외판원 문제의 지역 분할-연결 기법 (Travelling Salesman Problem Based on Area Division and Connection Method)

  • 이상운
    • 한국인터넷방송통신학회논문지, Vol. 15, No. 3, pp. 211-218, 2015
  • This paper proposes an algorithm that easily obtains a solution to the travelling salesman problem. As a preprocessing step, the n(n-1) inter-vertex distances are sorted in ascending order for each vertex, and the 10n shortest distances (top 10 per vertex) are retained. First, the partial paths connected by each vertex $v_i$'s shortest distance $r_1=d\{v_i,v_j\}$ are taken as regions. For $r_2$, edges between vertices within a region are always connected, while a connection rule is applied to edges between regions. From $r_3$ onward, only inter-region edges are connected, until a single Hamiltonian cycle is formed. The proposed method can therefore be regarded as a region divide-and-conquer method. Applying the proposed algorithm to TSP-1 (n=26) and TSP-2 (n=42), whose cities lie on real maps, and to TSP-3 (n=50), generated randomly on the Euclidean plane, optimal solutions were obtained for TSP-1 and TSP-2; for TSP-3, the tour length was shorter than the result of Valenzuela and Jones. Whereas exhaustive search requires n! evaluations, the complexity of the proposed algorithm is $O(n^2)$, with at most 10n iterations.
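The 10n candidate-edge preprocessing described in the abstract can be sketched as follows (illustrative only; the region-connection rules themselves are not reproduced):

```python
import math

# For each vertex, keep its k nearest neighbours, giving the k*n
# candidate edge set the algorithm works from (k = 10 in the paper).
def candidate_edges(points, k=10):
    n = len(points)
    cand = {}
    for i in range(n):
        dists = sorted(
            (math.dist(points[i], points[j]), j)
            for j in range(n) if j != i
        )
        cand[i] = [j for _, j in dists[:k]]  # k nearest, in distance order
    return cand
```

Restricting all later connection steps to this candidate set is what keeps the overall work quadratic rather than factorial.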

Simple Online Multiple Human Tracking based on LK Feature Tracker and Detection for Embedded Surveillance

  • Vu, Quang Dao;Nguyen, Thanh Binh;Chung, Sun-Tae
    • 한국멀티미디어학회논문지, Vol. 20, No. 6, pp. 893-910, 2017
  • In this paper, we propose a simple online multiple object (human) tracking method, LKDeep (Lucas-Kanade feature and Detection based Simple Online Multiple Object Tracker), which can run online fast enough on a CPU core alone, with acceptable tracking performance for embedded surveillance purposes. The proposed LKDeep is a pragmatic hybrid approach that tracks multiple objects (humans) mainly with LK features, compensated by detection periodically or when necessary. Compared to other state-of-the-art multiple object tracking methods based on the 'Tracking-By-Detection (TBD)' approach, the proposed LKDeep is faster, since it does not have to detect objects in every frame and it utilizes a simple association rule, yet it shows good object tracking performance. Through experiments comparing against other multiple object tracking (MOT) methods using the public DPM detector, among the online state-of-the-art MOT methods reported in the MOT challenge [1], it is shown that the proposed simple online MOT method, LKDeep, runs faster with good tracking performance for surveillance purposes. It is further observed through the single object tracking (SOT) visual tracker benchmark experiment [2] that LKDeep with an optimized deep learning detector can run online fast with tracking performance comparable to other state-of-the-art SOT methods.
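The abstract does not spell out the "simple association rule"; a common choice in online trackers is greedy intersection-over-union (IoU) matching, sketched here purely as an assumed illustration, not LKDeep's actual rule:

```python
# Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Greedy association: each track takes the unused detection with the
# highest IoU, provided it clears a minimum-overlap threshold.
def associate(tracks, detections, thresh=0.3):
    matches, used = [], set()
    for t_id, t_box in tracks.items():
        best = max(
            ((iou(t_box, d), i) for i, d in enumerate(detections)
             if i not in used),
            default=(0.0, None),
        )
        if best[1] is not None and best[0] >= thresh:
            matches.append((t_id, best[1]))
            used.add(best[1])
    return matches
```

Because detection runs only periodically here, such an association step is only needed on detection frames; in between, the LK features carry each track forward.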

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • 한국지능시스템학회 학술대회논문집, Fifth International Fuzzy Systems Association World Congress 1993, pp. 975-976, 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach is to use application specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format IF A and B and C and D THEN Do E and Do F; with this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions; the minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we can achieve as high as a 2.5x increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so specializing a processor for fuzzy control is very effective. Table I shows the measured speed of inference on a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    TABLE I. INFERENCE TIME BY 51 RULES
                      MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences   125 s                  49 s                        0.0038 s
    1 inference       20.8 ms                8.2 ms                      6.4 us
    FLIPS             48                     122                         156,250
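The max-min compositional inference with Mamdani implication that the ASIC implements can be sketched in software (a toy discretized version of the technique, not the chip's datapath):

```python
# Toy Mamdani max-min inference over fuzzy sets discretized on a universe.
# AND of rule antecedents -> min; implication -> min (clipping the
# consequent); aggregation across rules -> max; defuzzification -> centroid.
def mamdani_infer(rules, inputs, universe):
    agg = [0.0] * len(universe)
    for antecedents, consequent in rules:
        # firing strength: min over the antecedent membership values
        w = min(mf(x) for mf, x in zip(antecedents, inputs))
        for k, u in enumerate(universe):
            agg[k] = max(agg[k], min(w, consequent(u)))  # clip and aggregate
    num = sum(u * m for u, m in zip(universe, agg))
    den = sum(agg)
    return num / den if den else 0.0  # centroid defuzzification
```

The min and max calls in the inner loop are exactly the operations the proposed specialized RISC instructions would accelerate.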


대형교통사고 심각도 모형에 의한 주행안전성 및 투자효과 분석 (Analysis on the Driving Safety and Investment Effect using Severity Model of Fatal Traffic Accidents)

  • 임창식;최양원
    • 대한교통학회지, Vol. 29, No. 3, pp. 103-114, 2011
  • This study used data from 112 sites of fatal traffic accidents since 2000 to identify the relationship between such accidents and road geometry through various cross-tabulation and frequency analyses, built a severity model for fatal traffic accidents on this basis, and ran 720 computer simulations to improve driving safety, reaching the following conclusions. First, the cross-tabulation and frequency analyses showed 43.7% of accidents on curved sections, 60.7% under other longitudinal-grade conditions, 57.2% at curve radii of 0-24 m, 83.9% at superelevations of 0.1-2.0%, and 49.1% on two-lane (one-way) roads; by vehicle type, the order was passenger cars (33.0%), trucks (20.5%), and buses (14.3%); and the presence or absence of superelevation was found to have the greatest influence on the occurrence of fatal accidents. Second, an Ordered Probit Model was used to develop a fatal-accident severity model capable of predicting damage under various road conditions; based on the developed model, road hazards can be predicted in advance and countermeasures prepared. Third, the simulation results showed that if superelevation had been installed at sites where fatal accidents had already occurred, such accidents would not occur at about 85% or more of the sites; these results suggest that the exception provisions for superelevation in the Rules on Road Structure and Facility Standards (commentary and guidelines) need to be further strengthened.
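The ordered probit model mentioned above assigns severity-category probabilities from cutpoints on a latent normal scale; a generic sketch follows (the linear predictor and cutpoints here are illustrative, not the fitted model):

```python
import math

def _phi(z: float) -> float:
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# P(y = k) = Phi(c_k - xb) - Phi(c_{k-1} - xb), where xb is the linear
# predictor and the cutpoints are augmented with -inf and +inf.
def ordered_probit_probs(xb, cutpoints):
    cuts = [-math.inf] + list(cutpoints) + [math.inf]
    return [_phi(cuts[k + 1] - xb) - _phi(cuts[k] - xb)
            for k in range(len(cuts) - 1)]
```

With K - 1 cutpoints the function returns K category probabilities (e.g. minor, serious, fatal), which is how the model grades predicted damage across road conditions.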