• Title/Summary/Keyword: Probability Item


A Study on the Importance and order of priority of the Major control item for DMSMS by using AHP analysis (AHP 분석을 통한 부품단종 주요관리항목 중요도 및 우선순위에 관한 연구)

  • Moon, Jayoung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.10
    • /
    • pp.48-54
    • /
    • 2020
  • DMSMS (Diminishing Manufacturing Sources and Material Shortages) is increasing as technology advances and the military parts market shrinks. DMSMS raises total life-cycle costs and degrades serviceability. Proactive management of parts is therefore important for reducing cost, and a database is needed to share DMSMS information. Management must be performed continuously, with major control items defined so that it can be carried out more efficiently. The purpose of this study was to derive the major control items for a DMSMS management system. Candidate control items were first selected on the basis of the DAPA (Defense Acquisition Program Administration) manual and the SD-22 manual, and the survey results were analyzed with the AHP (Analytic Hierarchy Process) method. Fifteen detailed items were stratified under three criteria (impact, probability, and cost of DMSMS), and the weight of each item was calculated from a nine-point-scale survey. The AHP survey was administered to 25 specialists in DMSMS management, and responses with a consistency ratio above 0.1 were excluded. The results are explained and future directions for development are suggested.
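
The weight calculation and consistency check described here are standard AHP steps. As a rough, hypothetical illustration (the 3x3 comparison matrix below is invented, not data from the study), the weights and consistency ratio for the three criteria could be computed along these lines:

```python
import numpy as np

# Saaty's random consistency index by matrix size (n = 1..9).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from a pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0        # consistency ratio
    return w, cr

# Hypothetical 3x3 comparison of the three criteria (impact, probability, cost)
# on Saaty's 1-9 scale; reciprocals fill the lower triangle.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # exclude respondent if CR > 0.1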

Estimation for the Change of Daily Maxima Temperature (일일 최고기온의 변화에 대한 추정)

  • Ko, Wang-Kyung
    • The Korean Journal of Applied Statistics
    • /
    • v.20 no.1
    • /
    • pp.1-9
    • /
    • 2007
  • This investigation of changes in the daily maximum temperature in Seoul, Daegu, Chunchen, and Youngchen was prompted by news reports that the earth is getting warmer and a recent report that Korea is warming because of this climatic change. A statistical analysis of the June daily maxima in Seoul revealed a positive trend of 1.1190 degrees Celsius over the 45 years, a change of 0.0249 degrees annually. Because of the large variation in these maximum temperatures, the significance of this increase can be questioned. To check the goodness of fit of the proposed extreme value model, we show a Q-Q plot of the observed quantiles against the simulated quantiles and a probability plot. We also calculated monthly statistics and a tolerance limit. The significance of the trend was tested by simulating a large number of similar datasets from an extreme value distribution that described the observed data very well. Only 0.02% of the simulated datasets showed an increase of this size or larger, meaning that the probability of such an increase occurring by chance is very low.
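
As a loose illustration of the simulation test sketched in this abstract, the following hypothetical example fits a Gumbel-type extreme value distribution to a stand-in series of June maxima, simulates many trend-free datasets of the same length, and counts how often a fitted slope at least as large as the observed one occurs by chance; none of the numbers are the paper's Seoul data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-in for 45 years of June maximum temperatures (degrees C).
years = np.arange(45)
observed = 29 + 0.025 * years + rng.gumbel(loc=0.0, scale=1.2, size=years.size)

obs_slope = stats.linregress(years, observed).slope

# Fit a Gumbel distribution to the observed maxima (ignoring the trend, as a null model).
loc, scale = stats.gumbel_r.fit(observed)

# Simulate many trend-free datasets and see how often a slope this large occurs by chance.
n_sim = 10_000
count = 0
for _ in range(n_sim):
    sim = stats.gumbel_r.rvs(loc=loc, scale=scale, size=years.size, random_state=rng)
    if stats.linregress(years, sim).slope >= obs_slope:
        count += 1

print(f"observed slope: {obs_slope:.4f} deg C/year")
print(f"fraction of simulations with a slope at least this large: {count / n_sim:.4%}")
```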

Approximate Approach to Calculating the Order Fill Rate under Purchase Dependence (구매종속성이 존재하는 상황에서 주문충족율을 계산하는 근사법에 관한 연구)

  • Park, Changkyu;Seo, Junyong
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.41 no.2
    • /
    • pp.35-51
    • /
    • 2016
  • This paper proposes a new approximate approach to calculating the order fill rate, the probability of filling an entire customer order immediately from the shelf, in a business environment with purchase dependence, a characteristic of customer purchase patterns observed in areas such as marketing, manufacturing systems, and distribution systems. The approach divides customer orders into item orders and calculates the fill rates of all order types to approximate the order fill rate. We develop a greedy iterative search algorithm (GISA) based on the Gauss-Seidel method to avoid the curse of dimensionality and prevent solution divergence for larger instances. Through a computational analysis comparing the GISA with simulation, we demonstrate that the GISA is a dependable algorithm for deriving the stationary joint distribution of on-hand inventories in the type-K pure system. We also present some managerial insights.
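
To make the decomposition idea concrete, the sketch below approximates the order fill rate by splitting customer orders into order types and weighting each type's fill probability; the per-item fill rates and the order-type distribution are hypothetical, and item availabilities are treated as independent, a simplification that the paper's GISA avoids by deriving the stationary joint distribution of on-hand inventories.

```python
# Hypothetical per-item fill rates (probability that an individual item is on the shelf).
item_fill = {"A": 0.95, "B": 0.90, "C": 0.85}

# Hypothetical distribution of order types (which combination of items a customer
# requests), reflecting purchase dependence: items B and C are often bought together.
order_type_prob = {
    ("A",): 0.40,
    ("B",): 0.15,
    ("B", "C"): 0.30,
    ("A", "B", "C"): 0.15,
}

def order_fill_rate(item_fill, order_type_prob):
    """Approximate the order fill rate by weighting each order type's fill probability.

    The fill probability of an order type is approximated as the product of the
    per-item fill rates, i.e., item availabilities are treated as independent.
    """
    rate = 0.0
    for items, prob in order_type_prob.items():
        fill_all = 1.0
        for item in items:
            fill_all *= item_fill[item]
        rate += prob * fill_all
    return rate

print(f"approximate order fill rate: {order_fill_rate(item_fill, order_type_prob):.4f}")
```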

CLASSIFICATION FUNCTIONS FOR EVALUATING THE PREDICTION PERFORMANCE IN COLLABORATIVE FILTERING RECOMMENDER SYSTEM

  • Lee, Seok-Jun;Lee, Hee-Choon;Chung, Young-Jun
    • Journal of applied mathematics & informatics
    • /
    • v.28 no.1_2
    • /
    • pp.439-450
    • /
    • 2010
  • In this paper, we propose a new idea for evaluating the accuracy of user preference predictions generated by a memory-based collaborative filtering algorithm before the prediction process in a recommender system. Our analysis shows that a pre-evaluation is possible before predicting users' preferences for items transacted on the web. The classification functions proposed in this study characterize a user's rating pattern under certain conditions. We test whether these classification functions select users with lower or higher prediction performance under the collaborative filtering recommendation approach. The statistical tests are based on differences in prediction accuracy between the user groups classified by the classification functions using the generative probability of specific ratings. The characteristics of the rating patterns of the classified users are also presented.
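
For context, the following minimal sketch implements a memory-based (user-based, Pearson-correlation) collaborative filtering prediction and a leave-one-out MAE per user, the kind of prediction accuracy the proposed classification functions aim to anticipate; the small rating matrix is invented for illustration and is not the paper's dataset.

```python
import numpy as np

# Hypothetical user-item rating matrix (0 = unrated), rows = users, columns = items.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

def predict(R, u, i):
    """Predict user u's rating of item i from mean-centered ratings of similar users."""
    mu_u = R[u][R[u] > 0].mean()
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == u or R[v, i] == 0:
            continue
        common = (R[u] > 0) & (R[v] > 0)
        if common.sum() < 2:
            continue
        sim = np.corrcoef(R[u][common], R[v][common])[0, 1]   # Pearson similarity
        if np.isnan(sim):
            continue
        mu_v = R[v][R[v] > 0].mean()
        num += sim * (R[v, i] - mu_v)
        den += abs(sim)
    return mu_u if den == 0 else mu_u + num / den

def user_mae(R, u):
    """Leave-one-out MAE for one user: hide each observed rating and predict it back."""
    errors = []
    for i in np.nonzero(R[u] > 0)[0]:
        held_out, R[u, i] = R[u, i], 0
        errors.append(abs(predict(R, u, i) - held_out))
        R[u, i] = held_out
    return float(np.mean(errors))

print("per-user MAE:", [round(user_mae(R, u), 3) for u in range(R.shape[0])])
```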

Performance Evaluation of a Multi-Item Production System Operated by the CONWIP Control Mechanism (CONWIP 통제방식에 의해 운영되는 다품목 생산시스템의 성능평가)

  • Park, Chan-Woo;Lee, Hyo-Seong;Kim, Chang-Gon
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.28 no.1
    • /
    • pp.1-13
    • /
    • 2002
  • We study a multi-component production/inventory system in which individual components are made to meet various demand types. We assume that demands arrive according to a Poisson process and that each demand requests a particular kit of different components with a fixed probability. Each component is produced on a flow line with several stations, and production of each component is operated under the CONWIP control mechanism. To analyze this system, we propose an approximation method based on aggregation. In applying the aggregation method, a product-form approximation technique as well as a matrix-geometric method is used. Comparisons with simulation show that the approximation method provides fairly good results.
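
The paper validates its analytical approximation against simulation; as a rough companion, the sketch below simulates a much simpler, hypothetical setting, a single-product CONWIP-controlled flow line with Poisson demand, exponential stations, and lost sales, and estimates the fill rate. It is not the multi-item, kit-demand model analyzed in the paper.

```python
import heapq
import random

def simulate_conwip(n_cards=10, rates=(1.2, 1.2, 1.2), demand_rate=1.0,
                    horizon=100_000.0, seed=1):
    """Simulate a single-product CONWIP flow line that replenishes finished goods.

    Each of the n_cards is attached either to a job somewhere in the line or to a
    finished item on the shelf. A satisfied demand frees a card, which immediately
    releases a new job to station 1. Demands that find no finished item are lost.
    """
    rng = random.Random(seed)
    m = len(rates)
    queue = [0] * m              # jobs at each station (waiting + in service)
    finished = n_cards           # all CONWIP cards start as finished goods
    events = [(rng.expovariate(demand_rate), "demand", -1)]
    served = lost = 0

    def schedule_completion(t, s):
        heapq.heappush(events, (t + rng.expovariate(rates[s]), "done", s))

    while True:
        t, kind, s = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "demand":
            if finished > 0:
                finished -= 1
                served += 1
                queue[0] += 1                    # freed card releases a new job
                if queue[0] == 1:
                    schedule_completion(t, 0)
            else:
                lost += 1                        # no stock on the shelf: demand is lost
            heapq.heappush(events, (t + rng.expovariate(demand_rate), "demand", -1))
        else:                                    # service completion at station s
            queue[s] -= 1
            if queue[s] > 0:
                schedule_completion(t, s)        # start the next waiting job
            if s + 1 < m:
                queue[s + 1] += 1
                if queue[s + 1] == 1:
                    schedule_completion(t, s + 1)
            else:
                finished += 1                    # last station: item goes to the shelf
    return served / (served + lost)

print("estimated fill rate:", round(simulate_conwip(), 3))
```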

Design of a Curtailed-SPRT Control Chart (단축-축차관리도의 설계)

  • Chang, Young-Soon
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.34 no.1
    • /
    • pp.29-37
    • /
    • 2009
  • This paper proposes a curtailed sequential probability ratio test (SPRT) control chart. To use the conventional SPRT control chart, the number of items inspected at a sampling point must be unrestricted, because items at a sampling point are inspected one by one until the SPRT terminates. In practice, however, the number of observations taken at a sampling point has an upper bound, since sampling and testing an item is time-consuming or expensive. When the sample size reaches the upper bound without evidence that the process is in control or out of control, the proposed chart makes a decision using the sample mean of all observations taken at that sampling point. The properties of the proposed chart are obtained by a Markov chain approach, and the performance of the chart is compared with fixed sample size (FSS) and variable sample size (VSS) control charts. A comparative study shows that the proposed chart performs better than both VSS and conventional FSS control charts.
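
To illustrate the curtailment idea, the sketch below runs a one-sided SPRT for a shift in a normal mean item by item but stops at an upper bound n_max; if neither boundary has been crossed by then, the decision falls back on the sample mean, as the abstract describes. The parameter values and the fallback threshold are illustrative assumptions, not the chart design derived in the paper.

```python
import math
import random

def curtailed_sprt(observations, mu0=0.0, mu1=1.0, sigma=1.0,
                   alpha=0.005, beta=0.10, n_max=5, k=0.5):
    """One-sided curtailed SPRT for H0: mean = mu0 vs H1: mean = mu1 (sigma known).

    Items are inspected one by one; if neither SPRT boundary is crossed after
    n_max observations, the decision falls back on the sample mean: signal
    out-of-control when it exceeds mu0 + k*sigma, otherwise declare in-control.
    """
    lower = math.log(beta / (1 - alpha))     # accept-H0 (in-control) boundary
    upper = math.log((1 - beta) / alpha)     # accept-H1 (out-of-control) boundary
    llr = xsum = 0.0
    for n, x in enumerate(observations[:n_max], start=1):
        xsum += x
        # log-likelihood-ratio increment for a normal observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return "out-of-control", n
        if llr <= lower:
            return "in-control", n
    # curtailed: decide from the sample mean of the n observations taken
    return ("out-of-control" if xsum / n > mu0 + k * sigma else "in-control"), n

rng = random.Random(0)
sample = [rng.gauss(0.3, 1.0) for _ in range(5)]   # hypothetical slightly shifted data
print(curtailed_sprt(sample))
```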

On the Optimality of (s, S) Inventory Policy with Loss Cost (손실비용을 고려한 (s, S) 재고정책)

  • Choi, Jin-Young
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.18 no.34
    • /
    • pp.61-67
    • /
    • 1995
  • Through the model presented in this paper, we study the depletion of stock that takes place due to random loss of items as well as random demand, under the assumptions that the distributions of demand are independent of those of loss, that both are identically distributed, and that the lifetime of each item follows a negative exponential distribution. The steady-state probability distribution of the stock level is derived assuming instantaneous delivery of orders under the (s, S) inventory policy. We also derive an expression for the total expected cost that includes the loss cost. Sensitivity analysis shows that the loss rate has a substantial effect on the total cost and on the optimal inventory level.
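
As a rough numerical companion, the sketch below simulates a periodic-review stand-in for the (s, S) policy in which stock is depleted both by random demand and by random loss of items, and tallies an average cost with holding, shortage, ordering, and loss components. The cost parameters and the discretized review are hypothetical simplifications of the continuous-time model analyzed in the paper.

```python
import random

def simulate_sS(s=5, S=20, periods=100_000, demand_rate=3.0, loss_rate=0.02,
                h=1.0, p=10.0, K=50.0, c_loss=4.0, seed=7):
    """Periodic-review (s, S) policy with Poisson demand and random item loss.

    Each period: items in stock are lost independently with probability loss_rate
    (a discrete stand-in for exponential lifetimes), Poisson demand is filled from
    stock (excess demand is lost and penalized), and if the level falls to s or
    below, an order up to S arrives instantly.
    """
    rng = random.Random(seed)
    level, total = S, 0.0
    for _ in range(periods):
        # random loss of items currently in stock
        lost = sum(rng.random() < loss_rate for _ in range(level))
        level -= lost
        total += c_loss * lost
        # Poisson demand: count exponential interarrivals falling in a unit period
        d = 0
        t = rng.expovariate(demand_rate)
        while t < 1.0:
            d += 1
            t += rng.expovariate(demand_rate)
        short = max(d - level, 0)
        level = max(level - d, 0)
        total += p * short + h * level
        # instantaneous replenishment up to S when the level reaches s or below
        if level <= s:
            total += K
            level = S
    return total / periods

for s in (2, 5, 8):
    print(f"s={s:>2}: average cost per period = {simulate_sS(s=s):.2f}")
```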


The Influence of Change Prevalence on Visual Short-Term Memory-Based Change Detection Performance (변화출현확률이 시각단기기억 기반 변화탐지 수행에 미치는 영향)

  • Son, Han-Gyeol;Hyun, Joo-Seok
    • Korean Journal of Cognitive Science
    • /
    • v.32 no.3
    • /
    • pp.117-139
    • /
    • 2021
  • Change detection, in which the presence of a differing item is judged between a memory array and a test array separated by a brief interval, resembles visual search in that the differing item is searched for at the onset of the test array by comparing it against the items held in memory. Given this resemblance, the present study examined whether varying the probability of a change occurring in a visual short-term memory-based change detection task influences response decision-making (i.e., a change prevalence effect). The simple-feature change detection task consisted of a set of four colored boxes followed by another set of four colored boxes, between which participants judged the presence or absence of a color change. Change prevalence was set to 20, 50, or 80% of all trials, and change detection errors, detection sensitivity, and the corresponding RTs were analyzed. The analyses revealed that as change prevalence increased, false alarms became more frequent while misses became less frequent, along with delayed correct-rejection responses. The observed change prevalence effect closely resembles the target prevalence effect that varies with the probability of target occurrence in visual search tasks, indicating that the principles underlying the two effects may be similar.
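
The prevalence effect reported here is naturally summarized with signal detection measures; the sketch below computes hit rate, false-alarm rate, sensitivity (d'), and criterion (c) from hypothetical trial counts at each prevalence level. The counts are invented to mimic the reported pattern and are not the study's data.

```python
from scipy.stats import norm

# Hypothetical trial counts: (hits, misses, false alarms, correct rejections)
# at each change-prevalence level; not the counts reported in the paper.
counts = {
    0.2: (70, 30, 10, 390),
    0.5: (180, 70, 60, 190),
    0.8: (370, 30, 40, 60),
}

def sdt_measures(hits, misses, fas, crs):
    """Hit rate, false-alarm rate, d-prime, and criterion c with a mild correction."""
    hr = (hits + 0.5) / (hits + misses + 1.0)       # correction avoids rates of 0 or 1
    far = (fas + 0.5) / (fas + crs + 1.0)
    d_prime = norm.ppf(hr) - norm.ppf(far)
    criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))
    return hr, far, d_prime, criterion

for prev, (h, m, fa, cr) in counts.items():
    hr, far, dp, c = sdt_measures(h, m, fa, cr)
    print(f"prevalence {prev:.0%}: hit={hr:.2f} fa={far:.2f} d'={dp:.2f} c={c:.2f}")
```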

A Model-based Test Approach and Case Study for Weapon Control System (모델기반 테스트 기법 및 무장통제장치 적용 사례)

  • Bae, Jung Ho;Jang, Bucheol;Koo, Bongjoo
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.20 no.5
    • /
    • pp.688-699
    • /
    • 2017
  • Model-based testing, a well-known black-box testing method, consists of four steps: constructing a model from the requirements, generating test cases from the model, executing the SUT (software under test), and detecting failures. Among the models constructed in the first step, state-based models such as the UML State Machine are commonly used to design event-based embedded systems (e.g., weapon control systems). To generate test cases from state-based models in the next step, coverage-based techniques such as state coverage and transition coverage are used. The round-trip path coverage technique using the W-Method, one of the coverage-based techniques, is known to be more effective than the others. However, it suffers from low failure observability, because the W-Method terminates a test path when it reaches a state that has already been visited, and in environments such as GUIs it is hard to decide whether the current state is exactly the same as the previously visited one. In other words, faults can remain unrevealed. Therefore, this study proposes an Extended W-Method, which extends each round-trip path to a final state to improve failure observability. In this paper, we compare the effectiveness and efficiency of the requirement-item-based technique, the W-Method, and our Extended W-Method. The results show that our technique detects five and two more faults, respectively, and achieves 28% and 42% higher failure detection probability than the requirement-item-based and W-Method techniques, respectively.
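
To make the round-trip-path notion concrete, the sketch below enumerates round-trip paths from a small, hypothetical state machine, i.e., paths that stop as soon as they revisit a state already on the path, which is exactly where the abstract notes the W-Method stops and failures may go unobserved. It is not the paper's Extended W-Method, only an illustration of the coverage notion.

```python
from collections import namedtuple

Transition = namedtuple("Transition", "src event dst")

# Hypothetical state machine for a simple weapon-control-like mode logic.
transitions = [
    Transition("Idle",    "power_on", "Standby"),
    Transition("Standby", "arm",      "Armed"),
    Transition("Armed",   "fire",     "Firing"),
    Transition("Armed",   "disarm",   "Standby"),
    Transition("Firing",  "complete", "Standby"),
]

def round_trip_paths(transitions, initial):
    """Enumerate round-trip paths: depth-first paths that end when a state repeats
    (or when no outgoing transition exists)."""
    by_src = {}
    for t in transitions:
        by_src.setdefault(t.src, []).append(t)
    paths = []

    def extend(path, visited):
        state = path[-1].dst if path else initial
        outgoing = by_src.get(state, [])
        if not outgoing:
            paths.append(path)
            return
        for t in outgoing:
            if t.dst in visited:                 # revisit -> the round trip ends here
                paths.append(path + [t])
            else:
                extend(path + [t], visited | {t.dst})

    extend([], {initial})
    return paths

for p in round_trip_paths(transitions, "Idle"):
    print(" -> ".join(t.event for t in p))
```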

Standardization for basic association measures in association rule mining (연관 규칙 마이닝에서의 평가기준 표준화 방안)

  • Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.5
    • /
    • pp.891-899
    • /
    • 2010
  • Association rule mining is a technique that represents the relationship between two or more items by numerically expressing the relevance of each item in vast databases, and it is among the most widely used techniques in data mining. The basic measures for association rules are support, confidence, and lift; these are used to generate the rules. Lift needs to be standardized because its range differs from that of support and confidence, and support and confidence need to be standardized so that the association levels of antecedent variables for a single consequent variable can be compared objectively. In this paper, we propose a method for standardizing these association measures that takes into account the marginal probability of each item, so that the degree of association can be grasped objectively and exactly; we check the conditions for the association criteria, and then compare the original measures with the standardized measures using concrete examples.
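
For reference, the sketch below computes the three basic measures named in the abstract, support, confidence, and lift, from a handful of hypothetical transactions; the paper's actual contribution, standardizing these measures using each item's marginal probability, is not reproduced here.

```python
# Hypothetical market-basket transactions.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk"},
    {"bread", "milk", "jam"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    # lift > 1: positive association, = 1: independence, < 1: negative association
    return confidence(antecedent, consequent) / support(consequent)

A, B = {"bread"}, {"milk"}
print(f"support(A->B)    = {support(A | B):.2f}")
print(f"confidence(A->B) = {confidence(A, B):.2f}")
print(f"lift(A->B)       = {lift(A, B):.2f}")
```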