• Title/Summary/Keyword: Poisson process.

Search Results: 484

(r, Q) Policy for Operation of a Multipurpose Facility (단일 범용설비 운영을 위한 (r, Q) 정책)

  • Oh, Geun-Tae
    • Journal of the Korean Operations Research and Management Science Society / v.17 no.3 / pp.27-46 / 1992
  • This paper considers an (r, Q) policy for operating a multipurpose facility. It is assumed that whenever the inventory level falls below r, the facility starts to produce a fixed amount Q. The facility can be utilized for extra production during idle periods, that is, when the inventory level is still greater than r right after a main or extra production operation is finished. However, once the facility starts an extra production operation, that operation cannot be interrupted for main production even if the inventory level falls below r. In the model, demand for the product is assumed to arrive according to a compound Poisson process, and the processing time required to produce a product is assumed to follow an arbitrary distribution. Similarly, orders for extra production are assumed to occur according to a Poisson process, and the extra production processing time is assumed to follow an arbitrary distribution. It is further assumed that unsatisfied demands are backordered and that the expected cumulative amount of demand is less than that of production during each production period. Under a cost structure that includes a setup/production cost, a linear holding cost, a linear backorder cost, a linear extra-production lost-sale cost, and a linear extra-production profit, an expression for the expected cost per unit time under a given (r, Q) policy is obtained, and using the convexity of the cost function, a procedure to find the optimal (r, Q) policy is presented.

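A minimal simulation sketch of the reorder trigger described above (not the paper's analytical model): demand arrives as a compound Poisson process and a fixed batch Q is produced once inventory drops below r. The batch-size range, production rate, and the simplification that demand is frozen during production are all illustrative assumptions.

```python
import random

def simulate_rQ(r, Q, lam=2.0, prod_rate=3.0, horizon=10_000.0, seed=1):
    """Estimate time-average inventory under a simple (r, Q) rule.

    Demands arrive as a compound Poisson process (rate lam, random batch
    sizes); when inventory falls below r, a batch of Q is produced.
    Backorders are allowed, so inventory may go negative.
    """
    rng = random.Random(seed)
    t, inv, area = 0.0, r + Q, 0.0
    while t < horizon:
        dt = rng.expovariate(lam)      # time until next demand arrival
        area += inv * dt               # accumulate time-weighted inventory
        t += dt
        inv -= rng.randint(1, 4)       # compound demand: random batch size
        if inv < r:                    # reorder point crossed: produce Q
            prod_time = Q / prod_rate  # deterministic stand-in for the
            area += inv * prod_time    # paper's arbitrary processing time;
            t += prod_time             # demand during production is ignored
            inv += Q                   # in this sketch
    return area / t

print(simulate_rQ(r=5, Q=20))
```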

Mobile Device-to-Device (D2D) Content Delivery Networking: A Design and Optimization Framework

  • Kang, Hye Joong; Kang, Chung Gu
    • Journal of Communications and Networks / v.16 no.5 / pp.568-577 / 2014
  • We consider a mobile content delivery network (mCDN) in which special mobile devices designated as caching servers (caching-server devices: CSDs) can provide mobile stations with popular contents on demand via device-to-device (D2D) communication links. On the assumption that mobile CSDs are randomly distributed according to a Poisson point process (PPP), an optimization problem is formulated to determine the probability of storing each content on each server so as to minimize the average caching failure rate. Further, we present a low-complexity search algorithm, the optimum dual-solution searching algorithm (ODSA), for solving this optimization problem. We demonstrate that the proposed ODSA requires fewer iterations, on the order of O(log N) searches for caching N contents, to find the optimal solution than the conventional subgradient method, with acceptable accuracy in practice. Furthermore, we identify important characteristics of the optimal caching policies in the mobile environment that serve as a useful aid in designing the mCDN.
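
A sketch of the underlying structure (assumptions throughout; this is not the paper's ODSA): with CSDs dropped by a PPP of density lam, the probability that no CSD caching content i lies within radius R is exp(-lam·π·R²·p_i), so the average caching failure rate is Σ_i q_i·exp(-a·p_i) with a = lam·π·R². Minimizing it under a total cache budget yields a water-filling solution, found here by bisecting the dual (Lagrange) multiplier.

```python
import math

def optimal_caching_probs(q, a, budget, iters=60):
    """q: content request probabilities; a = lam*pi*R^2; budget = sum(p)."""
    def p_of(mu):  # stationarity: q_i*a*exp(-a*p_i) = mu, clipped to [0, 1]
        return [min(1.0, max(0.0, math.log(qi * a / mu) / a)) for qi in q]
    lo, hi = 1e-12, max(q) * a         # bracket; sum(p_of(mu)) decreases in mu
    for _ in range(iters):             # bisection = a simple dual search
        mu = math.sqrt(lo * hi)
        if sum(p_of(mu)) > budget:
            lo = mu
        else:
            hi = mu
    return p_of(math.sqrt(lo * hi))

# Zipf(1) popularity over 10 contents, illustrative parameters only:
q = [1 / (k * sum(1 / j for j in range(1, 11))) for k in range(1, 11)]
p = optimal_caching_probs(q, a=2.0, budget=3.0)
print([round(x, 3) for x in p])
```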

Risk Evaluation Based on the Time Dependent Expected Loss Model in FMEA (FMEA에서 시간을 고려한 기대손실모형에 기초한 위험 평가)

  • Kwon, Hyuck-Moo; Hong, Sung-Hoon; Lee, Min-Koo; Sutrisno, Agung
    • Journal of the Korean Society of Safety / v.26 no.6 / pp.104-110 / 2011
  • In FMEA, the risk priority number (RPN) is used for risk evaluation of each failure mode. It is obtained by multiplying three components, i.e., the severity, occurrence, and detectability of the corresponding failure mode. Each of the three components is usually determined on the basis of past experience and technical knowledge. This approach, however, is not strictly objective in evaluating the risk of a given failure mode and thus provides a somewhat less scientific measure of risk. Assuming a homogeneous Poisson process for the occurrence of failures and causes, we propose a more scientific approach to risk evaluation in FMEA. To quantify the severity of each failure mode, the mission period of the system is taken into consideration. If the system faces no failure during its mission period, there is no loss. If any failure occurs during the mission period, the loss corresponding to that failure mode is incurred, and a longer remaining mission period is assumed to incur a larger loss. Detectability of each failure mode is then incorporated into the model by assuming an exponential probability law for the detection time of each failure cause. Based on the proposed model, an illustrative example and numerical analyses are provided.
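
A minimal numerical sketch of this kind of expected-loss calculation (illustrative, not the paper's exact formulation): failures occur by a homogeneous Poisson process with rate lam, a failure at time t within mission [0, T] costs c·(T - t), and the cause is detected in an exponential time with rate delta, averting the loss if detection comes first. All parameter values are hypothetical.

```python
import math

def expected_mission_loss(lam, T, c, delta, n=10_000):
    """Numerically integrate E[loss] over the first failure time in [0, T]."""
    total, dt = 0.0, T / n
    for k in range(n):
        t = (k + 0.5) * dt                 # midpoint rule
        f = lam * math.exp(-lam * t)       # density of first failure time
        undetected = math.exp(-delta * t)  # cause not yet detected by time t
        total += f * undetected * c * (T - t) * dt
    return total

print(expected_mission_loss(lam=0.01, T=100.0, c=50.0, delta=0.02))
```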

Performance analysis of satellite and terrestrial spectrum-shared networks with directional antenna

  • Yeom, Jeong Seon; Noh, Gosan; Chung, Heesang; Kim, Ilgyu; Jung, Bang Chul
    • ETRI Journal / v.42 no.5 / pp.712-720 / 2020
  • Recently, to make the best use of limited and precious spectrum resources, spectrum sharing between satellite and cellular networks has received much interest. In this study, we mathematically analyze the success probability of a fixed (satellite) earth station (FES) based on a stochastic geometry framework. Both the FES and the base stations (BSs) are assumed to be equipped with directional antennas, and the locations and number of BSs are modeled by a Poisson point process. Furthermore, an exclusion zone is considered, in which BSs are prohibited from being located within a circular zone of a certain radius around the FES to protect it from severe interference from the cellular BSs. We validate the analytical results on the success probability of the cognitive satellite-terrestrial network with directional antennas by comparing them with extensive computer simulations, and we show the effect of the exclusion zone on the success probability at the FES. It is shown that the exclusion-zone-based interference mitigation technique significantly improves the success probability as the exclusion zone radius increases.
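
A Monte Carlo sketch of the setup (assumptions throughout; the paper derives this analytically): interfering BSs are dropped by a PPP outside an exclusion zone of radius r_ex, a crude two-level sectored pattern stands in for the directional antenna model, and the received satellite signal power is a fixed normalized constant. All numeric values are hypothetical.

```python
import math, random

def poisson_sample(rng, mean):
    """Knuth's method for a Poisson variate (adequate for moderate means)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def success_prob(lam=1e-6, r_ex=500.0, R=5000.0, alpha=3.5, theta=1.0,
                 signal=1e-8, g_main=10.0, g_side=0.1, trials=4000, seed=7):
    rng, hits = random.Random(seed), 0
    area = math.pi * (R ** 2 - r_ex ** 2)   # annulus where BSs may lie
    for _ in range(trials):
        interference = 0.0
        for _ in range(poisson_sample(rng, lam * area)):
            r = math.sqrt(rng.uniform(r_ex ** 2, R ** 2))  # uniform in annulus
            g = g_main if rng.random() < 0.1 else g_side   # sectored beam hit
            interference += g * r ** (-alpha)              # path-loss law
        hits += (signal / max(interference, 1e-300)) > theta
    return hits / trials

for r_ex in (200.0, 500.0, 1000.0):   # success improves with exclusion radius
    print(r_ex, success_prob(r_ex=r_ex))
```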

Continuous-Time Queuing Model and Approximation Algorithm of a Packet Switch under Heterogeneous Bursty Traffic (이질적 버스트 입력 트래픽 환경에서 패킷 교환기의 연속 시간 큐잉 모델과 근사 계산 알고리즘)

  • 홍석원
    • Journal of KIISE:Information Networking / v.30 no.3 / pp.416-423 / 2003
  • This paper proposes a continuous-time queueing model of a shared-buffer packet switch and an approximation algorithm. The N arrival processes have heterogeneous bursty traffic characteristics and are modeled by a Coxian distribution of order 2, which is equivalent to an Interrupted Poisson Process (IPP). The service time is modeled by an Erlang distribution with r stages. First, the approximation algorithm aggregates the N arrival processes into a single state variable. Next, the algorithm decomposes the queueing system into N subsystems represented by the aggregated state variables. The balance equations based on these aggregated state variables are then solved by an iterative method. Finally, the algorithm is validated by comparing its results with those of simulation.
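
A simulation sketch of the traffic and service models named above (not the paper's aggregation/decomposition algorithm): one bursty source modeled as an IPP (Poisson arrivals at rate lam only while an exponential ON/OFF switch is ON) feeds a single queue with Erlang-r service. Parameter values are illustrative.

```python
import random

def erlang(rng, r, rate):
    """Erlang-r service time with overall rate `rate` (mean 1/rate)."""
    return sum(rng.expovariate(r * rate) for _ in range(r))

def simulate(lam=5.0, switch=1.0, mu=4.0, r=3, horizon=100_000.0, seed=3):
    rng = random.Random(seed)
    t, on, q, area = 0.0, True, 0, 0.0
    t_sw = rng.expovariate(switch)           # next ON/OFF toggle
    t_ar = rng.expovariate(lam)              # next arrival (valid while ON)
    t_dep = float("inf")
    while t < horizon:
        t_next = min(t_sw, t_ar if on else float("inf"), t_dep)
        area += q * (t_next - t)             # time-weighted queue length
        t = t_next
        if t == t_dep:                       # service completion
            q -= 1
            t_dep = t + erlang(rng, r, mu) if q else float("inf")
        elif on and t == t_ar:               # arrival during an ON period
            if q == 0:
                t_dep = t + erlang(rng, r, mu)
            q += 1
            t_ar = t + rng.expovariate(lam)
        else:                                # ON/OFF switch toggles
            on = not on
            t_sw = t + rng.expovariate(switch)
            if on:                           # memoryless: restart arrival clock
                t_ar = t + rng.expovariate(lam)
    return area / horizon                    # mean number in system

print(simulate())
```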

Performance Analysis of Scheduling Rules in Semiconductor Wafer Fabrication (반도체 웨이퍼 제조공정에서의 스케줄링 규칙들의 성능 분석)

  • 정봉주
    • Journal of the Korea Society for Simulation / v.8 no.3 / pp.49-66 / 1999
  • Semiconductor wafer fabrication is known to be one of the most complex manufacturing processes due to process intricacy, random yields, product diversity, and rapidly changing technologies. In this study we are concerned with the impact of lot release and dispatching policies on the performance of semiconductor wafer fabrication facilities. We consider several wafer fabrication environments classified by machine failure type: no failures, normal MTBF, a bottleneck with low MTBF, high randomness, and high MTBF. The lot release rules considered are Deterministic, Poisson process, WR (Workload Regulation), SA (Starvation Avoidance), and Multi-SA. These rules are combined with several dispatching rules, such as FIFO (First In First Out), SRPT (Shortest Remaining Processing Time), and NINQ/M (smallest Number In Next Queue per Machine). We applied the combined policies to each of the wafer fabrication environments and assessed them in terms of throughput and flow time. The simulation models were based on Wein's fabrication setup, and the simulation parameters were obtained through preliminary simulation experiments. The key result of the simulation experiments is that Multi-SA and SA are the most robust rules, giving good performance in any wafer fabrication environment when used with any dispatching rule. More importantly, for each wafer fabrication environment there exist best and worst choices of lot release and dispatching policies. For example, the Poisson release rule results in the lowest throughput and largest flow time regardless of failure type and dispatching rule, as the toy comparison below suggests.

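A toy illustration of why Poisson release underperforms (a single deterministic bottleneck, not Wein's fab model; rates are hypothetical): at the same release rate, the higher arrival variability of Poisson release inflates queueing delay and hence flow time.

```python
import random

def mean_flow_time(release, rate=0.9, service=1.0, n_lots=20_000, seed=11):
    rng = random.Random(seed)
    t_release, server_free, total = 0.0, 0.0, 0.0
    for _ in range(n_lots):
        gap = 1.0 / rate if release == "deterministic" else rng.expovariate(rate)
        t_release += gap                     # lot release epoch
        start = max(t_release, server_free)  # wait for the bottleneck
        server_free = start + service        # deterministic processing
        total += server_free - t_release     # flow time of this lot
    return total / n_lots

for policy in ("deterministic", "Poisson"):
    print(policy, round(mean_flow_time(policy), 2))
```

With these numbers the deterministic rule yields a flow time near the bare service time, while Poisson release at 90% utilization adds several service times of queueing delay (the M/D/1 effect).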

A Study on the Software Reliability Model Analysis Following Exponential Type Life Distribution (지수 형 수명분포를 따르는 소프트웨어 신뢰모형 분석에 관한 연구)

  • Kim, Hee Cheul; Moon, Song Chul
    • Journal of Information Technology Applications and Management / v.28 no.4 / pp.13-20 / 2021
  • In this paper, life distributions following the linear failure rate distribution, the Lindley distribution, and the Burr-Hatke exponential distribution, all used extensively in the field of software reliability, are applied, and the reliability properties of the software are analyzed using a nonhomogeneous Poisson process with finite failures. The mean value functions of these life distributions are of non-increasing form. For the linear failure rate distribution (exponential distribution), the estimation error relative to the true value is smaller than for the other models. In terms of accuracy, since the Burr-Hatke exponential distribution and the exponential distribution (a case of the linear failure rate distribution) have small mean square error values, these two were regarded as the well-organized models. The linear failure rate distribution (exponential distribution) and the Burr-Hatke exponential distribution can also be viewed as effective models in terms of goodness of fit, because their coefficients of determination are larger than those of the other models. Through this study, software practitioners can use the mean square error and the mean value function as an elementary guideline for discovering software failures.
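
A sketch of the standard finite-failure NHPP machinery the comparison rests on (not the paper's data): for a life distribution F, the mean value function is m(t) = a·F(t); the exponential case gives m(t) = a(1 - e^(-bt)) (the Goel-Okumoto form), and fit is judged by the mean square error between m(t_i) and observed cumulative failure counts. The data points below are hypothetical.

```python
import math

def m_exponential(t, a, b):
    """Finite-failure NHPP mean value function, exponential life distribution."""
    return a * (1.0 - math.exp(-b * t))

def mse(times, counts, m, *params):
    """Mean square error of a fitted mean value function vs observed counts."""
    return sum((m(t, *params) - k) ** 2
               for t, k in zip(times, counts)) / len(times)

# Hypothetical cumulative failure data (illustrative only):
times = [5, 10, 15, 20, 25, 30]
counts = [7, 12, 16, 19, 21, 22]
print(mse(times, counts, m_exponential, 24.0, 0.06))
```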

The Counting Process of Which the Intensity Function Depends on States

  • Park, Jeong-Hyun
    • Communications for Statistical Applications and Methods / v.4 no.1 / pp.281-292 / 1997
  • In this paper we are concerned with counting processes having intensity function $g_n(t)$, where $g_n(t)$ depends not only on t but also on n. It is shown that under certain conditions the number of events in [0, t] follows a generalized Poisson distribution. A counting process is also provided such that $g_i(t) \neq g_j(t)$ for $i \neq j$ and the number of events in [0, t] has a transformed geometric distribution.

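One concrete instance of such a state-dependent intensity (illustrative, not the paper's construction): taking g_n = (n + 1)·lam gives a Yule-type pure birth process, for which the count at time t is geometric with parameter e^(-lam·t), so N(t) is indeed non-Poisson.

```python
import math, random

def count_at(t, lam, rng):
    """Simulate N(t) by summing exponential holding times with rate g_n."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate((n + 1) * lam)  # holding time in state n
        if s > t:
            return n
        n += 1

rng = random.Random(5)
lam, t = 0.5, 2.0
samples = [count_at(t, lam, rng) for _ in range(20_000)]
p0 = math.exp(-lam * t)                      # theoretical P(N(t) = 0)
print("empirical P(N=0):", samples.count(0) / len(samples), "theory:", p0)
```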

Discounted Cost Model of Condition-Based Maintenance Regarding Cumulative Damage of Armor Units of Rubble-Mound Breakwaters as a Discrete-Time Stochastic Process (경사제 피복재의 누적피해를 이산시간 확률과정으로 고려한 조건기반 유지관리의 할인비용모형)

  • Lee, Cheol-Eung; Park, Dong-Heon
    • Journal of Korean Society of Coastal and Ocean Engineers / v.29 no.2 / pp.109-120 / 2017
  • A discounted cost model for the preventive maintenance of armor units of rubble-mound breakwaters is mathematically derived by combining a deterioration model, based on a discrete-time stochastic process of shock occurrences, with the cost model of a renewal process. The discounted cost model of condition-based maintenance proposed in this paper can take into account the nonlinearity of the cumulative damage process as well as the discounting effect on cost. The model is verified satisfactorily by comparing the present results with previous results. In addition, sensitivity analysis on the model variables shows that the more crucial the system, the more often preventive maintenance should be implemented; however, this tendency is reversed as the interest rate increases. The model has been applied to the armor units of rubble-mound breakwaters, and the parameters of the damage intensity function have been estimated through time-dependent prediction of the expected cumulative damage level obtained from the sample path method. In particular, it is confirmed that the shock occurrences can be treated as a discrete-time stochastic process by comparing the effects of their uncertainty on the expected cumulative damage level against the homogeneous Poisson process and the doubly stochastic Poisson process, which are continuous-time stochastic processes. It is also seen that the stochastic process of cumulative damage depends directly on the design conditions, so the preventive maintenance plan varies with them. Finally, the optimal period and scale of preventive maintenance for the armor units can be quantitatively determined from the failure limits, the level of importance of the structure, and the interest rate.
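
A minimal sketch of discounted condition-based maintenance in discrete time (illustrative, not the paper's derivation): cumulative damage accrues in shocks each period; crossing a preventive threshold triggers a repair at cost c_p, crossing the failure limit costs c_f, and all costs are discounted so thresholds can be compared. Shock sizes, costs, and the discount rate are hypothetical.

```python
import random

def discounted_cost(threshold, limit=10.0, c_p=1.0, c_f=8.0,
                    rate=0.03, periods=500, seed=13):
    rng, damage, cost = random.Random(seed), 0.0, 0.0
    for k in range(periods):
        damage += rng.expovariate(2.0)   # random shock damage this period
        disc = (1.0 + rate) ** -(k + 1)  # discount factor for period k+1
        if damage >= limit:              # failure limit: corrective action
            cost += c_f * disc
            damage = 0.0
        elif damage >= threshold:        # condition-based preventive repair
            cost += c_p * disc
            damage = 0.0
        # below the threshold: do nothing this period
    return cost

for th in (4.0, 6.0, 8.0):               # compare candidate thresholds
    print(th, round(discounted_cost(th), 2))
```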

The Assessing Comparative Study for Statistical Process Control of Software Reliability Model Based on Logarithmic Learning Effects (대수형 학습효과에 근거한 소프트웨어 신뢰모형에 관한 통계적 공정관리 비교 연구)

  • Kim, Kyung-Soo; Kim, Hee-Cheul
    • Journal of Digital Convergence / v.11 no.12 / pp.319-326 / 2013
  • There are many software reliability models that are based on the times of occurrence of errors during the debugging of software. Software error detection techniques may be known in advance, but this paper compares models in which the factors influencing automatically found errors and the learning factors, gained from prior experience, allow the testing manager to pinpoint the error factors precisely. It is shown that asymptotic likelihood inference is possible for software reliability models based on infinite failure models and non-homogeneous Poisson processes (NHPP). Statistical process control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to the improvement of software reliability; control charts are widely used for software process control in the software industry. In this paper, we propose a control mechanism based on an NHPP whose mean value function has a logarithmic hazard learning-effect property.
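
A sketch of such a control mechanism (assuming a Musa-Okumoto-style logarithmic mean value function as a stand-in for the paper's logarithmic learning-effect model; all parameters are hypothetical): since N(t) is Poisson with mean m(t), approximate 3-sigma control limits for the cumulative failure count are m(t) ± 3·sqrt(m(t)).

```python
import math

def m_log(t, lam0, theta):
    """Logarithmic NHPP mean value function: m(t) = ln(1 + lam0*theta*t)/theta."""
    return math.log(1.0 + lam0 * theta * t) / theta

def control_limits(t, lam0=2.0, theta=0.05):
    m = m_log(t, lam0, theta)
    s = math.sqrt(m)                  # Poisson standard deviation at time t
    return max(0.0, m - 3 * s), m, m + 3 * s

for t in (10, 50, 100):               # chart points for the failure count
    lcl, cl, ucl = control_limits(t)
    print(f"t={t}: LCL={lcl:.1f} CL={cl:.1f} UCL={ucl:.1f}")
```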