• Title/Summary/Keyword: loss probability

Search Results: 636

Flow Control with Hysteresis effect in ATM Network (ATM망의 히스테리시스 특성을 이용한 흐름제어기법)

  • 정상국;진용옥
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.31A no.9
    • /
    • pp.10-17
    • /
    • 1994
  • In this paper, a priority scheduling and a flow control algorithm with a hysteresis effect are proposed for high-speed networks. A mathematical model of the flow control is formulated, and the cell transition probability is derived from it. The performance of the proposed algorithm is then analyzed by computer simulation. According to the simulation results, the priority scheduling and the flow control with hysteresis improve the cell loss probability by 0.061 and the average delay by 100 ms compared with a single-threshold scheme.

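The dual-threshold behavior behind a hysteresis effect can be sketched in a few lines of Python; the buffer model, threshold values, and throttling action below are illustrative assumptions, not the algorithm actually proposed in the paper.

```python
# Hysteresis (dual-threshold) flow control: throttle the source once buffer
# occupancy crosses HIGH, and release it only after occupancy falls below LOW.
# The gap between the two thresholds avoids the rapid on/off oscillation that
# a single threshold can cause.  All constants here are illustrative.

HIGH, LOW, BUFFER_SIZE = 80, 40, 100

class HysteresisFlowControl:
    def __init__(self):
        self.occupancy = 0
        self.throttled = False
        self.lost = 0

    def on_cell_arrival(self):
        if self.occupancy >= BUFFER_SIZE:
            self.lost += 1                 # cell loss on buffer overflow
        else:
            self.occupancy += 1
        if self.occupancy >= HIGH:
            self.throttled = True          # ask the source to slow down

    def on_cell_departure(self):
        if self.occupancy > 0:
            self.occupancy -= 1
        if self.occupancy <= LOW:
            self.throttled = False         # resume normal sending
```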

A Study on Process Capability Index using Reflected Normal Loss Function (역정규 손실함수를 이용한 공정능력지수에 관한 연구)

  • 정영배;문혜진
    • Journal of Korean Society for Quality Management
    • /
    • v.30 no.3
    • /
    • pp.66-78
    • /
    • 2002
  • Process capability indices are used in industry as indicators of process capability for SPC in quality assurance systems. From the standpoint of customer satisfaction, process capability indices that use loss functions to capture the economic loss of processes deviating from the target give an adequate representation of the customer's perception of quality, and the loss function has therefore become increasingly important in quality assurance. Taguchi uses a modified form of the quadratic loss function to show that proximity to the target must be considered when assessing quality. However, this traditional quadratic loss function is inadequate for assessing quality and quality improvement, since different processes carry different economic consequences in manufacturing; a more flexible approach to constructing the loss function is therefore desirable. In this paper, we introduce an easily understood loss function based on a reflection of the probability density function of the normal distribution. This Reflected Normal Loss Function can accommodate an asymmetric as well as a symmetric loss around the target. Instead of relying on the process variation alone, we propose a new capability index, CpI, built on the Reflected Normal Loss Function so that it accurately reflects the losses associated with the process, and we compare CpI with the classical indices $C_p$, $C_{pk}$, $C_{pm}$ and $C_{pm}^{+}$.
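
For readers unfamiliar with the loss function named above, the sketch below evaluates a reflected normal loss in one common parameterization, $L(x)=K[1-\exp(-(x-T)^2/(2\gamma^2))]$, alongside the quadratic Taguchi loss; the constants and sample values are invented, and the paper's asymmetric variant and the index CpI may be parameterized differently.

```python
import math

def reflected_normal_loss(x, target, gamma, k_max):
    """Reflected-normal loss: bounded by k_max, saturating far from the target."""
    return k_max * (1.0 - math.exp(-((x - target) ** 2) / (2.0 * gamma ** 2)))

def quadratic_loss(x, target, k):
    """Classical Taguchi quadratic loss, unbounded as x moves away from the target."""
    return k * (x - target) ** 2

# Illustrative comparison: the quadratic loss keeps growing, while the
# reflected normal loss levels off at k_max for observations far off target.
T, GAMMA, K_MAX, K = 10.0, 1.5, 100.0, 5.0
for x in (10.0, 11.0, 13.0, 20.0):
    print(x, round(quadratic_loss(x, T, K), 1),
          round(reflected_normal_loss(x, T, GAMMA, K_MAX), 1))
```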

Estimation and Sensitivity Analysis of Kinetic Parameters for Plasmid Stability in Continuous Culture of a Recombinant Escherichia coli Harboring trp-operon Plasmid

  • NAM, SOO WAN;BYUNG KWAN KIM;JUNG HOE KIM
    • Journal of Microbiology and Biotechnology
    • /
    • v.4 no.1
    • /
    • pp.13-19
    • /
    • 1994
  • A model equation to describe plasmid instability in recombinant Escherichia coli fermentation is proposed. The equation allows one to easily estimate the two model parameters: (1) the difference in the specific growth rates between plasmid-free cells and plasmid-harboring cells ($\delta$), and (2) the probability of plasmid loss by plasmid-harboring cells ($\rho$). The estimated values of $\delta$ and $\rho$ were in the ranges of 0.02-0.07 and $10^{-3}$-$10^{-5}$, respectively, and were strongly dependent on the dilution rate. As another parameter, the ratio of the specific growth rates of plasmid-free cells and plasmid-harboring cells ($\alpha$) was calculated, and it showed its highest value of 1.28 at the lowest dilution rate examined in this work, 0.075 $hr^{-1}$. Sensitivity analyses on the estimates of $\delta$ and $\rho$ showed that the growth rate difference ($\delta$) affected plasmid instability more seriously than the probability of plasmid loss ($\rho$). Furthermore, the pronounced instability of the plasmid at a low dilution rate could be explained by the high values of $\alpha$ and $\rho$.

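A hedged numerical sketch of the kind of two-population balance the abstract refers to: plasmid-harboring cells segregate plasmid-free daughters with probability rho per division, plasmid-free cells grow faster by delta, and both are washed out at the dilution rate. The equations and parameter values below are a generic textbook formulation, not necessarily the exact model equation of the paper.

```python
# Two-population chemostat sketch: x_plus carries the plasmid, x_minus does not.
# rho  : probability that a division of a plasmid-harboring cell yields a
#        plasmid-free daughter (segregational loss)
# delta: growth-rate advantage of plasmid-free over plasmid-harboring cells
# The paper estimates delta ~ 0.02-0.07 and rho ~ 1e-3 to 1e-5 depending on the
# dilution rate; the numbers below are merely illustrative.

mu_plus = 0.30          # specific growth rate of plasmid-harboring cells (1/h)
delta   = 0.05          # mu_minus - mu_plus (1/h)
rho     = 1e-3          # plasmid-loss probability per division
D       = 0.15          # dilution rate (1/h)
dt, hours = 0.01, 200

x_plus, x_minus = 0.99, 0.01           # initial biomass (arbitrary units)
for _ in range(int(hours / dt)):
    mu_minus = mu_plus + delta
    dxp = ((1.0 - rho) * mu_plus - D) * x_plus
    dxm = rho * mu_plus * x_plus + (mu_minus - D) * x_minus
    x_plus += dxp * dt
    x_minus += dxm * dt

print("plasmid-harboring fraction after 200 h:", x_plus / (x_plus + x_minus))
```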

Optimum Reserves in Vietnam Based on the Approach of Cost-Benefit for Holding Reserves and Sovereign Risk

  • TRAN, Thinh Vuong;LE, Thao Phan Thi Dieu
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.7 no.3
    • /
    • pp.157-165
    • /
    • 2020
  • This paper estimates the optimum level of reserves in Vietnam based on the cost-benefit approach to holding reserves and on sovereign risk, one of the characteristics of developing countries. The cost of reserves is the opportunity cost of holding them. The benefit of reserves is avoiding the loss caused by the country's default when there are no reserves to finance external debt payments. The optimum level of reserves is found by minimizing the sum of the opportunity cost and the loss from default weighted by the probability of default. Using the HP filter method to calculate the loss from default, ARDL regression for the risk-premium model, and the VND lending rate as a proxy for the opportunity cost, together with Vietnamese economic data for the period 2005-2017, the empirical results show that the optimum level of reserves in Vietnam is almost always higher than the actual reserves during the research period, except at Q3/2008 and at the last point of the research period, Q4/2017. Therefore, Vietnam should continue to increase reserves for safety, but it does not need to accelerate the pace of accumulation. In addition, monitoring the optimum level of reserves is necessary to keep the actual reserves at a reasonable level.
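
A schematic of the cost-benefit trade-off described above: total expected cost = opportunity cost of holding reserves + probability of default x loss given default, minimized over the reserve level. The functional form of the default probability and all numbers below are illustrative assumptions; the paper estimates these pieces from Vietnamese data (HP filter, ARDL risk-premium model).

```python
import math

def expected_total_cost(reserves, opp_cost_rate, default_loss, p0, sensitivity):
    """Opportunity cost of holding reserves plus expected loss from default.

    The default probability is modeled here as decreasing in reserves with a
    simple logistic form -- purely an illustrative assumption.
    """
    p_default = p0 / (1.0 + math.exp(sensitivity * reserves))
    return opp_cost_rate * reserves + p_default * default_loss

# Grid search for the reserve level that minimizes total expected cost.
OPP_RATE, LOSS, P0, SENS = 0.05, 400.0, 0.8, 0.08   # illustrative values
levels = [float(i) for i in range(0, 301)]
optimum = min(levels, key=lambda r: expected_total_cost(r, OPP_RATE, LOSS, P0, SENS))
print("optimum reserves (illustrative units):", optimum)
```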

A Packet Loss Concealment Algorithm Robust to Burst Packet Losses for G.729 (연속적인 프레임 손실에 강인한 G.729 프레임 손실 은닉 알고리즘)

  • Cho, Choong-Sang;Lee, Young-Han;Kim, Hong-Kook
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.307-310
    • /
    • 2007
  • In this paper, a packet loss concealment (PLC) algorithm for CELP-type speech coders is proposed to improve the quality of decoded speech under burst packet loss conditions. The proposed algorithm is based on recovering the voiced excitation using an estimate of the voicing probability, and on generating random excitation by permuting the previously decoded excitation. The voicing probability is estimated from the correlation of the previous correctly decoded excitation at the pitch lag. The proposed algorithm is implemented as a PLC algorithm for G.729, and its performance is compared with the PLC employed in G.729 by means of the perceptual evaluation of speech quality (PESQ) and an A-B preference test under random and burst packet losses at rates of 3% and 5%. It is shown that the proposed algorithm provides better speech quality than the PLC of G.729, especially under burst packet losses.

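A rough sketch of the two ingredients named in the abstract: a voicing probability estimated from the normalized correlation of the last correctly decoded excitation at the pitch lag, and a concealment excitation built by mixing a pitch-repeated (voiced) component with a randomly permuted (unvoiced) copy of past excitation. Everything here, including how the two components are mixed, is an illustrative reading, not the actual G.729 PLC code.

```python
import math
import random

def voicing_probability(excitation, pitch_lag):
    """Normalized correlation of past excitation with itself one pitch lag back."""
    a = excitation[pitch_lag:]
    b = excitation[:-pitch_lag]
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b)) or 1.0
    return max(0.0, min(1.0, num / den))

def conceal_frame(past_excitation, pitch_lag, frame_len):
    """Build concealment excitation as a voiced/unvoiced mixture for one lost frame."""
    p_v = voicing_probability(past_excitation, pitch_lag)
    # Voiced part: repeat the last pitch period of the decoded excitation.
    voiced = [past_excitation[-pitch_lag + (i % pitch_lag)] for i in range(frame_len)]
    # Unvoiced part: random permutation of the most recent decoded excitation.
    unvoiced = random.sample(list(past_excitation[-frame_len:]), frame_len)
    return [p_v * v + (1.0 - p_v) * u for v, u in zip(voiced, unvoiced)]
```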

Intensity measure-based probabilistic seismic evaluation and vulnerability assessment of ageing bridges

  • Yazdani, Mahdi;Jahangiri, Vahid
    • Earthquakes and Structures
    • /
    • v.19 no.5
    • /
    • pp.379-393
    • /
    • 2020
  • The purpose of this study is first to evaluate the seismic behavior of ageing arch bridges using intensity measure-based demand and the DCFD format, referred to as the fragility-hazard format, and then to investigate their seismic vulnerability. Analytical models are created for bridges with different features, and these models are subjected to Incremental Dynamic Analysis (IDA) using a set of 22 earthquake records. The hazard curve and the IDA results are used to evaluate the return period of exceeding the limit states in the IM-based probabilistic performance-based context. Subsequently, the fragility-hazard format is used to assess the factored demand, the factored capacity, and the ratio of factored demand to factored capacity of the models with respect to different performance objectives. Finally, vulnerability curves are obtained for the investigated bridges in terms of the loss ratio. The results reveal that decreasing the span length of the unreinforced arch bridges increases the return period of exceeding the various limit states and the factored capacity, and decreases the displacement demand, the probability of failure, the factored demand, the factored demand to factored capacity ratio, the loss ratio, and the seismic vulnerability. It follows that the probability of needing rehabilitation increases with the span length of the models.
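
For readers unfamiliar with the fragility side of this workflow, the sketch below fits a lognormal fragility curve to IM capacities obtained from IDA (one capacity value per ground-motion record at a given limit state) and evaluates the probability of exceedance at a chosen intensity. It is a generic illustration of the standard fragility-fitting step, not the paper's DCFD/fragility-hazard computation, and the capacity values are invented.

```python
import math
from statistics import mean, stdev

def lognormal_cdf(x, mu_ln, beta):
    """P(capacity <= x) for a lognormal capacity with median exp(mu_ln) and dispersion beta."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu_ln) / (beta * math.sqrt(2.0))))

# IM capacities (e.g., Sa in g) at which each of 22 records first pushes the
# bridge past a limit state in IDA -- purely invented numbers for illustration.
capacities = [0.41, 0.55, 0.38, 0.62, 0.47, 0.51, 0.44, 0.58, 0.35, 0.49, 0.53,
              0.60, 0.46, 0.39, 0.57, 0.43, 0.52, 0.48, 0.36, 0.64, 0.45, 0.50]
ln_caps = [math.log(c) for c in capacities]
mu_ln, beta = mean(ln_caps), stdev(ln_caps)

# Fragility: probability of exceeding the limit state given IM = 0.5 g.
print("P(exceedance | Sa = 0.5 g) =", round(lognormal_cdf(0.5, mu_ln, beta), 3))
```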

A call admission control in ATM networks using approximation technique for QOS estimation (ATM 망에서의 통화품질 평가를 위한 근사화 기법과 이를 이용한 호 수락 제어)

  • 안동명;한덕찬
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.9A
    • /
    • pp.2184-2196
    • /
    • 1998
  • Admission control is one of the most important congestion control mechanisms; it is executed at the call setup phase and regulates traffic entering the network in a preventive way. An efficient QOS evaluation or bandwidth estimation method is required so that call admission can be decided in real time. In this paper, we propose a computationally simple approximation method for estimating the cell loss probability and the mean cell delay for admission control of both delay-sensitive and loss-sensitive calls. A mixed-input queueing system, in which a new call is combined with the existing traffic, is used as the queueing model for QOS estimation. Traffic parameters are also suggested to characterize both the new call and the existing traffic. The aggregate traffic is approximated by a renewal process with these traffic parameters, and the mean delay and cell loss probability are then determined using appropriate approximation formulas. The accuracy of this approximation is examined by comparing its results with exact analyses or simulation results for various mixed-input queueing systems. Based on this QOS estimation method, a call admission control scheme that is traffic independent and computable in real time is proposed.

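A toy version of the admission decision outlined above: characterize the aggregate of the existing traffic plus the new call by a few parameters, approximate the cell loss probability with a closed-form formula, and admit the call only if the estimate stays within the QoS target (the mean-delay check is omitted here). The Gaussian loss approximation used below is a common textbook stand-in, not the renewal-process approximation derived in the paper.

```python
import math

def q_function(x):
    """Tail probability of a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def estimate_clp(mean_rate, rate_variance, link_capacity):
    """Gaussian approximation: probability that the aggregate rate exceeds capacity."""
    return q_function((link_capacity - mean_rate) / math.sqrt(rate_variance))

def admit_call(existing, new_call, link_capacity, clp_target):
    """existing / new_call are (mean rate, rate variance) pairs in cells/s."""
    mean_rate = existing[0] + new_call[0]
    rate_var = existing[1] + new_call[1]          # sources assumed independent
    return estimate_clp(mean_rate, rate_var, link_capacity) <= clp_target

# Example: accept or reject a new call against a 10^-6 cell-loss target.
print(admit_call(existing=(300_000.0, 4.0e8), new_call=(20_000.0, 5.0e7),
                 link_capacity=353_000.0, clp_target=1e-6))
```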

Determination of Control Limits of Conditional Variance Investigation: Application of Taguchi's Quality Loss Concept (조건부 차이조사의 관리한계 결정: 다구찌 품질손실 개념의 응용)

  • Pai, Hoo Seok;Lim, Chae Kwan
    • Journal of Korean Society for Quality Management
    • /
    • v.49 no.4
    • /
    • pp.467-482
    • /
    • 2021
  • Purpose: The main theme of this study is to determine the optimal control limit for conditional variance investigation by a mathematical approach. With the determination approach presented in this study, only one parameter is needed to calculate the control limit required in a budgeting control system or standard costing system, where the limit could not previously be set in advance; this gives the approach a high degree of practical applicability. Methods: This study follows an analytical methodology based on the decision model of information economics, Bayesian probability theory, and Taguchi's quality loss function concept. Results: The control limit suggested by this study is ${\delta}\leq\frac{3}{2}(k+1)+\frac{2}{\frac{3}{2}(k+1)+\sqrt{\left\{\frac{3}{2}(k+1)\right\}^{2}+4}}$. Conclusion: The results of this study can contribute not only to the practice of variance investigation required in standard costing and budgeting systems, but also to all fields dealing with variance investigation, for example, quality control of intangible services, where tolerances (control limits) are difficult to specify unlike for tangible products, and internal information system audits, where materiality standards cannot be specified unlike in external accounting audits.
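
Reading the bound above as $\delta \leq a + 2/(a + \sqrt{a^2 + 4})$ with $a = \frac{3}{2}(k+1)$ (my reconstruction of the garbled formula in the source listing), the control limit depends only on the single parameter $k$, which is the practical appeal the abstract emphasizes. A small sketch:

```python
import math

def control_limit(k):
    """Control limit delta as a function of the single parameter k (reconstructed
    formula); algebraically a + 2/(a + sqrt(a^2 + 4)) equals (a + sqrt(a^2 + 4)) / 2."""
    a = 1.5 * (k + 1.0)
    return a + 2.0 / (a + math.sqrt(a * a + 4.0))

# Tabulate the limit for a few illustrative values of k.
for k in (0.5, 1.0, 2.0):
    print(k, round(control_limit(k), 4))
```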

Effects of electronic energy deposition on pre-existing defects in 6H-SiC

  • Liao, Wenlong;He, Huan;Li, Yang;Liu, Wenbo;Zang, Hang;Wei, Jianan;He, Chaohui
    • Nuclear Engineering and Technology
    • /
    • v.53 no.7
    • /
    • pp.2357-2363
    • /
    • 2021
  • Silicon carbide is widely used in radiation environments due to its excellent properties. However, under constant exposure to strong radiation, plenty of defects are generated, causing the material's performance to degrade or fail. In this paper, the two-temperature model (2T-MD) is used to explore the defect recovery process by applying electronic energy loss (Se) to a pre-damaged system. The effects of the defect concentration and of the applied electronic energy loss on the defect recovery process are investigated. The results demonstrate that almost no defect recovery takes place until the defect density in the damaged region, or the local defect density, is large enough, and that the probability of defect recovery increases with the defect concentration. Additionally, the results indicate that the defect recovery induced by swift heavy ions is mainly connected with the homogeneous recombination of carbon defects, while the probability of heterogeneous recombination depends mainly on the silicon defects.

Evaluation of a Solar Flare Forecast Model with Cost/Loss Ratio

  • Park, Jongyeob;Moon, Yong-Jae;Lee, Kangjin;Lee, Jaejin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.40 no.1
    • /
    • pp.84.2-84.2
    • /
    • 2015
  • There are probabilistic forecast models for solar flare occurrence, which can be evaluated by various skill scores (e.g., accuracy, critical success index, Heidke skill score, true skill score). Since these skill scores assume that the two types of forecast error (false alarm and miss) are equally important or constant, and thus do not take different user situations into account, they may be unrealistic. In this study, we evaluate a probabilistic flare forecast model (Lee et al. 2012) that uses sunspot groups and their area changes as a proxy for flux emergence. We calculate daily solar flare probabilities from 1996 to 2014 using this model. The overall frequencies are 61.08% (C), 22.83% (M), and 5.44% (X). The maximum probabilities computed by the model are 99.9% (C), 89.39% (M), and 25.45% (X), respectively. The skill scores are computed from contingency tables as a function of the forecast probability threshold, and the threshold giving the maximum skill score depends on the flare class and on the type of skill score. For the widely used critical success index, the probability threshold values for the contingency tables are 25% (C), 20% (M), and 4% (X). We use a value score with a cost/loss ratio, the relative importance of the two types of forecast error. We find that the forecast model has an effective range of cost/loss ratio for each flare class: 0.15-0.83 (C), 0.11-0.51 (M), and 0.04-0.17 (X), which also depends on the lifetime of the satellite. We expect that this study will provide a guideline for determining the probability threshold for space weather forecasts.

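The cost/loss value score referred to above is usually defined by comparing the expected expense of acting on the forecast with the expense of always or never acting and with a perfect forecast. A hedged sketch of that standard (Richardson-type) relative-value definition follows; whether the paper uses exactly this form is an assumption, and the contingency-table numbers are invented.

```python
def value_score(hits, misses, false_alarms, correct_negatives, cost_loss_ratio):
    """Relative economic value of a forecast for a user with cost/loss ratio alpha.

    Expenses are measured in units of the loss L, so taking protective action
    costs alpha and an unprotected event costs 1.
    """
    n = hits + misses + false_alarms + correct_negatives
    s = (hits + misses) / n                                 # climatological event frequency
    h = hits / (hits + misses)                              # hit rate
    f = false_alarms / (false_alarms + correct_negatives)   # false-alarm rate
    alpha = cost_loss_ratio

    e_forecast = f * (1 - s) * alpha + (1 - h) * s + h * s * alpha
    e_climate = min(alpha, s)               # best of "always act" / "never act"
    e_perfect = s * alpha
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Invented contingency table, evaluated over a range of cost/loss ratios; the
# value score is positive only inside the forecast's effective cost/loss range.
for alpha in (0.05, 0.2, 0.5, 0.8):
    print(alpha, round(value_score(hits=900, misses=600, false_alarms=700,
                                   correct_negatives=4800,
                                   cost_loss_ratio=alpha), 3))
```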