• Title/Summary/Keyword: Mean Value Function

Search Results: 599

Optimization of Process Capability Index by Loss Function of Taguchi (다구찌의 손실함수(損失函數)를 이용한 공정능력지수(工程能力指數)의 최적화(最適化)에 관한 연구(硏究))

  • Gu, Bon-Cheol;Song, Dan-Il
    • Journal of Korean Society for Quality Management
    • /
    • v.20 no.1
    • /
    • pp.80-90
    • /
    • 1992
  • In industries, the capability indices $C_p$ and $C_{pk}$ can be used to provide measures of process potential capability and performance, respectively. The new approach advocated by Taguchi in quality control overcomes some problems found in other approaches to preventive management activities. Taguchi emphasizes a loss function to improve product quality from the customer's point of view. The preceding concept of capability indices is not a rational measure of quality when the process mean is not equal to the target value. The Taguchi approach is said to be more reasonable than the others in quality evaluation because of his loss function. However, the capability indices $C_{pm}{^+}$ and $C_{pn}$ based on Taguchi's loss function consider only the acceptance cost for deviation from the target value within the specification limits. In other words, they do not include the rejection cost for nonconforming items that fail to fall within the specification limits.

  • PDF
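The contrast the abstract draws can be illustrated with a small sketch. The paper's own $C_{pm}{^+}$ and $C_{pn}$ definitions are not reproduced here; shown instead are the standard $C_p$, $C_{pk}$, and the Taguchi-based $C_{pm}$, which builds the quadratic-loss deviation from target into the denominator:

```python
import math

def c_p(usl, lsl, sigma):
    # Potential capability: specification width over six sigma.
    return (usl - lsl) / (6.0 * sigma)

def c_pk(usl, lsl, mu, sigma):
    # Performance capability: penalizes an off-center process mean.
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

def c_pm(usl, lsl, mu, sigma, target):
    # Taguchi-based index: replaces sigma with the root mean square
    # deviation from the target, mirroring the quadratic loss function.
    tau = math.sqrt(sigma ** 2 + (mu - target) ** 2)
    return (usl - lsl) / (6.0 * tau)
```

When the process mean sits on the target, $C_{pm}$ reduces to $C_p$; as the mean drifts, $C_{pm}$ drops even if the spread is unchanged, which is exactly the rationality argument the abstract makes.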

The Assessing Comparative Study for Statistical Process Control of Software Reliability Model Based on Rayleigh and Burr Type (Rayleigh형과 Burr형 NHPP 소프트웨어 신뢰모형에 관한 통계적 공정관리 접근방법 비교연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.2
    • /
    • pp.1-11
    • /
    • 2014
  • Software reliability in the software development process is an important issue. Software process improvement helps produce a reliable software product. In this field, SPC (statistical process control) is a method of process management through the application of statistical analysis, which involves defining, measuring, controlling, and improving processes. The proposed procedure evaluates the parameters of the mean value function, and hence its values at various inter-failure times, to develop a relevant time control chart. In this paper, a control mechanism is proposed based on time-between-failures observations using the properties of the Rayleigh and Burr distributions, within the framework of the non-homogeneous Poisson process (NHPP). The proposed model is reliable in terms of its hazard function and is efficient enough in this area to be used as an alternative to existing models. This study should help software developers identify failure modes in advance, to some extent, using prior knowledge of the software and its various intended functions.
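As a sketch of the kind of mean value function such an NHPP model uses (the paper's exact Rayleigh/Burr parameterizations may differ; this is a common Rayleigh-type form with assumed parameters `a` and `sigma`):

```python
import math

def rayleigh_mvf(t, a, sigma):
    # Mean value function m(t) of a Rayleigh-type NHPP: the expected
    # cumulative number of software failures observed by time t.
    return a * (1.0 - math.exp(-t ** 2 / (2.0 * sigma ** 2)))

def rayleigh_intensity(t, a, sigma):
    # Failure intensity lambda(t) = dm/dt, the quantity from which
    # control limits for a time-between-failures chart are derived.
    return a * (t / sigma ** 2) * math.exp(-t ** 2 / (2.0 * sigma ** 2))
```

m(t) rises monotonically toward the expected total fault content `a`, and evaluating it at successive inter-failure times is what populates the time control chart the abstract describes.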

A Study on Test Coverage for Software Reliability Evaluation (소프트웨어 신뢰도 평가를 위한 테스트 적용범위에 대한 연구)

  • Park, Jung-Yang;Park, Jae-Heung;Park, Su-Jin
    • The KIPS Transactions:PartD
    • /
    • v.8D no.4
    • /
    • pp.409-420
    • /
    • 2001
  • Recently, a new approach to evaluating software reliability, one of the important attributes of a software system, during testing has been devised. This approach utilizes test coverage information. The coverage-based software reliability growth models that recently appeared in the literature are first reviewed and classified into two classes. Inherent problems of each of the two classes are then discussed, and their validity is empirically investigated. In addition, a new mean value function in coverage and a heuristic procedure for selecting the best coverage are proposed.

  • PDF
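The paper's new mean value function and selection heuristic are its own; as a rough sketch of the general shape such models take, here is a common exponential-type mean value function written in coverage rather than time, plus an R²-style fit measure one might use to rank candidate coverage metrics (function names and the ranking criterion are assumptions, not the paper's method):

```python
import math

def mvf_in_coverage(c, a, b):
    # Exponential-type mean value function expressed in test coverage
    # c (0 <= c <= 1) rather than time: the expected cumulative number
    # of faults detected when coverage level c is reached.
    return a * (1.0 - math.exp(-b * c))

def r_squared(observed, predicted):
    # Goodness-of-fit measure; a simple heuristic for picking the
    # "best" coverage is to fit the model under each candidate
    # coverage measure and keep the one with the largest R^2.
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```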

Reliability-based design optimization using reliability mapping functions

  • Zhao, Weitao;Shi, Xueyan;Tang, Kai
    • Structural Engineering and Mechanics
    • /
    • v.62 no.2
    • /
    • pp.125-138
    • /
    • 2017
  • Reliability-based design optimization (RBDO) is a powerful tool for design optimization when considering probabilistic characteristics of design variables. However, it is often computationally intensive because of the coupling of reliability analysis and cost minimization. In this study, the concept of a reliability mapping function is defined based on the relationship between the reliability index obtained by the mean value first-order reliability method and the failure probability obtained by an improved response surface method. The double loop involved in classical RBDO can be converted into a single loop by using the reliability mapping function. Since the computational effort of the mean value first-order reliability method is minimal, RBDO using reliability mapping functions should be highly efficient. Engineering examples are given to demonstrate the efficiency and accuracy of the proposed method. Numerical results indicate that the proposed method has accuracy similar to that of Monte Carlo simulation and markedly reduces the computational effort.
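The cheap inner computation the mapping function builds on can be sketched as follows: the mean value first-order reliability method linearizes the limit state at the mean point and takes the reliability index as mean over standard deviation (finite-difference gradients, independent variables assumed; the paper's mapping from this index to the response-surface failure probability is not reproduced here):

```python
import math

def mvform_beta(g, mu, sigma, h=1e-6):
    # Mean value first-order reliability method: linearize the limit
    # state function g(x) at the mean point mu and estimate the
    # reliability index beta = E[g] / std[g].
    g0 = g(mu)
    grads = []
    for i in range(len(mu)):
        x = list(mu)
        x[i] += h
        grads.append((g(x) - g0) / h)  # finite-difference gradient
    var_g = sum((gi * si) ** 2 for gi, si in zip(grads, sigma))
    return g0 / math.sqrt(var_g)
```

Because each call costs only a handful of limit-state evaluations, correcting its index through a mapping to an accurate failure probability is what makes the single-loop scheme efficient.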

Determination of the Wear Limit to the Process Mean Shift Problem with Varying Product and Process Variance (생산량과 공정분산이 변하는 공정평균이동 문제의 마모한계 결정)

  • Lee, Do-Kyung
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.43 no.3
    • /
    • pp.95-100
    • /
    • 2020
  • Machines and facilities are physically or chemically degraded by continuous usage. One result of this degradation is the process mean shift; its representative cause is wear of the tool or machine. As the wear level increases, the non-conforming product cost and the quality loss cost increase simultaneously. Therefore, periodic preventive resetting of the process is necessary. The total cost consists of three items: the adjustment (or replacement) cost, the non-conforming cost for products outside the upper or lower specification limit, and the quality loss cost due to the difference between the process target value and the product characteristic value among conforming products. The problem of determining the adjustment period or wear limit that minimizes this total cost is called the 'process mean shift' problem. It is assumed that both specification limits are set and that the wear level can be observed directly. In this study, we propose a new model integrating the quality loss cost, process variance, and production volume, which previous studies have treated in separate fields. In particular, for the change in production volume according to the increase in wear level, we propose a generalized production quantity function g(w). This function can be applied to most processes, and we fitted g(w) into the model. The objective function of this model is the total cost per unit wear, and the decision variables are the wear limit and the initial process setting position that minimize it.
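A minimal sketch of the "total cost per unit wear" objective, under assumptions that are not the paper's: the mean drifts linearly with wear, only the reset cost and the Taguchi-type quadratic loss are counted (the non-conforming cost and the production function g(w) are omitted for brevity), and the integral is approximated by a Riemann sum:

```python
def cost_per_unit_wear(w, reset_cost, drift_rate, target, k, step=0.001):
    # Hypothetical sketch: the process mean starts on target and drifts
    # at drift_rate per unit of wear, accruing quadratic quality loss
    # k * (mean - target)^2 until the tool is reset at wear limit w.
    loss = 0.0
    x = 0.0
    while x < w:
        mean = target + drift_rate * x  # drifted process mean at wear x
        loss += k * (mean - target) ** 2 * step
        x += step
    return (reset_cost + loss) / w
```

A small wear limit pays the reset cost too often; a large one accumulates too much quality loss, so the objective is U-shaped in w, which is what makes an optimal wear limit exist.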

A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon;Lee, Young-Jik
    • ETRI Journal
    • /
    • v.17 no.1
    • /
    • pp.11-22
    • /
    • 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning speed. It can also suppress the over-specialization for training patterns that occurs in an algorithm based on a cross-entropy cost function, which markedly reduces learning time. In a similar way to the cross-entropy function, our new function accelerates the learning speed of the EBP algorithm by allowing an output node of the MLP to generate a strong error signal when it is far from the desired value. Moreover, it prevents over-specialization to the training patterns by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits in the CEDAR [1] database, the proposed method attained 100% correct classification on the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. Also, our method shows a mean-squared error of 0.627 on the test patterns, which is superior to the error of 0.667 of the cross-entropy method. These results demonstrate that our new method excels others in learning speed as well as in generalization.

  • PDF
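The two baseline error signals the paper's modified function sits between can be sketched directly (sigmoid output node assumed; the paper's own interpolating function is not reproduced):

```python
def mse_delta(y, d):
    # Output-node error signal of standard EBP (squared error with a
    # sigmoid output): the y * (1 - y) factor vanishes as the output
    # saturates, which slows learning for badly wrong outputs.
    return (y - d) * y * (1.0 - y)

def ce_delta(y, d):
    # Error signal under a cross-entropy cost: proportional to the raw
    # error, so it stays strong when y is far from d - fast learning,
    # but it keeps pushing outputs that are already nearly correct,
    # which is the over-specialization the abstract mentions.
    return y - d
```

For a saturated wrong output (y = 0.99, d = 0), the squared-error signal is tiny while the cross-entropy signal is nearly 1; a modified function seeks the strong far-from-target signal of the latter while weakening the signal near the target like the former.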

Statistical properties of the maximum elastoplastic story drift of steel frames subjected to earthquake load

  • Li, Gang
    • Steel and Composite Structures
    • /
    • v.3 no.3
    • /
    • pp.185-198
    • /
    • 2003
  • The concept of performance-based seismic design has gradually been accepted by the earthquake engineering profession recently; the cost-effectiveness criterion is one of its most important principles, and more attention is paid to structural performance at the inelastic stage. Since there are many uncertainties in seismic design, reliability analysis is a major task in performance-based seismic design. However, structural reliability analysis may be very costly and time consuming because the limit state function is usually a highly nonlinear implicit function of the basic design variables, especially for complex large-scale structures requiring dynamic and nonlinear analysis. Understanding the statistical properties of the structural inelastic deformation, which is the aim of the present paper, is helpful for developing an efficient approximate approach to reliability analysis. The present paper studies the statistical properties of the maximum elastoplastic story drift of steel frames subjected to earthquake load. The randomness of earthquake load, dead load, live load, steel elastic modulus, yield strength, and structural member dimensions is considered. Possible probability distributions for the maximum story drift are evaluated using the K-S test. The results show that the choice of the probability distribution for the maximum elastoplastic story drift of steel frames is related to its mean value. When the mean drift is small (less than 0.3%), an extreme value type I distribution is the best choice. However, for large drifts (more than 0.35%), an extreme value type II distribution is best.
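The distribution screening the abstract describes rests on the one-sample K-S statistic, which can be sketched in a few lines (the Gumbel CDF shown is the extreme value type I candidate; parameters would be fitted to the drift sample in practice):

```python
import math

def ks_statistic(sample, cdf):
    # One-sample Kolmogorov-Smirnov statistic: the largest gap between
    # the empirical CDF of the sample and a candidate distribution's CDF.
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def gumbel_cdf(x, mu, beta):
    # Extreme value type I (Gumbel) CDF, the candidate the abstract
    # favors for small mean drifts.
    return math.exp(-math.exp(-(x - mu) / beta))
```

Computing the statistic for each candidate family and keeping the smallest (or the one passing at the chosen significance level) is the standard selection procedure the K-S test supports.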

An empirical study on the economies of scale of hospital service in korea (우리나라 병원의 규모의 경제에 관한 연구)

  • 전기홍;조우현;김양균
    • Health Policy and Management
    • /
    • v.4 no.1
    • /
    • pp.107-122
    • /
    • 1994
  • Many alternatives have been discussed to reduce medical expenditure and to use medical resources effectively, and many studies of economies of scale have been done over the last several decades. This study analyzes the relationship between the number of beds and the mean expense per hospitalization day in Korea. A cost function model was identified, and we sought the minimum optimal size with the cheapest mean expense per hospitalization day. The results are as follows. 1. In the cost function model, (the number of beds)$^{2}$, the number of personnel, productivity, and training-institution status are the factors that statistically influence the mean expense. 2. By univariate analysis, the mean expense proved to be smallest at the level of 150-200 beds. The breakdown of the components of expenses shows that the mean labor cost differs markedly from the mean material and administration costs, and that hospitals with 150-200 beds also have the minimal expense. The mean expense rises dramatically in hospitals with 450 beds or more. 3. When the other conditions are held constant, the multiple regression analysis of the mean expense per adjusted hospitalization day shows that the minimum optimal size with the cheapest expense is a hospital with 191 beds, and a hospital with 230 beds has the lowest mean labor cost. The material and administration costs are not influenced by hospital size. This research has limitations in measuring the variables that influence hospital expenses, in estimating hospital output by the number of beds, in considering outpatient cost, and in securing representativeness because many hospitals did not respond to the research questionnaire. Nevertheless, identifying the number of beds with the cheapest expense per hospitalization day is valuable and helpful for the development of health policy.

  • PDF
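The inclusion of (number of beds)² in the cost model is what produces a U-shaped mean expense curve with an interior minimum; a minimal sketch with hypothetical coefficients (the study's actual regression coefficients are not reported in the abstract, only the resulting optimum near 191 beds):

```python
def optimal_beds(b1, b2):
    # For a U-shaped cost model  cost(beds) = b0 + b1*beds + b2*beds^2
    # with b2 > 0, the mean expense is minimized at the vertex of the
    # parabola, -b1 / (2 * b2); b0 shifts the curve without moving it.
    return -b1 / (2.0 * b2)
```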

Mean Square Response Analysis of the Tall Building to Hazard Fluctuating Wind Loads (재난변동풍하중을 받는 고층건물의 평균자승응해석)

  • Oh, Jong Seop;Hwang, Eui Jin;Ryu, Ji Hyeob
    • Journal of Korean Society of Disaster and Security
    • /
    • v.6 no.3
    • /
    • pp.1-8
    • /
    • 2013
  • Based on random vibration theory, a procedure for calculating the dynamic response of a tall building to time-dependent random excitation is developed. In this paper, the fluctuating along-wind load is assumed to be a time-dependent random process described by a time-independent random process with a deterministic function over a short duration of time. Using the deterministic function A(t)=1-exp($-{\beta}t$), the absolute value square of the oscillatory function is represented following the authors' earlier studies. The time-dependent random response spectral density is represented by using the absolute value square of the oscillatory function and the equivalent wind load spectrum of Solari. In particular, the dynamic mean square response of a tall building subjected to fluctuating wind loads is derived as an analytic function by Cauchy's integral formula and the residue theorem. As analysis examples, the numerical integration results are compared with the analytic-function results for the dynamic properties of the tall building.

The Study of Cognitive Functional Difference and EEG Spectrum Difference among Sasang Constitutions (사상체질에 따른 뇌파, 학습능력 차이에 관한 연구)

  • Kim, Seok-Hwan;Choi, Kang-Wook;Lee, Sang-Ryong;Jung, In-Chul
    • Journal of Oriental Neuropsychiatry
    • /
    • v.18 no.2
    • /
    • pp.89-100
    • /
    • 2007
  • Objective : The purpose of this study is to examine the relationship between cognitive function and Sasang constitution by analyzing the EEG status of company workers in Cheon-An. Method : 59 company workers were tested with a cognitive-assessment EEG program and the Questionnaire for the Sasang Constitution Classification II. They were sorted by Sasang constitution, and we analyzed its correlation with cognitive-assessment scores and EEG data. Results : 1. In mean active EEG rhythm of Alpha, H-Beta, and Gamma waves, there were no significant differences among the Sasang constitutions. 2. In mean success, error, concentration, response, workload, and left/right brain activity scores, there were no significant differences among the Sasang constitutions. 3. In mean active EEG rhythm of Theta, SMR, and M-Beta waves, Soyangin(少陽人)'s values were significantly higher than those of Taeumin(太陰人). 4. In mean cognitive strength score, Soyangin(少陽人)'s value was significantly higher than that of Taeumin(太陰人). Conclusion : In conclusion, Sasang constitutional difference has no relevance to cognitive abilities. However, Soyangin(少陽人) showed higher mean active EEG rhythm of Theta, SMR, and M-Beta waves than Taeumin(太陰人), and also showed a higher mean cognitive strength score than Taeumin(太陰人).

  • PDF