• Title/Summary/Keyword: parameter sets

Search results: 335

Length-biased Rayleigh distribution: reliability analysis, estimation of the parameter, and applications

  • Kayid, M.; Alshingiti, Arwa M.; Aldossary, H.
    • International Journal of Reliability and Applications, v.14 no.1, pp.27-39, 2013
  • In this article, a new model based on the Rayleigh distribution is introduced. This model is useful and practical in physics, reliability, and life testing. The statistical and reliability properties of the model are presented, including moments, the hazard rate, the reversed hazard rate, and the mean residual life function, among others. In addition, it is shown that the distributions of the new model are ordered with respect to the likelihood ratio ordering, the strongest of the stochastic orders. Four estimation methods, namely the method of moments, maximum likelihood, Bayes estimation, and uniformly minimum variance unbiased estimation, are used to estimate the parameters of the model. Simulation is used to compute the estimates and to study their properties. Finally, the suitability of the model for real data sets is demonstrated using the chi-square goodness-of-fit test and the Kolmogorov-Smirnov statistic.
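
A minimal numerical sketch of the construction, assuming the standard length-biased density f_LB(x) = x·f(x)/E[X] applied to a Rayleigh(σ) model; under that assumption the density coincides with the Maxwell distribution of the same scale (which SciPy provides for sampling), and the maximum likelihood estimator takes the closed form derived below. This is an illustration, not necessarily the article's exact parameterization.

```python
import numpy as np
from scipy import stats

def lb_rayleigh_pdf(x, sigma):
    # f_LB(x) = x * f(x) / E[X] with f the Rayleigh(sigma) pdf and
    # E[X] = sigma * sqrt(pi/2), which simplifies to the Maxwell form:
    return np.sqrt(2 / np.pi) * x**2 / sigma**3 * np.exp(-(x**2) / (2 * sigma**2))

def mle_sigma(sample):
    # Setting the derivative of the log-likelihood to zero gives the
    # closed form sigma^2 = sum(x_i^2) / (3 n) for this density.
    return np.sqrt(np.sum(sample**2) / (3 * len(sample)))

true_sigma = 2.0
sample = stats.maxwell.rvs(scale=true_sigma, size=1000, random_state=0)
print("MLE of sigma:", mle_sigma(sample))  # should be close to 2.0
```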

A Comparison of the Reliability Estimation Accuracy between Bayesian Methods and Classical Methods Based on Weibull Distribution

  • Cho, HyungJun; Lim, JunHyoung; Kim, YongSoo
    • Journal of Korean Institute of Industrial Engineers, v.42 no.4, pp.256-262, 2016
  • The Weibull distribution is widely used in reliability analysis, and several studies have attempted to improve the estimation of its parameters. Least squares estimation (LSE) and maximum likelihood estimation (MLE) are often used to estimate distribution parameters, but Bayesian methods have been shown to be more suitable for small sample sizes. In this work, the Weibull parameter estimation accuracy of LSE, MLE, and a Bayesian method is compared for sample sets with 3 to 30 data points. The Bayesian method was the most accurate for sample sizes under 25, while its accuracy became similar to that of LSE and MLE as the sample size increased.
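
A rough sketch of the two classical estimators being compared, fitting a small Weibull sample by median-rank-regression LSE and by MLE. The Bernard median-rank approximation and the use of scipy.stats.weibull_min.fit are our assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shape_true, scale_true = 1.5, 100.0
x = np.sort(scale_true * rng.weibull(shape_true, size=10))  # small sample

# LSE via median-rank regression: Bernard's approximation for the median
# rank of the i-th order statistic, then a straight-line fit to the
# linearized CDF  ln(-ln(1 - F)) = beta*ln(x) - beta*ln(eta).
n = len(x)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
beta_lse, eta_lse = slope, np.exp(-intercept / slope)

# MLE via SciPy, with location fixed at 0 (2-parameter Weibull).
beta_mle, _, eta_mle = stats.weibull_min.fit(x, floc=0)

print(f"LSE: shape={beta_lse:.2f}, scale={eta_lse:.1f}")
print(f"MLE: shape={beta_mle:.2f}, scale={eta_mle:.1f}")
```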

Data Mining Approach for Real-Time Processing of Large Data Using Case-Based Reasoning: High-Risk Group Detection Data Warehouse for Patients with High Blood Pressure

  • Park, Sung-Hyuk; Yang, Kun-Woo
    • Journal of Information Technology Services, v.10 no.1, pp.135-149, 2011
  • In this paper, we propose a high-risk group detection model for patients with high blood pressure using case-based reasoning. The proposed model can help public health organizations manage knowledge related to high blood pressure effectively and allocate limited health care resources efficiently. In particular, the focus is on developing a model that can handle constraints such as managing a large volume of data, learning automatically to adapt to changes in the external environment, and operating in real time. Using real data collected from local public health centers, the optimal high-risk group detection model was derived, incorporating optimal parameter sets. Performance tests on test data show that the prediction accuracy of the proposed model is twice the natural risk rate of high blood pressure.
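
A toy sketch of the retrieve-and-reuse core of case-based reasoning; the case base, feature set, and similarity measure below are entirely hypothetical stand-ins for the paper's model.

```python
import numpy as np

# Hypothetical case base: standardized features per past patient
# (e.g. age, BMI, systolic blood pressure) and a label recording
# whether that patient later became high-risk.
case_features = np.array([
    [0.2, 0.1, 0.3],
    [0.9, 0.8, 0.7],
    [0.8, 0.9, 0.9],
    [0.1, 0.2, 0.1],
])
case_labels = np.array([0, 1, 1, 0])

def cbr_predict(query, k=3):
    """Retrieve the k most similar past cases (Euclidean distance)
    and reuse their majority label as the prediction."""
    d = np.linalg.norm(case_features - query, axis=1)
    nearest = np.argsort(d)[:k]
    return int(case_labels[nearest].mean() >= 0.5)

print(cbr_predict(np.array([0.85, 0.7, 0.8])))  # -> 1 (high risk)
```

Retaining each newly solved case in the case base is what would let such a system keep learning as new screening data arrive.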

Comparative Study on the Optimization Methods for a Motor Drive of Artificial Hearts

  • Pohlmann, Andre; Leßmann, Marc; Hameyer, Kay
    • Journal of Electrical Engineering and Technology, v.7 no.2, pp.193-199, 2012
  • Worldwide, cardiovascular diseases are the major cause of death. Aside from heart transplants, which are limited by the availability of human donor hearts, artificial hearts are the only therapy available for terminal heart disease. For various reasons, a totally implantable artificial heart is desirable, but the limited space in the human thorax places rigorous restrictions on the weight and dimensions of the device. Nevertheless, the proper functionality of the artificial heart must be ensured and blood damage must be prevented. These requirements place further restrictions on the drive of the device. In this paper, two optimization methods, namely manual parameter variation and the Differential Evolution algorithm, are presented and applied to meet the specifications of an artificial heart drive.
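
A minimal sketch of the second method using SciPy's differential_evolution on a hypothetical two-parameter surrogate objective; the study itself evaluates a motor drive model, so the objective, parameter names, and bounds here are placeholders.

```python
from scipy.optimize import differential_evolution

def objective(p):
    # Hypothetical surrogate: trade off drive losses against a torque
    # constraint for two normalized design parameters.
    slot_width, magnet_height = p
    losses = (slot_width - 0.3) ** 2 + (magnet_height - 0.7) ** 2
    torque_penalty = abs(slot_width * magnet_height - 0.25)
    return losses + 10.0 * torque_penalty

result = differential_evolution(objective, bounds=[(0, 1), (0, 1)], seed=2)
print(result.x, result.fun)  # best parameter set and objective value
```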

Modal Parameter Extraction Using a Digital Camera

  • Kim, Byeong-Hwa
    • Transactions of the Korean Society for Noise and Vibration Engineering, v.18 no.12, pp.1229-1236, 2008
  • A set of modal parameters of a stay cable has been extracted from a moving picture captured by a handheld, and therefore shaking, digital camera. It is hard to identify the centers of the targets attached to the cable surface in the blurred cable motion images because of the high-speed motion of the cable, the low sampling frequency of the camera, and camera shake. This study proposes a multi-template matching algorithm to resolve these difficulties. In addition, a sensitivity-based system identification algorithm is introduced to extract the natural frequencies and damping ratios from the ambient cable vibration data. Three sets of vibration tests were conducted to examine the validity of the proposed algorithms. The results show that the proposed technique is feasible for extracting modal parameters from severely shaking motion pictures.
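
A sketch of the multi-template idea using OpenCV's standard template matching: several templates of the same target (e.g. captured at different blur levels) are matched per frame and the best normalized score wins. This is our reading of the approach; the paper's algorithm may differ in detail.

```python
import cv2

def locate_target(frame_gray, templates, threshold=0.6):
    """Match each template against the frame and return the center of
    the best-scoring match, or None if all scores fall below threshold."""
    best_score, best_center = -1.0, None
    for tmpl in templates:
        res = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(res)  # max score and its location
        if score > best_score:
            h, w = tmpl.shape
            best_score = score
            best_center = (top_left[0] + w // 2, top_left[1] + h // 2)
    return best_center if best_score >= threshold else None
```

Tracking the returned center across frames yields a displacement time series from which natural frequencies and damping ratios can then be identified.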

Dual Detection-Guided Newborn Target Intensity Based on Probability Hypothesis Density for Multiple Target Tracking

  • Gao, Li; Ma, Yongjie
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.10, pp.5095-5111, 2016
  • The Probability Hypothesis Density (PHD) filter is a suboptimal approximation and tractable alternative to the multi-target Bayesian filter based on random finite sets. However, the PHD filter fails to track newborn targets when the target birth intensity is unknown prior to tracking. In this paper, a dual detection-guided newborn target intensity PHD algorithm is developed to solve this problem, comprising two schemes: a newborn target intensity estimation scheme and an improved measurement-driven scheme. First, the newborn target intensity estimation scheme, built on the Dirichlet distribution with a negative exponent parameter and a target velocity feature, is used to recursively estimate the target birth intensity. Then, the improved measurement-driven scheme is introduced to reduce the error in the estimated number of targets and the computational load. Simulation results demonstrate that the proposed algorithm achieves good performance in terms of target states, target number, and computational load when the newborn target intensity is not predefined in multi-target tracking systems.
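
For context, a minimal sketch of the generic measurement-driven birth idea that such schemes refine: Gaussian birth components are centered on the current measurements so that no birth intensity has to be predefined. The state layout, weights, and covariances are illustrative assumptions, not the paper's dual detection-guided design.

```python
import numpy as np

def birth_components(measurements, birth_weight=0.05, pos_var=1.0, vel_var=10.0):
    """Create Gaussian birth components centered on the current position
    measurements, with diffuse velocity priors; state is [x, vx, y, vy]."""
    components = []
    for zx, zy in measurements:
        mean = np.array([zx, 0.0, zy, 0.0])
        cov = np.diag([pos_var, vel_var, pos_var, vel_var])
        components.append((birth_weight, mean, cov))
    return components

# Appended to the predicted PHD at each scan, these components let
# targets appearing anywhere in the surveillance region be picked up.
print(len(birth_components([(10.0, -5.0), (3.0, 7.0)])))  # -> 2
```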

Efficient distributed estimation based on non-regular quantized data

  • Kim, Yoon Hak
    • Journal of IKEEE, v.23 no.2, pp.710-715, 2019
  • We consider parameter estimation in distributed systems in which the measurements at local nodes are quantized in a non-regular manner, so that multiple codewords map to a single local measurement. In such a system, ensuring fully independent encoding at the local nodes requires each local measurement to be encoded into a large set of codewords, which are transmitted to a fusion node where estimation incurs an enormous computational cost due to the large cardinality of these sets. In this paper, we propose an efficient estimation technique that handles non-regular quantized data by finding the feasible combinations of codewords without searching all possible combinations. Experiments show that the proposed estimator compares well with previously proposed techniques at a reasonable complexity.
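
A toy one-dimensional sketch of the pruning idea, assuming a non-regular quantizer whose codewords map to unions of intervals: intersecting each node's candidate cells sequentially avoids enumerating the full cross-product of codeword combinations. The codebook and the noiseless shared-parameter setting are our simplifications.

```python
# Non-regular quantizer: each codeword covers a UNION of disjoint cells.
codebook = {
    0: [(0.0, 1.0), (3.0, 4.0)],
    1: [(1.0, 2.0), (4.0, 5.0)],
    2: [(2.0, 3.0), (5.0, 6.0)],
}

def feasible_region(received_codewords):
    """Intersect each node's candidate cells one node at a time, so the
    running region stays small instead of growing combinatorially."""
    region = [(float("-inf"), float("inf"))]
    for cw in received_codewords:
        new_region = []
        for lo, hi in region:
            for a, b in codebook[cw]:
                left, right = max(lo, a), min(hi, b)
                if left < right:
                    new_region.append((left, right))
        region = new_region
    return region

print(feasible_region([0, 0, 0]))  # -> [(0.0, 1.0), (3.0, 4.0)]
# A point estimate can then be taken, e.g., as a feasible-cell midpoint.
```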

Mining Highly Reliable Dense Subgraphs from Uncertain Graphs

  • Lu, Yihong; Huang, Ruizhi; Huang, Decai
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.6, pp.2986-2999, 2019
  • The uncertainty inherent in an uncertain graph makes the traditional definitions and algorithms for mining dense subgraphs of deterministic graphs inapplicable. The subgraph obtained by maximizing expected density in an uncertain graph typically contains many edges with low existence probability, giving it low reliability and low expected edge density. Building on the concept of a β-subgraph, and to overcome the low reliability of the densest subgraph, the concept of an optimal β-subgraph is proposed, together with an efficient greedy algorithm to find it. Simulation experiments on multiple data sets show that the average edge probability of the optimal β-subgraph is improved by nearly 40% and that its expected edge density reaches 0.9 on average. The parameter β is scalable and applicable to multiple scenarios.
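
A hedged sketch of one plausible reading of the approach: discard edges whose existence probability falls below β, then greedily peel the vertex of minimum expected degree while tracking the best expected edge density seen, in the spirit of the classic densest-subgraph peeling heuristic. The paper's exact β-subgraph definition and algorithm may differ.

```python
from collections import defaultdict

def greedy_beta_subgraph(edges, beta):
    """edges: {(u, v): p} with existence probabilities p.  Returns the
    vertex set with the best expected edge density found by peeling."""
    adj = defaultdict(dict)
    for (u, v), p in edges.items():
        if p >= beta:                      # keep only reliable edges
            adj[u][v] = adj[v][u] = p
    nodes = set(adj)
    exp_deg = {u: sum(adj[u].values()) for u in nodes}
    total = sum(exp_deg.values()) / 2.0    # expected number of edges
    best_density, best_set = 0.0, set(nodes)
    while nodes:
        density = total / len(nodes)
        if density > best_density:
            best_density, best_set = density, set(nodes)
        u = min(nodes, key=exp_deg.get)    # peel min expected degree
        for v, p in adj[u].items():
            if v in nodes:
                exp_deg[v] -= p
                total -= p
        nodes.remove(u)
    return best_set, best_density

edges = {(1, 2): 0.9, (2, 3): 0.8, (1, 3): 0.85, (3, 4): 0.2}
print(greedy_beta_subgraph(edges, beta=0.5))  # -> ({1, 2, 3}, 0.85)
```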

Development of a Resource Leveling Model Using Optimization

  • Kim, Jin-Lee; Ellis, Ralph D.
    • International Conference on Construction Engineering and Project Management, 2005.10a, pp.558-563, 2005
  • This paper presents a GA-based optimization algorithm for a resource leveling model that simultaneously levels the resources of a set of conflicting non-critical activities, up to a resource-rate level specified by the planner, using pair-wise comparison of the activities under consideration. A parameter called the future float is adopted as the indicator for assigning leveling priorities to the sets of conflicting activities. A construction project network example is worked out to demonstrate the performance of the proposed method. The resource histogram obtained with the proposed algorithm is shown to be the same as, or very close to, that produced by the existing resource leveling method based on the least-total-float rule, which shifts non-critical activities individually.
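
To make the leveling objective concrete, the sketch below exhaustively shifts non-critical activities within their floats so as to minimize the sum of squared daily resource usage, the classic leveling criterion that a GA would optimize on realistically sized networks. The activity data are hypothetical, and the paper's future-float prioritization is not modeled.

```python
import itertools
import numpy as np

# (early start, duration, resource rate, total float) per activity;
# a zero float marks a critical activity that cannot be shifted.
activities = [(0, 3, 2, 0), (0, 2, 3, 4), (2, 2, 2, 3)]
horizon = 10

def histogram(shifts):
    h = np.zeros(horizon)
    for (start, dur, rate, _), s in zip(activities, shifts):
        h[start + s : start + s + dur] += rate
    return h

options = [range(tf + 1) for (_, _, _, tf) in activities]
best = min(itertools.product(*options),
           key=lambda s: float((histogram(s) ** 2).sum()))
print("best shifts:", best)
print("leveled histogram:", histogram(best))
```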

Locating-Hop Domination in Graphs

  • Canoy, Sergio R. Jr.; Salasalan, Gemma P.
    • Kyungpook Mathematical Journal, v.62 no.1, pp.193-204, 2022
  • A subset S of V(G), where G is a simple undirected graph, is a hop dominating set if for each v ∈ V(G)\S there exists w ∈ S such that dG(v, w) = 2, and it is a locating-hop set if NG(u, 2) ∩ S ≠ NG(v, 2) ∩ S for any two distinct vertices u, v ∈ V(G)\S. A set S ⊆ V(G) is a locating-hop dominating set if it is both a locating-hop set and a hop dominating set of G. The minimum cardinality of a locating-hop dominating set of G, denoted by 𝛄lh(G), is called the locating-hop domination number of G. In this paper, we investigate some properties of this newly defined parameter. In particular, we characterize the locating-hop dominating sets in graphs under some binary operations.
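
A brute-force check of the definition on a small graph, using networkx; the helper names are ours, and NG(v, 2) is read as the set of vertices at distance exactly two from v.

```python
import itertools
import networkx as nx

def n2(G, v):
    """N_G(v, 2): vertices at distance exactly 2 from v."""
    dist = nx.single_source_shortest_path_length(G, v, cutoff=2)
    return {u for u, d in dist.items() if d == 2}

def is_locating_hop_dominating(G, S):
    S, rest = set(S), set(G) - set(S)
    # hop domination: every v outside S has some w in S with d(v, w) = 2
    if any(not (n2(G, v) & S) for v in rest):
        return False
    # locating: the traces N_G(v, 2) ∩ S are pairwise distinct on V(G)\S
    codes = [frozenset(n2(G, v) & S) for v in rest]
    return len(codes) == len(set(codes))

G = nx.cycle_graph(6)
best = next(S for r in range(1, G.number_of_nodes() + 1)
            for S in itertools.combinations(G, r)
            if is_locating_hop_dominating(G, S))
print(best, len(best))  # a minimum set; its size is the domination number
```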