• Title/Summary/Keyword: Data Interval

Probabilistic assessment on the basis of interval data

  • Thacker, Ben H.; Huyse, Luc J.
    • Structural Engineering and Mechanics / v.25 no.3 / pp.331-345 / 2007
  • Uncertainties enter a complex analysis from a variety of sources: variability, lack of data, human errors, model simplification and lack of understanding of the underlying physics. However, for many important engineering applications, insufficient data are available to justify the choice of a particular probability density function (PDF). Sometimes the only data available are interval estimates that represent often conflicting expert opinion. In this paper we demonstrate that Bayesian estimation techniques can successfully be used in applications where only vague interval measurements are available. The proposed approach is intended to fit within a probabilistic framework, which is established and widely accepted. To circumvent the problem of selecting a specific PDF when only little or vague data are available, a hierarchical model of a continuous family of PDFs is used. The classical Bayesian estimation methods are expanded to make use of imprecise interval data. Each expert opinion (interval datum) is interpreted as a random interval sample from a parent PDF. Consequently, a partial conflict between experts is automatically accounted for through the likelihood function.
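
The key device here, interpreting each expert interval as a random interval sample from a parent PDF, means an interval [a, b] contributes P(a < X < b | theta) to the likelihood. Below is a minimal sketch of that idea only (not the authors' hierarchical model), assuming a normal parent PDF with a known spread, a flat prior on a grid, and made-up expert intervals.

```python
# Minimal sketch: Bayesian updating when each observation is only known
# to lie in an interval [a_i, b_i]. The likelihood contribution of an
# interval is F(b | theta) - F(a | theta).
# Hypothetical setup: normal parent PDF with unknown mean and known sd.
import numpy as np
from scipy.stats import norm

intervals = [(4.0, 6.0), (5.5, 7.0), (3.0, 5.0)]   # expert opinions (made-up)
sigma = 1.0                                         # assumed known spread
mu_grid = np.linspace(0.0, 10.0, 1001)              # grid over the unknown mean
log_prior = np.zeros_like(mu_grid)                  # flat prior on the grid

log_lik = np.zeros_like(mu_grid)
for a, b in intervals:
    # P(a < X < b | mu); partial conflict between experts simply lowers
    # the likelihood instead of breaking the model.
    p = norm.cdf(b, mu_grid, sigma) - norm.cdf(a, mu_grid, sigma)
    log_lik += np.log(np.clip(p, 1e-300, None))

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, mu_grid)                     # normalized posterior density
print("posterior mean of mu:", np.trapz(mu_grid * post, mu_grid))
```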

Estimation in the exponential distribution under progressive Type I interval censoring with semi-missing data

  • Shin, Hyejung; Lee, Kwangho
    • Journal of the Korean Data and Information Science Society / v.23 no.6 / pp.1271-1277 / 2012
  • In this paper, we propose an estimation method for the parameter of an exponential distribution based on a progressive Type I interval censored sample with semi-missing observations. The maximum likelihood estimator (MLE) of the parameter cannot be obtained explicitly because the intervals are not equal in length under a progressive Type I interval censored sample with semi-missing data. To obtain the MLE of the parameter for this sampling scheme, we propose a method by which the progressive Type I interval censored sample with semi-missing data is converted to a progressive Type II interval censored sample. Consequently, the estimation procedures for the progressive Type II interval censored sample can be applied, and we obtain the MLE of the parameter and the survival function. It is shown that the obtained estimators perform well in terms of the mean square error (MSE) and mean integrated square error (MISE).
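
The paper's conversion to a progressive Type II scheme is not reproduced here; the sketch below only shows the ingredient the abstract builds on, a numerically maximized likelihood for exponentially distributed lifetimes observed through Type I inspection intervals, with hypothetical counts and inspection times.

```python
# Minimal sketch (assumptions, not the paper's algorithm): numerical MLE of an
# exponential rate from Type I interval censored counts. Each interval
# (t_{j-1}, t_j] contributes d_j * log(exp(-lam*t_{j-1}) - exp(-lam*t_j));
# units still surviving at the last inspection time contribute -lam*T each.
import numpy as np
from scipy.optimize import minimize_scalar

edges = np.array([0.0, 1.0, 2.5, 4.0])   # inspection times (hypothetical)
deaths = np.array([5, 7, 3])             # failures observed in each interval
survivors = 10                           # still alive at edges[-1]

def neg_log_lik(lam):
    p = np.exp(-lam * edges[:-1]) - np.exp(-lam * edges[1:])
    return -(np.sum(deaths * np.log(p)) - lam * edges[-1] * survivors)

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
lam_hat = res.x
print("MLE of rate:", lam_hat)
print("estimated survival S(2) = exp(-lam*2):", np.exp(-2.0 * lam_hat))
```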

On the Efficient Teaching Method of Confidence Interval in College Education

  • Kim, Yeung-Hoon; Ko, Jeong-Hwan
    • Journal of the Korean Data and Information Science Society / v.19 no.4 / pp.1281-1288 / 2008
  • The purpose of this study is to consider efficient methods for introducing the confidence interval. We explain various concepts of and approaches to confidence interval estimation, and we suggest computational methods for calculating efficient confidence intervals.
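
As an illustration of the kind of computation the abstract refers to, here is a minimal sketch of a 95% t-based confidence interval for a population mean, with a hypothetical sample; it is not taken from the paper.

```python
# Minimal sketch: a 95% t confidence interval for a mean from a small sample.
import numpy as np
from scipy import stats

x = np.array([12.1, 11.8, 12.5, 12.0, 11.6, 12.3])   # hypothetical sample
n = len(x)
mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)                       # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                 # two-sided 95% critical value
print("95% CI for the mean:", (mean - t_crit * se, mean + t_crit * se))
```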

A Note on Interval Approximation of a Fuzzy Number

  • Hong, Dug-Hun; Kim, Kyung-Tae
    • Journal of the Korean Data and Information Science Society / v.17 no.3 / pp.913-918 / 2006
  • Chanas (2001) introduced the notion of an interval approximation of a fuzzy number with the condition that the width of this interval is equal to the width of the expected interval. In this note, this condition is relaxed and the resulting formulae for determining the approximation interval are derived. This interval is compared with the expected interval and the approximation interval of a fuzzy number as introduced by Chanas.
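
For context, the expected interval of a fuzzy number A is usually defined as the pair of integrals of the alpha-cut endpoints A_L(alpha) and A_U(alpha) over alpha in [0, 1]. The sketch below evaluates this standard definition for a triangular fuzzy number; it does not reproduce Chanas' relaxed approximation interval or the formulae derived in the note.

```python
# Minimal sketch (background only, not the note's derivation): the expected
# interval of a fuzzy number A is [ int_0^1 A_L(a) da , int_0^1 A_U(a) da ].
# For a triangular fuzzy number (a, b, c) these reduce to (a+b)/2 and (b+c)/2.
import numpy as np

def expected_interval_triangular(a, b, c, n=10001):
    alpha = np.linspace(0.0, 1.0, n)
    lower = a + alpha * (b - a)      # left alpha-cut endpoint A_L(alpha)
    upper = c - alpha * (c - b)      # right alpha-cut endpoint A_U(alpha)
    return np.trapz(lower, alpha), np.trapz(upper, alpha)

print(expected_interval_triangular(1.0, 3.0, 6.0))   # -> (2.0, 4.5)
```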

Investigations into Coarsening Continuous Variables

  • Jeong, Dong-Myeong; Kim, Jay-J.
    • The Korean Journal of Applied Statistics / v.23 no.2 / pp.325-333 / 2010
  • Protection against disclosure of survey respondents' identifiable and/or sensitive information is a prerequisite for statistical agencies that release microdata files from their sample surveys. Coarsening is one of the popular methods for protecting the confidentiality of the data. Grouped data can be released in the form of microdata or tabular data. Instead of releasing the data in a tabular form only, making microdata available to the public with interval codes and their representative values greatly enhances the utility of the data. It allows researchers to compute covariances between the variables, build statistical models, or run a variety of statistical tests on the data. It may be conjectured that the variance of the interval data is lower than that of the ungrouped data, in the sense that the coarsened data do not retain the within-interval variance. This conjecture is investigated using the uniform and triangular distributions. Traditionally, the midpoint is used to represent all the values in an interval. This approach implicitly assumes that the data are uniformly distributed within each interval. However, this assumption may not hold, especially in the last interval of economic data. In this paper, we use three distributional assumptions - uniform, Pareto and lognormal - for the last interval and use either the midpoint or the median for the other intervals for the wage and food cost variables of Statistics Korea's 2006 Household Income and Expenditure Survey (HIES) data, and we compare these approaches in terms of the first two moments.
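
The conjecture about lost within-interval variance is easy to check numerically. The sketch below uses simulated uniform data (not the HIES file) and compares the variance of the raw values with the variance after midpoint coding; for intervals of width w under a uniform assumption the difference is roughly w^2/12.

```python
# Minimal sketch of the variance conjecture in the abstract (hypothetical data):
# replacing each value by its interval midpoint removes the within-interval
# variance, so the coarsened variance is lower than the raw variance.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 100.0, size=100_000)          # raw (ungrouped) values
edges = np.arange(0.0, 110.0, 10.0)                # interval codes of width 10
mid = (edges[:-1] + edges[1:]) / 2
coded = mid[np.clip(np.digitize(x, edges) - 1, 0, len(mid) - 1)]

print("raw variance      :", x.var())              # ~ 100^2/12 = 833.3
print("midpoint variance :", coded.var())          # smaller by ~ 10^2/12 = 8.3
```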

Method of Recurrence Interval Estimation for Fault Activity from Age Dating Data (연대측정자료를 이용한 단층활동주기 산정 방법)

  • 최원학
    • Proceedings of the Earthquake Engineering Society of Korea Conference / 2001.04a / pp.74-80 / 2001
  • The estimation of the recurrence interval for fault activity and earthquakes is an important input parameter for seismic hazard assessment. In this study, methods of recurrence interval estimation were reviewed and tentative calculations were performed on age dating data that carry uncertainty. The age dating data come from previous studies of the Ulsan fault system, a well-developed lineament in the southeastern part of the Korean Peninsula. Age dating of fault gouges, parent rocks, Quaternary sediments and veins was carried out by several researchers using various methods. The recurrence interval for fault activity was estimated on the basis of the age dating data of minor fault gouge and sediments during the past 3 Ma. The estimated recurrence interval was about 430-500 ka. Exact estimation of the recurrence interval for fault activity requires compiling more geological data and fault characteristics such as fault length, amount of displacement, slip rate and accurate fault movement ages. In the future, the methods and results of fault recurrence interval estimation should be considered in establishing the criteria for a domestic active fault definition.
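
The simplest recurrence-interval estimate implied here is the covered time span divided by the number of inter-event gaps. A minimal sketch with hypothetical event ages (not the Ulsan fault dating results):

```python
# Minimal sketch: mean recurrence interval from dated fault-activity events.
# Event ages are hypothetical, given in ka (thousand years) before present.
import numpy as np

event_ages_ka = np.array([3000.0, 2400.0, 1900.0, 1300.0, 700.0, 100.0])
gaps = -np.diff(event_ages_ka)                      # inter-event times, oldest to youngest
print("mean recurrence interval (ka):", gaps.mean())
print("time span / number of gaps   :",
      (event_ages_ka[0] - event_ages_ka[-1]) / (len(event_ages_ka) - 1))
```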

Regression analysis of interval censored competing risk data using a pseudo-value approach

  • Kim, Sooyeon; Kim, Yang-Jin
    • Communications for Statistical Applications and Methods / v.23 no.6 / pp.555-562 / 2016
  • Interval censored data often occur in observational studies where subjects are followed periodically. Instead of observing an exact failure time, two inspection times that bracket it are available. There are several methods to analyze interval censored failure time data (Sun, 2006). However, in the presence of competing risks, few methods have been suggested to estimate covariate effects for interval censored competing risk data. A sub-distribution hazard model is a commonly used regression model because it has a one-to-one correspondence with the cumulative incidence function. Alternatively, Klein and Andersen (2005) proposed a pseudo-value approach that directly uses the cumulative incidence function. In this paper, we consider an extension of the pseudo-value approach to interval censored data to estimate regression coefficients. The pseudo-values generated from the estimated cumulative incidence function then become response variables in a generalized estimating equation. Simulation studies show that the suggested method performs well in several situations, and an HIV/AIDS cohort study is analyzed as a real data example.
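
The pseudo-value construction of Klein and Andersen is the leave-one-out jackknife of the cumulative incidence estimate, pv_i = n * theta_hat - (n - 1) * theta_hat(-i), after which the pseudo-values enter a generalized estimating equation as responses. The sketch below shows only that construction for a naive complete-data cumulative incidence at a fixed time; the interval censored estimator the paper actually plugs in is not reproduced.

```python
# Minimal sketch of the pseudo-value construction (Klein & Andersen style),
# shown for a simple complete-data cumulative incidence at a fixed time t0.
import numpy as np

def cif_at(t0, times, causes, cause=1):
    # naive cumulative incidence with no censoring: P(T <= t0, cause)
    return np.mean((times <= t0) & (causes == cause))

def pseudo_values(t0, times, causes, cause=1):
    n = len(times)
    full = cif_at(t0, times, causes, cause)
    pv = np.empty(n)
    for i in range(n):                      # leave-one-out jackknife
        mask = np.arange(n) != i
        pv[i] = n * full - (n - 1) * cif_at(t0, times[mask], causes[mask], cause)
    return pv                               # these become GEE responses

times  = np.array([2.0, 5.0, 3.5, 8.0, 1.0, 6.5])   # hypothetical failure times
causes = np.array([1,   2,   1,   1,   2,   1  ])   # competing causes
print(pseudo_values(4.0, times, causes))
```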

Local linear regression analysis for interval-valued data

  • Jang, Jungteak; Kang, Kee-Hoon
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.365-376 / 2020
  • Interval-valued data, a type of symbolic data, arise when an observation is given as an interval rather than a single value. They also occur frequently when large databases are aggregated into a form that is easy to manage. Various regression methods for interval-valued data have been proposed relatively recently. In this paper, we introduce a nonparametric regression model using a kernel function and a nonlinear regression model for interval-valued data. We also propose applying the local linear regression model, one of the nonparametric methods, to interval-valued data. Simulations based on several distributions of the center point and the range are conducted for each of the methods presented in this paper. Under various conditions, they confirm that the proposed local linear estimator performs better than the others.
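
One common way to apply a point-valued smoother to interval-valued data is to smooth the interval centers and half-ranges separately and recombine them; the paper's exact estimator may differ. A minimal sketch with simulated data and a Gaussian-kernel local linear fit:

```python
# Minimal sketch: local linear smoothing of interval-valued data via the
# center/half-range decomposition. All data below are simulated.
import numpy as np

def local_linear(x0, x, y, h):
    # local linear fit at x0 with a Gaussian kernel of bandwidth h
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                   # fitted value at x0

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 100))
center = np.sin(x) + rng.normal(0, 0.1, x.size)             # interval midpoints
half_range = 0.3 + 0.05 * x + rng.normal(0, 0.02, x.size)   # interval half-widths

x0 = 5.0
c_hat = local_linear(x0, x, center, h=0.8)
r_hat = max(local_linear(x0, x, half_range, h=0.8), 0.0)    # keep width nonnegative
print("predicted interval at x0:", (c_hat - r_hat, c_hat + r_hat))
```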

Non-stochastic interval arithmetic-based finite element analysis for structural uncertainty response estimate

  • Lee, Dongkyu; Park, Sungsoo; Shin, Soomi
    • Structural Engineering and Mechanics / v.29 no.5 / pp.469-488 / 2008
  • Finite element methods have often been used for structural analyses of various mechanical problems. When finite element analyses are utilized to resolve mechanical systems, numerical uncertainties in the initial data, such as structural parameters and loading conditions, may result in uncertainties in the structural responses. Therefore the initial data have to be as accurate as possible in order to obtain reliable structural analysis results. The typical finite element method may not properly represent discrete systems when using uncertain data, since all input data for material properties and applied loads are defined by nominal values. An interval finite element analysis, which uses interval arithmetic as introduced by Moore (1966), is proposed as a non-stochastic method in this study and serves as a new numerical tool for evaluating the uncertainties of the initial data in structural analyses. According to this method, the element stiffness matrix includes interval terms for the lower and upper bounds of the structural parameters, and interval change functions are devised. Numerical uncertainties in the initial data are described as a tolerance error, and tree graphs of uncertain data are constructed from the combinations of the numerical uncertainties of each parameter. The structural responses calculated for all uncertainty cases can be easily estimated so that structural safety can be included in the design. Numerical applications to truss and frame structures demonstrate the efficiency of the present method for the numerical analysis of structural uncertainties.
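
A much-reduced sketch of the non-stochastic idea, not the paper's interval finite element formulation: propagate lower and upper bounds through a single-degree-of-freedom spring, u = F/k, when both the stiffness and the load are given as tolerance intervals instead of nominal values.

```python
# Minimal sketch: interval arithmetic for a one-DOF spring, u = F / k,
# with stiffness and load given as [lower, upper] tolerance intervals.
def interval_div(a, b):
    # [a_lo, a_hi] / [b_lo, b_hi], assuming 0 is not inside b
    cands = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(cands), max(cands))

k = (0.95 * 200.0, 1.05 * 200.0)   # stiffness with a +/-5% tolerance (kN/m)
F = (9.0, 11.0)                    # load bounds (kN)
u_lo, u_hi = interval_div(F, k)
print(f"displacement bounds: [{u_lo:.4f}, {u_hi:.4f}] m")
```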

A Study on Evaluation Method of Fatigue Strength Data Using Likelihood Interval Estimation Method (우도구간 추정법에 의한 피로강도 데이터 평가법에 관한 연구)

  • 최창섭
    • Journal of the Korean Society of Safety / v.10 no.2 / pp.10-16 / 1995
  • In estimating fatigue data, only a uniform safety rate has been applied so far. However, since more reasonable design concepts for machine structures and subsidiary materials will be required in the future, the importance of statistical estimation methods for fatigue data is being highlighted. With this basic conception in mind, this study critically discusses the interval estimation method that has thus far been applied using classical statistics; from the viewpoint of the likelihood interval estimation method, this conventional method can result in estimates on the unstable side. In this regard, this study estimates the fatigue strength through the likelihood interval estimation method and compares it with the conventional interval estimation method. One of the methods that uses the likelihood for estimating data is the Bayes method. Based on this theory, statistical estimation was actively applied, and thereupon the fatigue data were estimated.
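
As background for the comparison the abstract describes, a likelihood interval keeps every parameter value whose relative likelihood L(theta)/L(theta_hat) stays above a cutoff c. The sketch below computes one for the mean of a hypothetical fatigue-strength sample under a normal model with the spread treated as known; it is not the paper's Bayes analysis.

```python
# Minimal sketch: a likelihood interval for a mean fatigue strength, i.e. all
# mu with relative likelihood L(mu)/L(mu_hat) >= c, on a grid.
import numpy as np
from scipy.stats import norm

x = np.array([412.0, 398.0, 405.0, 420.0, 391.0, 408.0])   # hypothetical strengths (MPa)
sigma = x.std(ddof=1)                                       # treated as known here
mu_grid = np.linspace(x.mean() - 30, x.mean() + 30, 2001)

loglik = np.array([norm.logpdf(x, mu, sigma).sum() for mu in mu_grid])
rel = np.exp(loglik - loglik.max())                         # relative likelihood
c = 0.15                                                    # a common cutoff choice
inside = mu_grid[rel >= c]
print("likelihood interval for the mean:", (inside.min(), inside.max()))
```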
