• Title/Summary/Keyword: Interval models


Determining Optimal Aggregation Interval Size for Travel Time Estimation and Forecasting with Statistical Models (통행시간 산정 및 예측을 위한 최적 집계시간간격 결정에 관한 연구)

  • Park, Dong-Joo
    • Journal of Korean Society of Transportation
    • /
    • v.18 no.3
    • /
    • pp.55-76
    • /
    • 2000
  • We propose a general solution methodology for identifying the optimal aggregation interval size as a function of the traffic dynamics and the frequency of observations for four cases: i) link travel time estimation, ii) corridor/route travel time estimation, iii) link travel time forecasting, and iv) corridor/route travel time forecasting. We first develop statistical models that define the Mean Square Error (MSE) for the four cases and interpret the models from a traffic flow perspective. The emphasis is on i) the tradeoff between precision and bias, ii) the difference between estimation and forecasting, and iii) the implication of the correlation between links for corridor/route travel time estimation and forecasting. We then apply the proposed models to real-world travel time data from Houston, Texas, collected as part of the Automatic Vehicle Identification (AVI) system of the Houston TranStar system. The best aggregation interval sizes for link travel time estimation and forecasting were different and were a function of the traffic dynamics. For the best aggregation interval sizes for corridor/route travel time estimation and forecasting, the covariance between links had an important effect.
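
A quick way to see the precision-bias tradeoff described above is to score candidate aggregation intervals by their empirical MSE on sparse probe observations. The sketch below is purely illustrative: the synthetic travel time profile, the probe rate, and the simple per-interval averaging estimator are assumptions, not the authors' statistical models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" link travel time profile over one hour (seconds),
# sampled once per second, with a congestion peak in the middle.
t = np.arange(3600)
true_tt = 120 + 60 * np.exp(-((t - 1800) / 600) ** 2)

# Sparse, noisy AVI-style observations (roughly one probe every 30 s).
obs_idx = np.sort(rng.choice(3600, size=120, replace=False))
obs = true_tt[obs_idx] + rng.normal(0, 15, size=obs_idx.size)

def aggregated_mse(interval):
    """Empirical MSE of the per-interval mean against the true profile."""
    errs = []
    for start in range(0, 3600, interval):
        in_bin = (obs_idx >= start) & (obs_idx < start + interval)
        if not in_bin.any():
            continue                      # no observations in this interval
        estimate = obs[in_bin].mean()     # aggregated (averaged) travel time
        truth = true_tt[start:start + interval].mean()
        errs.append((estimate - truth) ** 2)
    return np.mean(errs)

# Small intervals -> high variance (few probes per bin);
# large intervals -> high bias (traffic dynamics averaged away).
candidates = [60, 120, 300, 600, 900, 1800]
mse = {k: aggregated_mse(k) for k in candidates}
best = min(mse, key=mse.get)
print({k: round(v, 1) for k, v in mse.items()}, "best:", best, "s")
```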


Background Subtraction Algorithm Based on Multiple Interval Pixel Sampling (다중 구간 샘플링에 기반한 배경제거 알고리즘)

  • Lee, Dongeun;Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.1
    • /
    • pp.27-34
    • /
    • 2013
  • Background subtraction is one of the key techniques for automatic video content analysis, especially for the visual detection and tracking of moving objects. In this paper, we present a new sample-based technique for background extraction that provides a background image as well as a background model. To handle both high-frequency and low-frequency events at the same time, multiple-interval background models are adopted. The main innovation concerns the use of a confidence factor to select the best model from the multiple-interval background models. To our knowledge, this is the first time that a confidence factor has been used for merging several background models in the field of background extraction. Experimental results show that our approach based on multiple-interval sampling works well in complicated situations containing moving objects of various speeds and environmental changes.
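
A rough numpy sketch of the sample-based, multiple-interval idea follows. The sampling intervals, the per-pixel match threshold, and the confidence factor (taken here as the mean fraction of matching samples) are assumptions for illustration; the paper's exact model update and confidence definition may differ.

```python
import numpy as np

class MultiIntervalBackground:
    """Sample-based background models kept at several sampling intervals.

    Each model stores the last `n_samples` frames taken every `interval`
    frames; a pixel is background if enough stored samples are close to it.
    """

    def __init__(self, intervals=(1, 8, 64), n_samples=10,
                 radius=20, min_matches=2):
        self.intervals = intervals
        self.n_samples = n_samples
        self.radius = radius
        self.min_matches = min_matches
        self.samples = {k: [] for k in intervals}   # per-interval sample sets
        self.frame_no = 0

    def update(self, frame):
        self.frame_no += 1
        for k in self.intervals:
            if self.frame_no % k == 0:
                self.samples[k].append(frame.copy())
                if len(self.samples[k]) > self.n_samples:
                    self.samples[k].pop(0)

    def foreground(self, frame):
        best_mask, best_conf = None, -1.0
        for k in self.intervals:
            if not self.samples[k]:
                continue
            stack = np.stack(self.samples[k])             # (S, H, W)
            matches = (np.abs(stack - frame) < self.radius).sum(axis=0)
            mask = matches < self.min_matches             # True = foreground
            conf = matches.mean() / len(self.samples[k])  # assumed confidence
            if conf > best_conf:                          # keep the best model
                best_mask, best_conf = mask, conf
        return best_mask

# Usage with random grayscale frames as stand-ins for real video:
bg = MultiIntervalBackground()
for _ in range(200):
    frame = np.random.randint(0, 256, (120, 160)).astype(np.int16)
    bg.update(frame)
mask = bg.foreground(frame)
```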

Misleading Confidence Interval for Sum of Variances Calculated by PROC MIXED of SAS (PROC MIXED가 제시하는 분산의 합의 신뢰구간의 문제점)

  • 박동준
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.1
    • /
    • pp.145-151
    • /
    • 2004
  • PROC MIXED fits a variety of mixed models to data and enables one to use these fitted models to make statistical inferences about the data. However, the simulation study in this article shows that, for the sum of two variance components in a simple regression model with an unbalanced nested error structure (a mixed model), the confidence interval produced by PROC MIXED using REML estimators does not maintain the stated confidence coefficient.
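
The coverage problem reported above can be reproduced in miniature without SAS. The sketch below is neither PROC MIXED nor REML: it checks, by Monte Carlo, the coverage of a naive large-sample (Wald-type) interval for the sum of the two variance components in a balanced one-way random effects model, which is assumed here only to illustrate how such an interval can fall short of its nominal level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
I, n = 10, 5                 # groups, observations per group
sa2, se2 = 4.0, 1.0          # true variance components
target = sa2 + se2
z = stats.norm.ppf(0.975)

def one_interval():
    a = rng.normal(0, np.sqrt(sa2), I)
    y = a[:, None] + rng.normal(0, np.sqrt(se2), (I, n))
    msa = n * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (I - 1)   # E = se2 + n*sa2
    mse = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (I * (n - 1))  # E = se2
    est = (msa + (n - 1) * mse) / n            # unbiased for sa2 + se2
    # Naive Wald variance, plugging in the estimated mean squares:
    var = (2 * msa**2 / (I - 1) + (n - 1) ** 2 * 2 * mse**2 / (I * (n - 1))) / n**2
    half = z * np.sqrt(var)
    return est - half, est + half

hits = sum(lo <= target <= hi for lo, hi in (one_interval() for _ in range(5000)))
print("empirical coverage:", hits / 5000)     # typically below the nominal 0.95
```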

Software Reliability Prediction of Grouped Failure Data Using Variant Models of Cascade-Correlation Learning Algorithm (변형된 캐스케이드-상관 학습 알고리즘을 적용한 그룹 고장 데이터의 소프트웨어 신뢰도 예측)

  • Lee, Sang-Un;Park, Jung-Yang
    • The KIPS Transactions:PartD
    • /
    • v.8D no.4
    • /
    • pp.387-392
    • /
    • 2001
  • Many software projects collect grouped failure data (failures per fixed or variable time interval) rather than individual failure times or failure counts during the testing or operational phase. This paper presents neural network (NN) models for grouped failure data that can predict cumulative failures at arbitrary future times. Two variant models of the cascade-correlation learning (CasCor) algorithm are presented. The suggested models are compared with other well-known NN models and with statistical software reliability growth models (SRGMs). Experimental results show that the suggested models have better predictive ability.
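
As a minimal stand-in (not the paper's CasCor variants), the sketch below shapes hypothetical grouped failure data into (cumulative test time, cumulative failures) pairs and fits a small feedforward network of the kind the paper uses as a comparison baseline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical grouped failure data: (interval length in hours, failures observed).
groups = [(8, 5), (8, 4), (10, 6), (12, 3), (12, 3), (16, 2), (16, 1), (24, 1)]

lengths, counts = np.array(groups, dtype=float).T
cum_time = np.cumsum(lengths)          # cumulative test time at interval ends
cum_fail = np.cumsum(counts)           # cumulative failures at interval ends

# Normalize the input so the network trains on a well-scaled feature.
x = (cum_time / cum_time.max()).reshape(-1, 1)
net = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x, cum_fail)

# Predict cumulative failures at an arbitrary future time (here 120 hours).
future = np.array([[120.0 / cum_time.max()]])
print("predicted cumulative failures at 120 h:", float(net.predict(future)[0]))
```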


Decision-making Method of Optimum Inspection Interval for Plant Maintenance by Genetic Algorithms (유전 알고리즘에 의한 플랜트 보전을 위한 최적검사기간 결정 방법론)

  • 서광규;서지한
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.26 no.2
    • /
    • pp.1-8
    • /
    • 2003
  • The operation and management of a plant require proper accounting for the constraints coming from reliability requirements as well as from budget and resource considerations. Most mathematical methods for deciding the inspection time interval for plant maintenance from reliability theory are too complicated to solve, and such theoretical models often do not fit practical applications. To overcome these problems, we propose a new decision-making method that determines the optimal inspection interval minimizing the maintenance cost by combining reliability theory with a genetic algorithm (GA). The main merit of the proposed method is that it can decide the inspection interval for a plant machine whose failure rate $\lambda(t)$ conforms to any probability distribution, which makes the method more practical. The efficiency of the proposed method is verified by comparing the results of the GA-based method with those of an inspection model having a regular time interval.
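
A compact illustration of the GA approach is sketched below. The Weibull failure model and the simplified cost-rate objective (inspection cost per interval plus an expected failure cost) are assumptions made for the example, not the paper's cost model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed Weibull reliability and a simplified cost rate:
#   cost_rate(T) = (inspection cost + failure cost * P(failure within T)) / T
beta, eta = 2.0, 1000.0          # Weibull shape/scale (hours)
c_insp, c_fail = 50.0, 2000.0

def cost_rate(T):
    reliability = np.exp(-(T / eta) ** beta)
    return (c_insp + c_fail * (1.0 - reliability)) / T

# A very small real-coded genetic algorithm over T in [10, 5000] hours.
pop = rng.uniform(10, 5000, size=30)
for _ in range(100):
    fitness = -cost_rate(pop)                        # higher is better
    order = np.argsort(fitness)[::-1]
    parents = pop[order[:10]]                        # truncation selection
    children = []
    while len(children) < 20:
        p1, p2 = rng.choice(parents, 2, replace=False)
        child = 0.5 * (p1 + p2)                      # arithmetic crossover
        child += rng.normal(0, 50)                   # Gaussian mutation
        children.append(np.clip(child, 10, 5000))
    pop = np.concatenate([parents, children])

best_T = pop[np.argmin(cost_rate(pop))]
print(f"near-optimal inspection interval: {best_T:.0f} h")
```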

Quadratic Loss Support Vector Interval Regression Machine for Crisp Input-Output Data

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.2
    • /
    • pp.449-455
    • /
    • 2004
  • The support vector machine (SVM) has been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method to evaluate interval regression models for crisp input-output data. The proposed method is based on the quadratic loss SVM, whose quadratic programming approach gives more diverse spread coefficients than a linear programming one. The proposed algorithm is a model-free method in the sense that we do not have to assume the underlying model function. Experimental results are then presented which indicate the performance of the algorithm.
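
The sketch below poses a linear interval regression analogue as a quadratic program with cvxpy: every observation must lie inside the predicted interval, and the total spread plus a quadratic penalty on the centre coefficients is minimized. It is not the paper's kernel SVM formulation, only an illustration of the quadratic-programming flavour of interval regression.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)

# Hypothetical crisp input-output data.
n, d = 60, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])   # intercept + x
y = 3.0 + 0.8 * X[:, 1] + rng.normal(0, 1.0, n)

a = cp.Variable(d)                 # centre coefficients
c = cp.Variable(d, nonneg=True)    # spread coefficients
spread = np.abs(X) @ c             # interval half-width at each point

constraints = [y - X @ a <= spread,      # every y_i must lie inside the interval
               X @ a - y <= spread]
objective = cp.Minimize(cp.sum(spread) + 0.1 * cp.sum_squares(a))  # QP
cp.Problem(objective, constraints).solve()

print("centre coefficients:", np.round(a.value, 3))
print("spread coefficients:", np.round(c.value, 3))
```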


Forecasting interval for the INAR(p) process using sieve bootstrap

  • Kim, Hee-Young;Park, You-Sung
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2005.11a
    • /
    • pp.159-165
    • /
    • 2005
  • Recently, as a result of the growing interest in modelling stationary processes with discrete marginal distributions, several models for integer-valued time series have been proposed in the literature. One of these models is the integer-valued autoregressive (INAR) model. However, when modelling with integer-valued autoregressive processes, distributional properties of the forecasts are not yet available, since the INAR process contains an added level of complexity through the Steutel and Van Harn (1979) thinning operator '$\circ$'. In this study, a manageable expression for the asymptotic mean square error of predicting more than one step ahead from an estimated Poisson INAR(1) model is derived. We also present bootstrap methods developed for the calculation of forecast interval limits of the INAR(p) model. Extensive finite-sample Monte Carlo experiments are carried out to compare the performance of the several bootstrap procedures.
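
A parametric-bootstrap sketch for a Poisson INAR(1) model is given below (the paper studies a sieve bootstrap for INAR(p); this simpler variant only illustrates the mechanics): binomial thinning, Yule-Walker-style estimation of the thinning and innovation parameters, and percentile limits of the simulated one-step-ahead forecast.

```python
import numpy as np

rng = np.random.default_rng(4)

def thin(x, alpha):
    """Binomial thinning alpha ∘ x: each of the x units survives w.p. alpha."""
    return rng.binomial(x, alpha)

def simulate_inar1(n, alpha, lam, x0=0):
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        x[t] = thin(x[t - 1], alpha) + rng.poisson(lam)
    return x

def estimate(x):
    """Yule-Walker-style estimates: alpha from the lag-1 autocorrelation."""
    alpha = max(1e-3, min(0.999, np.corrcoef(x[:-1], x[1:])[0, 1]))
    lam = max(1e-3, x.mean() * (1 - alpha))        # stationary mean = lam / (1 - alpha)
    return alpha, lam

x = simulate_inar1(300, alpha=0.5, lam=2.0)        # stand-in for observed counts
a_hat, l_hat = estimate(x)

# Bootstrap the one-step-ahead forecast distribution.
forecasts = []
for _ in range(2000):
    xb = simulate_inar1(len(x), a_hat, l_hat, x0=x[0])
    ab, lb = estimate(xb)                          # re-estimate on the bootstrap series
    forecasts.append(thin(x[-1], ab) + rng.poisson(lb))
lo, hi = np.percentile(forecasts, [2.5, 97.5])
print(f"95% forecast interval for X_(n+1): [{lo:.0f}, {hi:.0f}]")
```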


Reliability Computation of Neuro-Fuzzy Models : A Comparative Study (뉴로-퍼지 모델의 신뢰도 계산 : 비교 연구)

  • 심현정;박래정;왕보현
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.4
    • /
    • pp.293-301
    • /
    • 2001
  • This paper reviews three methods to compute a pointwise confidence interval for neuro-fuzzy models and compares their estimation performance through simulations. The computation methods under consideration include stacked generalization using cross-validation, the predictive error bar for regression models, and a local reliability measure for networks employing a local representation scheme. These methods, implemented on neuro-fuzzy models, are applied to the problems of simple function approximation and chaotic time series prediction. The results of the reliability estimation are compared both quantitatively and qualitatively.
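
Of the three methods listed, the predictive error bar is the easiest to sketch: an auxiliary model is fitted to the squared residuals of the main model, and its output serves as a pointwise variance estimate. The models and data below are generic stand-ins, not the paper's neuro-fuzzy networks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Hypothetical 1-D regression task with input-dependent noise.
x = rng.uniform(-3, 3, 400).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.05 + 0.1 * np.abs(x).ravel())

# Main model approximates the target function.
main = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                    random_state=0).fit(x, y)

# Auxiliary model approximates the squared residuals -> pointwise variance.
resid2 = (y - main.predict(x)) ** 2
aux = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                   random_state=1).fit(x, resid2)

# Pointwise ~95% band: prediction +/- 2 * estimated standard deviation.
grid = np.linspace(-3, 3, 7).reshape(-1, 1)
pred = main.predict(grid)
std = np.sqrt(np.clip(aux.predict(grid), 0, None))
for xi, p, s in zip(grid.ravel(), pred, std):
    print(f"x={xi:+.1f}  prediction={p:+.3f}  band=[{p - 2*s:+.3f}, {p + 2*s:+.3f}]")
```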


Cure rate proportional odds models with spatial frailties for interval-censored data

  • Yiqi, Bao;Cancho, Vicente Garibay;Louzada, Francisco;Suzuki, Adriano Kamimura
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.6
    • /
    • pp.605-625
    • /
    • 2017
  • This paper presents proportional odds cure models that allow for spatial correlation by including spatial frailties in the interval-censored data setting. Parametric cure rate models with independent and dependent spatial frailties are proposed and compared. Our approach enables different underlying activation mechanisms that lead to the event of interest; in addition, the number of competing causes which may be responsible for the occurrence of the event of interest follows a Geometric distribution. A Markov chain Monte Carlo method is used in a Bayesian framework for inferential purposes. Several Bayesian criteria were used for model comparison, and an influence diagnostic analysis was conducted to detect possibly influential or extreme observations that may distort the results of the analysis. Finally, the proposed models are applied to the analysis of a real data set on smoking cessation. The results of the application show that the parametric cure model with frailties under the first activation scheme performs best.
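
The role of the Geometric number of competing causes can be made explicit. The derivation below assumes the parameterization $P(M=m)=\theta(1-\theta)^m$, $m=0,1,\ldots$, and the first activation scheme (the event occurs at the minimum of the latent times); the paper's exact parameterization, covariate links, and frailty terms may differ.

```latex
% Population survival under a Geometric number M of competing causes,
% latent times i.i.d. with survival S(t), first activation scheme:
S_{\mathrm{pop}}(t)
  = E\!\left[S(t)^{M}\right]
  = \sum_{m=0}^{\infty} \theta (1-\theta)^{m} S(t)^{m}
  = \frac{\theta}{1-(1-\theta)S(t)},
\qquad
\lim_{t\to\infty} S_{\mathrm{pop}}(t) = \theta \quad \text{(cure fraction)}.

% The odds of the event having occurred by time t are then proportional to F(t) = 1 - S(t):
\frac{1-S_{\mathrm{pop}}(t)}{S_{\mathrm{pop}}(t)}
  = \frac{1-\theta}{\theta}\,\bigl(1-S(t)\bigr),
% which is what gives the model its proportional odds structure.
```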

Derivation of Optimal Design Flood by Gamma and Generalized Gamma Distribution Models(I) - On the Gamma Distribution Models - (Gamma 및 Generalized Gamma 분포 모형에 의한 적정 설계홍수량의 유도 (I) -Gamma 분포 모형을 중심으로-)

  • 이순혁;박명근;정연수;맹승진;류경식
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.39 no.3
    • /
    • pp.83-95
    • /
    • 1997
  • This study was conducted to derive optimal design floods with Gamma distribution models of the annual maximum series at eight watersheds along the Geum, Yeong San and Seom Jin river systems. Design floods obtained with different methods of parameter estimation and plotting positions in the Gamma distribution models were compared by their relative mean errors and by graphical fit, along with 95% confidence intervals plotted on Gamma probability paper. The results were analyzed and summarized as follows. 1. The adequacy of the flood flow data used in this study was confirmed by tests of independence and homogeneity and by detection of outliers. 2. Basic statistics and parameters were calculated for the Gamma distribution models using the method of moments and maximum likelihood. 3. In terms of relative mean errors, the design floods derived by the maximum likelihood method and the Hazen plotting position formula with the two-parameter Gamma distribution were much closer to the observed data than those obtained with other parameter estimation methods and plotting positions. 4. The reliability of the design floods derived by both maximum likelihood and the method of moments with the two-parameter Gamma distribution was confirmed within the 95% confidence interval.
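
The mechanics of a two-parameter Gamma fit, the Hazen plotting position, and a return-period design flood can be sketched with scipy. The annual maximum series below is invented and the return periods arbitrary, so the numbers only illustrate the procedure, not the paper's results.

```python
import numpy as np
from scipy import stats

# Invented annual maximum flood series (m^3/s) standing in for a real gauge.
q = np.array([410, 530, 620, 380, 760, 540, 480, 910, 650, 430,
              700, 580, 490, 830, 560, 610, 450, 720, 670, 510], float)

# Two-parameter Gamma fit by maximum likelihood (location fixed at zero).
shape, loc, scale = stats.gamma.fit(q, floc=0)

# Hazen plotting positions for the ranked (sorted) series.
q_sorted = np.sort(q)
n = len(q_sorted)
ranks = np.arange(1, n + 1)
hazen_p = (ranks - 0.5) / n                       # non-exceedance probability
print("largest observation plots at p =", round(hazen_p[-1], 3))

# Design flood = quantile with non-exceedance probability 1 - 1/T.
for T in (10, 50, 100):
    design = stats.gamma.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year design flood: {design:.0f} m^3/s")
```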
