• Title/Abstract/Keyword: Statistical Methodology

Search results: 1,299 items (processing time: 0.024 seconds)

Detection of Change-Points by Local Linear Regression Fit

  • Kim, Jong Tae;Choi, Hyemi;Huh, Jib
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 10, No. 1
    • /
    • pp.31-38
    • /
    • 2003
  • A simple method is proposed to detect the number of change points and to test the location and size of multiple change points with jump discontinuities in an otherwise smooth regression model. The proposed estimators are based on a local linear regression fit that compares left and right one-sided kernel smoothers. The proposed methodology is explained and applied to both real and simulated data. An illustrative sketch of the one-sided-smoother comparison follows this entry.
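
Below is a minimal numpy sketch of the idea this abstract describes: at each candidate point, fit a local linear regression using only data to the left and only data to the right, and flag locations where the two one-sided fits disagree most. The kernel, bandwidth, and toy data are illustrative choices, not the authors' settings.

```python
import numpy as np

def local_linear(x, y, x0, h, side):
    """Weighted least-squares line fit at x0 using only points on one side.

    side='left' uses x < x0, side='right' uses x > x0; an Epanechnikov
    kernel (an illustrative choice) supplies the weights.
    """
    d = x - x0
    mask = d < 0 if side == "left" else d > 0
    u = d[mask] / h
    w = np.where(np.abs(u) < 1, 0.75 * (1 - u**2), 0.0)   # Epanechnikov kernel
    if mask.sum() < 2 or w.sum() <= 0:
        return np.nan
    X = np.column_stack([np.ones(mask.sum()), d[mask]])
    W = np.diag(w)
    beta = np.linalg.pinv(X.T @ W @ X) @ (X.T @ W @ y[mask])
    return beta[0]                                          # fitted value at x0

def jump_statistic(x, y, grid, h):
    """Difference between right- and left-sided fits; spikes suggest jumps."""
    return np.array([local_linear(x, y, g, h, "right")
                     - local_linear(x, y, g, h, "left") for g in grid])

# toy example: smooth curve with a jump of size 2 at x = 0.5
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + 2.0 * (x > 0.5) + rng.normal(0, 0.2, x.size)
grid = np.linspace(0.05, 0.95, 181)
stat = jump_statistic(x, y, grid, h=0.08)
print("candidate change point near x =", grid[np.nanargmax(np.abs(stat))])
```

The location of the largest one-sided discrepancy serves as the change-point candidate; its size estimates the jump.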

Modeling Extreme Values of Ground-Level Ozone Based on Threshold Methods for Markov Chains

  • Seokhoon Yun
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 3, No. 2
    • /
    • pp.249-273
    • /
    • 1996
  • This paper reviews and develops several statistical models for extreme values, based on threshold methodology. Extreme values of a time series are modeled in terms of tails, which are defined as truncated forms of the original variables, and a Markov property is imposed on the tails. Tails of the generalized extreme value distribution and a multivariate extreme value distribution are used to model, respectively, the marginal and joint behavior of the tails of the series. These models are then applied to real ozone data series collected in the Chicago area. A major concern is detecting any possible trend in the extreme values. A sketch of the basic threshold-exceedance idea follows this entry.

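As background for the threshold methodology mentioned above, the sketch below fits a generalized Pareto distribution to excesses over a high threshold and computes a return level. It illustrates only the basic univariate threshold-exceedance step, not the Markov-chain tail models developed in the paper; the synthetic "ozone" series, threshold choice, and return period are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
# stand-in for daily ozone maxima (synthetic; the paper uses Chicago-area data)
ozone = rng.gamma(shape=3.0, scale=20.0, size=3000)

u = np.quantile(ozone, 0.95)            # threshold: 95th percentile (illustrative)
exceed = ozone[ozone > u] - u           # excesses over the threshold

# fit a generalized Pareto distribution to the excesses (location fixed at 0)
xi, _, sigma = genpareto.fit(exceed, floc=0)
zeta = exceed.size / ozone.size         # exceedance rate

# m-observation return level: value exceeded once per m observations on average
m = 365 * 10
x_m = u + (sigma / xi) * ((m * zeta) ** xi - 1) if xi != 0 else u + sigma * np.log(m * zeta)
print(f"shape={xi:.3f}, scale={sigma:.3f}, 10-year return level = {x_m:.1f}")
```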

STATISTICAL EVIDENCE METHODOLOGY FOR MODEL ACCEPTANCE BASED ON RECORD VALUES

  • Doostparast M.;Emadi M.
    • Journal of the Korean Statistical Society
    • /
    • Vol. 35, No. 2
    • /
    • pp.167-177
    • /
    • 2006
  • An important role of statistical analysis in science is interpreting observed data as evidence, that is, 'what do the data say?'. Although standard statistical methods (hypothesis testing, estimation, confidence intervals) are routinely used for this purpose, the theory behind those methods contains no defined concept of evidence and no answer to the basic question 'when is it correct to say that a given body of data represent evidence supporting one statistical hypothesis against another?' (Royall, 1997). In this article, we use likelihood ratios to measure the evidence provided by record values in favor of one hypothesis and against an alternative. The hypotheses concern the mean of an exponential model and the prediction of future record values. A sketch of such a record-based likelihood ratio follows this entry.
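
For upper record values r_1 < ... < r_n from an exponential model with mean θ, the joint likelihood reduces to θ^{-n} exp(-r_n/θ), so an evidence (likelihood) ratio for two hypothesized means depends only on n and the last record. The sketch below computes such a ratio under that exponential-upper-record assumption; the numbers and hypothesized means are illustrative, not taken from the paper.

```python
import numpy as np

def exp_record_loglik(records, theta):
    """Log-likelihood of the first n upper record values from an Exp(mean=theta) model.

    For upper records r_1 < ... < r_n the joint density simplifies to
    theta**(-n) * exp(-r_n / theta), so only n and the last record matter.
    """
    records = np.asarray(records, dtype=float)
    n, r_n = records.size, records[-1]
    return -n * np.log(theta) - r_n / theta

def evidence_ratio(records, theta1, theta2):
    """Likelihood ratio measuring evidence for mean theta1 against theta2."""
    return np.exp(exp_record_loglik(records, theta1) - exp_record_loglik(records, theta2))

# hypothetical observed upper records (illustrative numbers)
records = [1.2, 2.9, 4.1, 6.8, 7.5]
lr = evidence_ratio(records, theta1=2.0, theta2=1.0)
print(f"LR(theta=2 vs theta=1) = {lr:.2f}")   # values above 8 count as fairly strong evidence on Royall's scale
```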

우주항공 재료시스템 품질인증 (Certification Methodology of Aerospace Materials System)

  • 이호성
    • 항공우주시스템공학회지
    • /
    • Vol. 1, No. 2
    • /
    • pp.13-20
    • /
    • 2007
  • The structural qualification plan (SQP) for an aerospace vehicle is based on a material certification methodology that must be approved by the certification authority. It is internationally required to use statistically based material allowables when designing aerospace vehicles with aerospace materials. To comply with this regulation, a relatively large database must be established, which increases test costs and time. Recently, NASA/FAA developed a new methodology that reduces cost, time, and risk while still satisfying the regulation. This paper summarizes the certification methodology for the materials system as part of the structural qualification plan (SQP) of aerospace vehicles, as well as thermal management of the vehicle system, such as the thermal protection materials system and the thermally conductive material system. Material design allowables were determined with this method for a carbon/epoxy composite material. A sketch of the statistical-allowable calculation follows this entry.

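Statistically based material allowables of the kind discussed above are commonly one-sided tolerance bounds (for example, a B-basis value: a 95%-confidence lower bound exceeded by 90% of the strength population). The sketch below computes such a bound under a normality assumption using the noncentral-t tolerance factor; it illustrates the general statistical idea only, not the specific NASA/FAA procedure, and the coupon data are hypothetical.

```python
import numpy as np
from scipy.stats import norm, nct

def one_sided_tolerance_factor(n, coverage=0.90, confidence=0.95):
    """k such that, with the given confidence, at least `coverage` of a
    normal population lies above xbar - k*s."""
    delta = norm.ppf(coverage) * np.sqrt(n)           # noncentrality parameter
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

# hypothetical coupon strength data (MPa) for a carbon/epoxy laminate
strength = np.array([612, 598, 631, 605, 619, 590, 624, 608, 615, 601], dtype=float)
n, xbar, s = strength.size, strength.mean(), strength.std(ddof=1)

k_b = one_sided_tolerance_factor(n, coverage=0.90, confidence=0.95)   # B-basis factor
b_basis = xbar - k_b * s
print(f"n={n}, mean={xbar:.1f}, sd={s:.1f}, k_B={k_b:.3f}, B-basis = {b_basis:.1f} MPa")
```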

공분산구조분석을 이용한 자체충족률 모형 검증 (Formulating Regional Relevance Index through Covariance Structure Modeling)

  • 장혜정;김창엽
    • 보건행정학회지
    • /
    • Vol. 11, No. 2
    • /
    • pp.123-140
    • /
    • 2001
  • Hypotheses in health services research are becoming increasingly complex and specific. As a result, health services research studies often include multiple independent, intervening, and dependent variables in a single hypothesis. Nevertheless, the statistical models adopted by health services researchers have failed to keep pace with the increasing complexity and specificity of hypotheses and research designs. This article introduces a statistical model well suited for testing complex and specific hypotheses in health services research. The covariance structure modeling (CSM) methodology is applied to regional relevance indices (RIs) to assess the impact of health resources and healthcare utilization. Data on secondary statistics and health insurance claims were collected for each catchment area. The model for the RI was formulated in terms of direct and indirect effects of three latent variables measured by seven observed variables, using ten structural equations. The resulting structural model revealed significant direct effects of the structure of health resources but indirect effects of their quantity on RIs, and explained 82% of the correlation matrix of the measurement variables. Two variables, the number of beds and the proportion of specialists among medical doctors, were found to have significant effects on RIs when analyzed with the CSM methodology, whereas they were insignificant in the regression model. Recommendations for applying the CSM methodology to health services research data are provided. A sketch of a CSM specification follows this entry.

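The sketch below shows how a covariance structure model of the kind described above (latent constructs measured by observed indicators, with direct and indirect paths to a relevance index) can be written down. The semopy package and lavaan-style syntax are my assumptions, and the variable names, data file, and path structure are placeholders rather than the paper's actual seven indicators and ten equations.

```python
import pandas as pd
import semopy  # structural equation modeling package (assumed available)

# lavaan-style specification with hypothetical names: two latent predictors
# (resource structure, resource quantity) and a latent relevance index RI
# measured by observed indicators x1..x5, y1, y2
desc = """
structure =~ x1 + x2 + x3
quantity  =~ x4 + x5
RI        =~ y1 + y2
RI ~ structure + quantity
quantity ~ structure
"""

data = pd.read_csv("catchment_area_indicators.csv")   # hypothetical data file
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())   # path coefficients (direct effects); indirect effects
                         # are products of paths, e.g. structure -> quantity -> RI
```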

Parametric survival model based on the Lévy distribution

  • Valencia-Orozco, Andrea;Tovar-Cuevas, Jose R.
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 26, No. 5
    • /
    • pp.445-461
    • /
    • 2019
  • Data are not always fitted with sufficient precision by the commonly used distributions; therefore, this article presents a methodology that enables families of asymmetric distributions to be used as alternative probabilistic models for survival analysis with right censoring, different from those usually studied (the Exponential, Gamma, Weibull, and Lognormal distributions). We use a parametric model that is more flexible in terms of density behavior, assuming that the data can be fitted by a member of the stable distribution family, which is unconventional in survival analysis but appropriate when extreme values occur with small probabilities that should not be ignored. The methodology includes the derivation of the analytical expression of the hazard function h(t) of the Lévy distribution, as it is not usually reported in the literature. A simulation was conducted to evaluate the performance of the candidate distribution when modeling survival times, including parameter estimation via the maximum likelihood method, the survival function Ŝ(t), and the Kaplan-Meier estimator. The estimates did not exhibit significant changes across different sample sizes and censoring fractions. To illustrate the usefulness of the proposed methodology, an application to real data on the survival times of patients with colon cancer is presented. A sketch of the Lévy hazard and maximum-likelihood fit follows this entry.
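
The hazard function referred to above is the ratio of the Lévy density to its survival function. Below is a minimal scipy sketch of that quantity together with a maximum-likelihood fit of the scale parameter; right censoring and the Kaplan-Meier comparison from the paper are omitted, and the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import levy

# simulate survival-like times from a Lévy distribution (location 0, scale c)
c_true = 1.5
times = levy.rvs(loc=0, scale=c_true, size=500, random_state=2)

# maximum-likelihood fit with the location fixed at zero
_, c_hat = levy.fit(times, floc=0)

# hazard h(t) = f(t) / S(t): density divided by the survival function
t = np.linspace(0.05, 10, 200)
hazard = levy.pdf(t, loc=0, scale=c_hat) / levy.sf(t, loc=0, scale=c_hat)

print(f"estimated scale c = {c_hat:.3f} (true {c_true})")
print("hazard at t = 1, 5, 10:",
      np.round(hazard[[np.argmin(np.abs(t - v)) for v in (1, 5, 10)]], 4))
```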

Comparative analysis of model performance for predicting the customer of cafeteria using unstructured data

  • Seungsik Kim;Nami Gu;Jeongin Moon;Keunwook Kim;Yeongeun Hwang;Kyeongjun Lee
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 30, No. 5
    • /
    • pp.485-499
    • /
    • 2023
  • This study aimed to predict the number of meals served in a group cafeteria using machine learning methodology. Features of the menu were created through the Word2Vec methodology and clustering, and a stacking ensemble model was constructed using Random Forest, Gradient Boosting, and CatBoost as sub-models. Results showed that CatBoost performed best among the sub-models, and the ensemble model yielded an 8% improvement in performance. The study also found that the date variable had the greatest influence on the number of diners in a cafeteria, followed by menu characteristics and other variables. The implications of the study include the potential for machine learning methodology to improve predictive performance and reduce food waste, as well as the removal of subjective elements from menu classification. Limitations of the research include the limited number of data cases and a weak model structure when new menus or foreign words are absent from the training data. Future studies should aim to address these limitations. A sketch of the stacking setup follows this entry.
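
Below is a minimal sketch of the stacking setup described above, with Random Forest, Gradient Boosting, and CatBoost as sub-models combined by a simple meta-learner. The Word2Vec menu embedding and clustering steps are omitted, the feature matrix is synthetic, and scikit-learn's StackingRegressor with a ridge meta-learner is my choice of implementation, not necessarily the authors'.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from catboost import CatBoostRegressor   # assumed installed

# placeholder feature matrix: date-derived features plus menu-cluster features
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 12))
y = 400 + 60 * X[:, 0] + 25 * X[:, 1] + rng.normal(0, 20, 1000)   # synthetic diner counts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("cat", CatBoostRegressor(verbose=0, random_seed=0)),
    ],
    final_estimator=RidgeCV(),   # meta-learner combining the sub-models' predictions
)
stack.fit(X_tr, y_tr)
print("hold-out R^2:", round(stack.score(X_te, y_te), 3))
```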

불량탄 안전사고 예방을 위한 탄약 수명 예측 연구 리뷰 (A Review on Ammunition Shelf-life Prediction Research for Preventing Accidents Caused by Defective Ammunition)

  • 정영진;홍지수;김솔잎;강성우
    • 대한안전경영과학회지
    • /
    • Vol. 26, No. 1
    • /
    • pp.39-44
    • /
    • 2024
  • In order to prevent accidents caused by defective ammunition, this paper analyzes recent research on ammunition shelf-life prediction methodology. This work compares the pros and cons of current shelf-life prediction approaches: physical modeling, accelerated testing, and statistical analysis-based prediction techniques. Physical modeling-based prediction is useful for understanding the physical properties and interactions of ammunition. Accelerated testing-based prediction is useful for quickly verifying the reliability and safety of ammunition. Statistical analysis-based prediction is emphasized for its ability to support data-driven decisions. This paper aims to contribute to the early detection of defective ammunition by analyzing shelf-life prediction methodology, thereby reducing accidents caused by defective ammunition. To prepare not only for a contingency on the Korean peninsula but also for international situations such as those in Eastern Europe and the Middle East, it is very important to enhance the stability of organizations that use ammunition and to reduce the costs of potential accidents. An illustrative degradation-regression sketch follows this entry.
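
As a purely illustrative example of the statistical-analysis-based approach surveyed above (not a method taken from the reviewed papers), the sketch below fits a linear degradation model to hypothetical surveillance measurements and extrapolates to the time at which a serviceability threshold is crossed. All numbers and the threshold are assumptions.

```python
import numpy as np

# hypothetical stabilizer-content measurements from a storage surveillance program
years   = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
quality = np.array([100.0, 97.8, 95.9, 93.7, 92.1, 89.8, 88.2])   # % of initial value
threshold = 80.0    # assumed serviceability limit (% of initial value)

# ordinary least-squares linear degradation model: quality = b0 + b1 * years
b1, b0 = np.polyfit(years, quality, deg=1)

# predicted shelf life: time at which the fitted line crosses the threshold
shelf_life = (threshold - b0) / b1
print(f"degradation rate = {b1:.2f} %/year, predicted shelf life = {shelf_life:.1f} years")
```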

Conditional Density based Statistical Prediction

  • J Rama Devi;K. Koteswara Rao;M Venkateswara Rao
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 6
    • /
    • pp.127-139
    • /
    • 2023
  • Many real problems, for example stock market prediction and weather forecasting, have inherent randomness associated with them. Adopting a probabilistic framework for prediction can accommodate this uncertain relationship between past and future. Typically, the quantity of interest is the conditional probability density of the random variable involved. One approach to prediction uses time series and autoregressive models. In this work, the linear prediction method and an approach for computing the prediction coefficients are given, and the probability of error for different estimators is calculated. The existing techniques all require, in some respect, estimating a parameter of some assumed structure. An alternative approach is therefore proposed: estimate the conditional density of the random variable involved. The approach proposed here estimates the (discretized) conditional density using a Markovian formulation; when two random variables are statistically dependent, knowing the value of one of them allows us to better estimate the value of the other. The conditional density is estimated as the ratio of the two-dimensional joint density to the one-dimensional density of the conditioning variable wherever the latter is positive. Markov models are used for problems that involve making a sequence of decisions and for problems with an inherent temporal structure, that is, a process that unfolds over time. In continuous-time Markov chain models, the time intervals between two successive transitions may also be continuous random variables. The Markovian approach is particularly simple and fast for almost all classes of problems requiring the estimation of conditional densities. A numerical sketch of the ratio-of-densities estimate follows this entry.
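
Below is a minimal numpy sketch of the ratio-of-densities idea described above: discretize consecutive pairs (x_{t-1}, x_t), estimate the conditional density as the two-dimensional joint histogram divided by the one-dimensional marginal wherever the marginal is positive, and predict the next value by the conditional mean. The AR(1) toy series and bin count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy AR(1) series standing in for a real time series
x = np.empty(5000)
x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.7 * x[t - 1] + rng.normal(0, 1)

bins = np.linspace(x.min(), x.max(), 31)               # 30 bins (illustrative)
centers = 0.5 * (bins[:-1] + bins[1:])

# discretized joint density of (x_{t-1}, x_t) and marginal density of x_{t-1}
joint, _, _ = np.histogram2d(x[:-1], x[1:], bins=[bins, bins], density=True)
marginal, _ = np.histogram(x[:-1], bins=bins, density=True)

# conditional density p(x_t | x_{t-1}) = joint / marginal wherever marginal > 0
with np.errstate(divide="ignore", invalid="ignore"):
    cond = np.where(marginal[:, None] > 0, joint / marginal[:, None], 0.0)

# one-step prediction: conditional mean given the bin of the last observation
i = np.clip(np.digitize(x[-1], bins) - 1, 0, centers.size - 1)
row = cond[i]
pred = (row * centers).sum() / row.sum() if row.sum() > 0 else x[-1]
print(f"last value {x[-1]:.2f}, predicted next value {pred:.2f}")
```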

Bayesian Hypothesis Testing in Multivariate Growth Curve Model

  • Kim, Hea-Jung;Lee, Seung-Joo
    • Journal of the Korean Statistical Society
    • /
    • Vol. 25, No. 1
    • /
    • pp.81-94
    • /
    • 1996
  • This paper suggests a new criterion for testing a general linear hypothesis about the coefficients in a multivariate growth curve model. It is developed from a Bayesian point of view using the highest posterior density region methodology. The likelihood ratio test criterion (LRTC) of Khatri (1966) results as an approximate special case. It is shown that, in the simple case of a vague prior distribution for the multivariate normal parameters, an LRTC-like criterion results, but with lower degrees of freedom, so the suggested criterion yields a more conservative test than is warranted by the classical LRTC, a result analogous to that of Berger and Sellke (1987). Moreover, more general (non-vague) prior distributions generate a richer class of tests than were previously available. A sketch of HPD-region testing follows this entry.

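The highest-posterior-density testing idea above can be illustrated generically: draw posterior samples of a coefficient, form the shortest interval containing 95% of them, and reject the point hypothesis if the hypothesized value falls outside. The sketch below uses a normal posterior as a stand-in for the growth-curve posterior; it shows the HPD mechanism only, not the paper's specific criterion.

```python
import numpy as np

def hpd_interval(samples, prob=0.95):
    """Shortest interval containing `prob` of the posterior draws."""
    s = np.sort(samples)
    n = s.size
    k = int(np.ceil(prob * n))
    widths = s[k - 1:] - s[:n - k + 1]
    i = np.argmin(widths)
    return s[i], s[i + k - 1]

rng = np.random.default_rng(5)
# stand-in posterior draws for one growth-curve coefficient (normal posterior assumed)
beta_draws = rng.normal(loc=0.8, scale=0.3, size=20000)

lo, hi = hpd_interval(beta_draws, prob=0.95)
beta0 = 0.0                       # hypothesized value under H0: beta = 0
decision = "reject" if not (lo <= beta0 <= hi) else "do not reject"
print(f"95% HPD interval = ({lo:.3f}, {hi:.3f}); H0: beta = 0 -> {decision}")
```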