• Title/Abstract/Keywords: statistics based method

Search results: 2,157 items (processing time: 0.026 seconds)

Vision-based Predictive Model on Particulates via Deep Learning

  • Kim, SungHwan;Kim, Songi
    • Journal of Electrical Engineering and Technology
    • Vol. 13, No. 5
    • pp.2107-2115
    • 2018
  • Over recent years, high concentrations of particulate matter (a.k.a. fine dust) in South Korea have increasingly evoked considerable concerns about public health. It is intractable to track and report $PM_{10}$ measurements to the public on a real-time basis. Even worse, such records merely amount to averaged particulate concentrations over particular regions. Under these circumstances, people are prone to being put at risk by rapidly dispersing air pollution. To address this challenge, we attempt to build a predictive model, via deep learning, of the concentration of particulates ($PM_{10}$). The proposed method learns a binary decision rule on the basis of video sequences to predict in real time whether the level of particulates ($PM_{10}$) is harmful (>$80{\mu}g/m^3$) or not. To the best of our knowledge, no vision-based $PM_{10}$ measurement method has been proposed in atmosphere research. In experimental studies, the proposed model is found to outperform other existing algorithms by virtue of convolutional deep learning networks. In this regard, we believe this vision-based predictive model has considerable potential to handle upcoming challenges related to particulate measurement.
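
Below is a minimal Python/PyTorch sketch of the kind of binary convolutional classifier the abstract describes: given an image frame, predict whether the scene corresponds to a harmful $PM_{10}$ level (>$80{\mu}g/m^3$). The architecture, input resolution, and training details are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class PM10Classifier(nn.Module):
    """Toy CNN emitting a logit for P(PM10 > 80 ug/m^3) from one RGB frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PM10Classifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random tensors standing in for video frames.
frames = torch.randn(8, 3, 64, 64)            # batch of RGB frames (hypothetical size)
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = harmful level, 0 = not harmful
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(f"toy training-step loss: {loss.item():.3f}")
```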

Bootstrapping Logit Model

  • Kim, Dae-hak;Jeong, Hyeong-Chul
    • Communications for Statistical Applications and Methods
    • Vol. 9, No. 1
    • pp.281-289
    • 2002
  • In this paper, we consider an application of the bootstrap method to the logit model. Estimation of the type I error probability, bootstrap p-values, and bootstrap confidence intervals for the parameters are proposed. Small-sample Monte Carlo simulations were conducted to compare the proposed method with the existing asymptotic method based on normal theory.
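
A minimal sketch of a nonparametric bootstrap percentile confidence interval for a logit slope, assuming statsmodels is available; the paper's specific resampling scheme and its type I error estimation are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x))))  # toy binary responses

X = sm.add_constant(x)
beta_hat = sm.Logit(y, X).fit(disp=0).params[1]               # slope estimate

B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)                               # resample cases with replacement
    boot[b] = sm.Logit(y[idx], sm.add_constant(x[idx])).fit(disp=0).params[1]

ci = np.percentile(boot, [2.5, 97.5])                         # bootstrap percentile interval
print(f"slope = {beta_hat:.3f}, 95% bootstrap CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```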

A Comparative Study of Microarray Data with Survival Times Based on Several Missing Mechanism

  • Kim Jee-Yun;Hwang Jin-Soo;Kim Seong-Sun
    • Communications for Statistical Applications and Methods
    • Vol. 13, No. 1
    • pp.101-111
    • 2006
  • One of the most widely used methods of handling missingness in microarray data is the kNN (k-Nearest Neighbor) method. Recently, Li and Gui (2004) suggested the so-called PCR (Partial Cox Regression) method, which deals with censored survival times and microarray data efficiently via the kNN imputation method. In this article, we try to show that the way missingness is treated eventually affects the subsequent statistical analysis.
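
A minimal sketch of kNN imputation of a gene-expression matrix, using scikit-learn's KNNImputer as a stand-in for the kNN scheme referred to above; the genes-by-samples layout and missingness rate are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(1)
expr = rng.normal(size=(200, 20))              # toy expression matrix: 200 genes x 20 arrays
mask = rng.random(expr.shape) < 0.05           # roughly 5% of entries missing at random
expr[mask] = np.nan

imputer = KNNImputer(n_neighbors=5)            # fill a gap from the 5 nearest gene profiles
expr_imputed = imputer.fit_transform(expr)
print("missing before:", int(mask.sum()), "missing after:", int(np.isnan(expr_imputed).sum()))
```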

Application of Bootstrap Method for Change Point Test based on Kernel Density Estimator

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society
    • Vol. 15, No. 1
    • pp.107-117
    • 2004
  • The change point testing problem is considered. Kernel density estimators are used for constructing the proposed change point test statistics. The proposed method can be used for hypothesis testing of not only a parameter change but also a distributional change. The bootstrap method is applied to obtain the sampling distribution of the proposed test statistic. Small-sample Monte Carlo simulations were also conducted to show the performance of the proposed method.

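An illustrative sketch of a kernel-density-based change point test with a bootstrap null distribution. The statistic used here (the maximum L1 distance between segment-wise kernel density estimates) and the resampling-from-the-pooled-sample scheme are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import gaussian_kde

def cp_statistic(x, grid, min_seg=10):
    """Max L1 distance between KDEs of the two segments over all candidate splits."""
    step = grid[1] - grid[0]
    stats = []
    for k in range(min_seg, len(x) - min_seg):
        f1 = gaussian_kde(x[:k])(grid)
        f2 = gaussian_kde(x[k:])(grid)
        stats.append(np.sum(np.abs(f1 - f2)) * step)
    return max(stats)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(1.5, 1.0, 40)])  # mean shift at t = 40
grid = np.linspace(x.min() - 1.0, x.max() + 1.0, 200)

obs = cp_statistic(x, grid)
B = 200                                        # bootstrap replicates under "no change"
null = [cp_statistic(rng.choice(x, size=len(x), replace=True), grid) for _ in range(B)]
p_value = np.mean(np.array(null) >= obs)       # bootstrap p-value
print(f"statistic = {obs:.3f}, bootstrap p-value = {p_value:.3f}")
```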

순수 가연성액체의 인화점추산 -I. 알코올- (Estimation of Flash Points of Pure Flammable Liquids -I. Alcohols-)

  • 하동명;이수경;김문갑
    • 한국안전학회지
    • Vol. 8, No. 2
    • pp.39-43
    • 1993
  • The flash points of flammable liquids are a fundamental and important property relative to fire and explosion hazards. A new estimation method, based on statistics (multiple regression analysis), is being developed for the prediction of the flash points of pure flammable liquids by means of computer simulation. This method has been applied to alcohols. The proposed method has proved to be a general method for predicting the flash points of alcohols.

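A minimal sketch of a multiple-regression flash point estimator in the spirit of the statistical approach described above. The choice of descriptors (boiling point and carbon number) and the placeholder numbers are illustrative assumptions, not the paper's data or fitted model.

```python
import numpy as np

# Placeholder descriptor matrix: [normal boiling point (deg C), carbon number].
X = np.array([[65.0, 1], [78.0, 2], [97.0, 3], [118.0, 4], [138.0, 5]])
y = np.array([11.0, 13.0, 23.0, 35.0, 47.0])    # placeholder flash points (deg C)

A = np.column_stack([np.ones(len(X)), X])       # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # multiple-regression fit

new = np.array([1.0, 108.0, 4.0])               # hypothetical new alcohol (intercept, bp, C#)
print("fitted flash points:", np.round(A @ coef, 1))
print("predicted flash point for the new alcohol:", round(float(new @ coef), 1))
```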

Consideration of a structural-change point in the chain-ladder method

  • Kwon, Hyuk Sung;Vu, Uy Quoc
    • Communications for Statistical Applications and Methods
    • Vol. 24, No. 3
    • pp.211-226
    • 2017
  • The chain-ladder method, which employs run-off data, is widely used in the rate-adjustment and loss-reserving practices of non-life-insurance and health-insurance companies. The method is applicable when the underlying assumption of a consistent development pattern in cumulative loss payments after the occurrence of an insurance event holds. In this study, a modified chain-ladder algorithm is proposed for when the assumption is considered only partially appropriate for the given run-off data. The concept of a structural-change point in the run-off data and its reflection in the estimation of unpaid loss amounts are discussed with numerical illustrations. Experience data from private health insurance coverage in Korea were analyzed based on the suggested method. The performance of the loss-reserve estimation was also compared with that of traditional approaches. We present evidence that reflecting a structural-change point in the chain-ladder method can improve the risk management of the relevant insurance products. The suggested method is expected to be easily utilized in actuarial practice, as the algorithm is straightforward.
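
A minimal sketch of the classical chain-ladder projection on a cumulative run-off triangle, to make the baseline algorithm concrete. The structural-change-point modification proposed in the paper is not reproduced; the comment inside the factor loop marks where such a restriction could be applied.

```python
import numpy as np

tri = np.array([                 # toy cumulative run-off triangle, rows = accident years
    [100.0, 150.0, 175.0, 180.0],
    [110.0, 168.0, 192.0, np.nan],
    [120.0, 185.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

n_dev = tri.shape[1]
factors = []
for j in range(n_dev - 1):
    rows = ~np.isnan(tri[:, j + 1])             # accident years observed at development j+1
    # (A change-point variant could keep only accident years after the change point here.)
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

proj = tri.copy()
for j in range(n_dev - 1):
    future = np.isnan(proj[:, j + 1])
    proj[future, j + 1] = proj[future, j] * factors[j]

latest = np.array([tri[i][~np.isnan(tri[i])][-1] for i in range(tri.shape[0])])
reserve = proj[:, -1] - latest                  # ultimate minus latest observed cumulative
print("development factors:", np.round(factors, 3))
print("reserve by accident year:", np.round(reserve, 1))
```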

소표본 자기상관 자료의 분산 추정을 위한 최적 부분군 크기에 대한 연구 (A Study on Optimal Subgroup Size in Estimating Variance of Small Autocorrelated Samples)

  • 이종선;이재준;배순희
    • 품질경영학회지
    • Vol. 35, No. 2
    • pp.106-112
    • 2007
  • In statistical process control, it is assumed that the process data are independent. However, most chemical processes, such as semiconductor processes, do not satisfy this assumption because of the presence of autocorrelation in the process data. This causes false out-of-control signals in process control and misleading estimates of process capability. In this study, we adopt Shore's method to solve the problem and propose an optimal subgroup size for estimating the variance correctly for AR(1) processes. In particular, we focus on finding a practical subgroup size for small samples based on a simulation study.
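
An illustrative simulation of how subgroup size affects a pooled within-subgroup variance estimate for AR(1) data, to make the problem setting concrete; this is a generic sketch, not Shore's method or the paper's optimal-size rule.

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1(n, phi, sigma=1.0):
    """Simulate an AR(1) path started at zero."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

phi, n_rep, n_obs = 0.7, 500, 120
true_var = 1.0 / (1.0 - phi**2)                 # stationary AR(1) variance

for m in (2, 5, 10, 20):                        # candidate subgroup sizes
    est = []
    for _ in range(n_rep):
        x = ar1(n_obs, phi)
        groups = x[: (n_obs // m) * m].reshape(-1, m)
        est.append(groups.var(axis=1, ddof=1).mean())   # pooled within-subgroup variance
    print(f"m = {m:2d}: mean estimate {np.mean(est):.3f} (true variance {true_var:.3f})")
```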

Estimation for the Exponentiated Exponential Distribution Based on Multiply Type-II Censored Samples

  • Kang Suk-Bok;Park Sun-Mi
    • Communications for Statistical Applications and Methods
    • Vol. 12, No. 3
    • pp.643-652
    • 2005
  • It is known that the exponentiated exponential distribution can be used as a possible alternative to the gamma distribution or the Weibull distribution in many situations. However, the maximum likelihood method does not admit explicit solutions when the sample is multiply censored. We therefore derive approximate maximum likelihood estimators for the location and scale parameters of the exponentiated exponential distribution that are explicit functions of order statistics. We also compare the proposed estimators in terms of mean squared error for various censored samples.
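
A minimal sketch of maximum likelihood fitting for the exponentiated exponential distribution, $F(x) = (1 - e^{-x/\sigma})^{\alpha}$, on a complete sample; the paper's approximate MLEs for multiply Type-II censored samples (closed-form functions of order statistics) are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
alpha_true, sigma_true = 2.0, 1.5
u = rng.random(200)
x = -sigma_true * np.log(1.0 - u ** (1.0 / alpha_true))   # inverse-CDF sampling

def neg_loglik(par):
    a, s = np.exp(par)                                     # log-parameterization keeps a, s > 0
    z = x / s
    return -np.sum(np.log(a) - np.log(s) - z + (a - 1.0) * np.log1p(-np.exp(-z)))

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
alpha_hat, sigma_hat = np.exp(res.x)
print(f"alpha_hat = {alpha_hat:.3f} (true 2.0), sigma_hat = {sigma_hat:.3f} (true 1.5)")
```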

A Study on the Selection of Variogram Using Spatial Correlation

  • Shin, Key-Il;Back, Ki-Jung;Park, Jin-Mo
    • Communications for Statistical Applications and Methods
    • Vol. 10, No. 3
    • pp.835-844
    • 2003
  • A difficulty in spatial data analysis is choosing a suitable theoretical variogram. Generally, mean squared error (MSE) is used as a selection criterion. However, researchers often encounter cases in which the MSE values are almost the same while the parameter estimates differ. In such cases, the selection criterion based on MSE should take the parameter estimates into account. In this paper, we study a method of selecting a variogram using spatial correlation.
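
A generic sketch of fitting two candidate theoretical variogram models to an empirical semivariogram and comparing their MSE, which illustrates the selection problem described above; the correlation-based criterion proposed in the paper is not implemented, and the toy spatial field, binning, and model choices are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 10.0, size=(150, 2))                    # toy sampling locations
z = np.sin(pts[:, 0] / 3.0) + 0.3 * rng.normal(size=len(pts))  # toy spatial field

h = pdist(pts)                                        # pairwise distances
g = 0.5 * pdist(z[:, None], metric="sqeuclidean")     # semivariogram cloud
bins = np.linspace(0.0, 6.0, 13)
idx = np.digitize(h, bins)
lag = np.array([h[idx == k].mean() for k in range(1, len(bins))])
gamma = np.array([g[idx == k].mean() for k in range(1, len(bins))])

def exponential(h, c0, c, a):
    return c0 + c * (1.0 - np.exp(-h / a))

def gaussian(h, c0, c, a):
    return c0 + c * (1.0 - np.exp(-(h / a) ** 2))

for name, model in [("exponential", exponential), ("gaussian", gaussian)]:
    p, _ = curve_fit(model, lag, gamma, p0=[0.1, 0.5, 2.0],
                     bounds=([0.0, 0.0, 1e-3], [np.inf, np.inf, np.inf]))
    mse = np.mean((gamma - model(lag, *p)) ** 2)
    print(f"{name:12s} params = {np.round(p, 3)}  MSE = {mse:.5f}")
```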

A Comparison of Testing Methods for Equality of Survival Distributions with Interval Censored Data

  • Kim, Soo-Hwan;Lee, Shin-Jae;Lee, Jae-Won
    • 응용통계연구
    • Vol. 25, No. 3
    • pp.423-434
    • 2012
  • A two-sample test for equality of survival distributions is one of the important issues in survival analysis, especially for clinical and epidemiological research. For interval-censored data, several testing methods have been developed. This study introduces these testing methods and compares them under various situations through a simulation study. Based on the simulation results, it provides useful information on choosing the most appropriate testing method in a given situation.
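
A skeleton of the kind of simulation comparison the study describes: generate interval-censored data under two scenarios and record rejection rates. The naive midpoint-imputation log-rank test (via the lifelines package) is only a placeholder baseline, not one of the specialized interval-censored tests compared in the paper.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
visits = np.arange(1, 11)                         # scheduled inspection times

def interval_censor(t):
    """Return (midpoint of the visit interval containing t, event indicator)."""
    right = np.searchsorted(visits, t)
    if right == len(visits):                      # event after the last visit: right-censored
        return visits[-1], 0
    left = 0.0 if right == 0 else visits[right - 1]
    return (left + visits[right]) / 2.0, 1        # midpoint imputation, event observed

def simulate(scale_b, n=80, reps=200, alpha=0.05):
    rejections = 0
    for _ in range(reps):
        ta = rng.exponential(scale=5.0, size=n)   # group A event times
        tb = rng.exponential(scale=scale_b, size=n)
        da, ea = zip(*(interval_censor(t) for t in ta))
        db, eb = zip(*(interval_censor(t) for t in tb))
        res = logrank_test(da, db, event_observed_A=ea, event_observed_B=eb)
        rejections += res.p_value < alpha
    return rejections / reps

print("empirical size :", simulate(scale_b=5.0))  # equal distributions
print("empirical power:", simulate(scale_b=3.0))  # group B has a smaller scale
```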