• Title/Summary/Keyword: truncated normal distribution (절단정규분포)


Exceedance probability of allowable sliding distance of caisson breakwaters in Korea (국내 케이슨 방파제의 허용활동량 초과확률)

  • Kim, Seung-Woo;Suh, Kyung-Duck
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.21 no.6
    • /
    • pp.495-507
    • /
    • 2009
  • The expected sliding distance over the lifetime of a caisson breakwater has limitations as a stability criterion for the breakwater. Because the expected sliding distance is calculated as the mean of simulated sliding distances over the lifetime, the actual sliding distance may exceed it. To overcome this problem, the exceedance probability of the allowable sliding distance is used to assess sliding stability. Latin Hypercube sampling and crude Monte Carlo simulation were used to calculate the exceedance probability. The doubly-truncated normal distribution was considered as the distribution of the random variables, to remedy the physical shortcoming of the normal distribution. When the normal distribution was used, the cross-sections of Okgye, Hwasun, and Donghae NI before reinforcement were found to be unstable in all limit states. When the doubly-truncated normal distribution was applied instead, the cross-sections of Hwasun and Donghae NI before reinforcement were evaluated as unstable in the repairable limit state and in all limit states, respectively. Finally, the shortcoming of the expected sliding distance as a stability criterion was examined, and the sliding stability of caissons in Korean breakwaters was assessed more reasonably by using the exceedance probability of the allowable sliding distance.
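
The exceedance-probability calculation lends itself to a short sketch. The sliding-distance model `lifetime_sliding_distance` below is a hypothetical stand-in (the paper's wave-load and caisson sliding model is not reproduced); the sketch only illustrates drawing random variables from a doubly-truncated normal with `scipy.stats.truncnorm` and estimating P(sliding > allowable) by crude Monte Carlo:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def doubly_truncated_normal(mean, sd, lower, upper, size):
    """Draw from a normal(mean, sd) truncated to [lower, upper]."""
    a, b = (lower - mean) / sd, (upper - mean) / sd  # standardized bounds
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=rng)

def lifetime_sliding_distance(friction, wave_height):
    """Hypothetical response model for cumulative lifetime sliding (m)."""
    return np.maximum(wave_height - 8.0 * friction, 0.0) ** 2

n_sim = 100_000
friction = doubly_truncated_normal(0.6, 0.1, 0.3, 0.9, n_sim)  # friction coefficient
wave = doubly_truncated_normal(6.0, 1.5, 0.0, 12.0, n_sim)     # wave height (m)

sliding = lifetime_sliding_distance(friction, wave)
allowable = 0.3  # allowable sliding distance (m)
print("P(exceedance) ~", np.mean(sliding > allowable))
```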

The Shapiro-Wilk Type Test for Exponentiality Based on Progressively Type II Censored Data (전진 제 2종 중도절단자료에 대한 Shapiro-Wilk 형태의 지수검정)

  • Kim, Nam-Hyun
    • The Korean Journal of Applied Statistics
    • /
    • v.23 no.3
    • /
    • pp.487-495
    • /
    • 2010
  • This paper develops a goodness-of-fit statistic to test whether a progressively Type II censored sample comes from an exponential distribution with known origin. The test is based on normalized spacings and on Stephens' (1978) modification of the Shapiro and Wilk (1972) test for exponentiality, where the modification covers the case of a known origin. We apply the same modification to Kim's (2001a) statistic, which is based on the ratio of two asymptotically efficient estimates of scale. The simulation results show that Kim's (2001a) statistic has higher power than Stephens' modified Shapiro-Wilk statistic in almost all cases.
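
The normalized-spacings step is standard for progressive Type II censoring: with removal scheme R_1, ..., R_m, the spacings Z_1 = n X_(1) and Z_i = (n - i + 1 - R_1 - ... - R_{i-1})(X_(i) - X_(i-1)) are i.i.d. exponential under the null. A minimal sketch follows; the final ratio is only an illustrative check (for exponential data the squared mean equals the variance in expectation), not Stephens' or Kim's exact statistic:

```python
import numpy as np

def normalized_spacings(x, R, n):
    """Normalized spacings of a progressively Type II censored sample.

    x : ordered observed failure times x_(1) < ... < x_(m)
    R : surviving units removed at each failure time
    n : initial sample size
    Under exponentiality with origin 0, the spacings are i.i.d. exponential.
    """
    x = np.asarray(x, dtype=float)
    z = np.empty(len(x))
    remaining = n
    prev = 0.0  # known origin
    for i in range(len(x)):
        z[i] = remaining * (x[i] - prev)
        prev = x[i]
        remaining -= 1 + R[i]  # one failure plus R_i removals
    return z

# Toy example: n = 10 units, m = 5 observed failures
x = [0.12, 0.35, 0.44, 0.81, 1.30]
R = [1, 0, 1, 1, 2]
z = normalized_spacings(x, R, n=10)

# Illustrative check only (not the paper's statistic):
print(z, z.mean() ** 2 / z.var(ddof=1))
```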

Bayesian Model Selection of Lifetime Models using Fractional Bayes Factor with Type II Censored Data (제2종 중단모형에서 FRACTIONAL BAYES FACTOR를 이용한 신뢰수명 모형들에 대한 베이지안 모형선택)

  • 강상길;김달호;이우동
    • The Korean Journal of Applied Statistics
    • /
    • v.13 no.2
    • /
    • pp.427-436
    • /
    • 2000
  • In this paper, we consider a Bayesian model selection problem for lifetime distributions, using the fractional Bayes factor with a noninformative prior when Type II censored data are given. For given Type II censored data, we calculate the posterior probabilities of the exponential, Weibull, and lognormal distributions and select the model with the highest posterior probability. The proposed methodology is explained and applied to real and simulated data.
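
For the exponential model, the fractional marginal likelihoods have a closed form under the noninformative prior π(θ) ∝ 1/θ: with r observed failures, total time on test T, and fraction b, the likelihood is proportional to θ^r e^{-θT}, so m(x) ∝ Γ(r)/T^r and the fractional marginal is Γ(br)/(bT)^{br}. A sketch of this piece (my derivation, not code from the paper; the Weibull and lognormal marginals would require numerical integration, and b = 1/r below is one conventional minimal-fraction assumption):

```python
import numpy as np
from scipy.special import gammaln

def log_fbf_exponential(x_obs, n, b):
    """Log fractional marginal ratio m(x)/m_b(x) for the exponential model
    with prior pi(theta) ∝ 1/theta under Type II censoring:
    r observed failures out of n units, likelihood ∝ theta^r exp(-theta*T).
    """
    x = np.sort(np.asarray(x_obs, dtype=float))
    r = len(x)
    T = x.sum() + (n - r) * x[-1]                    # total time on test
    log_m = gammaln(r) - r * np.log(T)               # full marginal
    log_mb = gammaln(b * r) - b * r * np.log(b * T)  # fractional marginal
    return log_m - log_mb

# Toy data: first r = 8 failures out of n = 12 units
failures = [0.3, 0.5, 0.9, 1.1, 1.6, 2.0, 2.4, 3.1]
print(log_fbf_exponential(failures, n=12, b=1 / len(failures)))
```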


On principal component analysis for interval-valued data (구간형 자료의 주성분 분석에 관한 연구)

  • Choi, Soojin;Kang, Kee-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.1
    • /
    • pp.61-74
    • /
    • 2020
  • Interval-valued data, one type of symbolic data, are observed in the form of intervals rather than single values, and each interval-valued observation has an internal variation. Principal component analysis reduces the dimension of data by maximizing the variance of the data, so principal component analysis of interval-valued data should account for the variation within the observed intervals as well as the variance between observations. In this paper, three principal component analysis methods for interval-valued data are summarized. In addition, a new method using a truncated normal distribution is proposed in place of the uniform distribution in the conventional quantile method, because we believe there is more information near the center point of the interval. The methods are compared using simulations and a relevant OECD data set. For the quantile method, we draw a scatter plot of the principal components and identify the position and distribution of the quantiles by the arrow-line representation method.
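
The quantile idea can be sketched as follows: each interval is replaced by quantile points of a normal centered at the interval midpoint and truncated to the interval, and ordinary PCA is run on the expanded point cloud. The standard-deviation rule and the number of quantile points below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np
from scipy.stats import truncnorm
from sklearn.decomposition import PCA

def interval_to_quantile_points(lower, upper, n_q=5, spread=4.0):
    """Represent each interval [lower, upper] by n_q quantile points of a
    normal centered at the midpoint, truncated to the interval.
    sd = width / spread concentrates mass near the center (illustrative).
    """
    mid = (lower + upper) / 2.0
    sd = np.maximum((upper - lower) / spread, 1e-12)
    a, b = (lower - mid) / sd, (upper - mid) / sd   # standardized bounds
    probs = np.arange(1, n_q + 1) / (n_q + 1)       # interior quantile levels
    return truncnorm.ppf(probs[None, None, :], a[..., None], b[..., None],
                         loc=mid[..., None], scale=sd[..., None])

# Toy interval data: 3 observations, 2 variables, intervals [L, U]
L = np.array([[1.0, 10.0], [2.0, 12.0], [0.5, 9.0]])
U = np.array([[3.0, 14.0], [5.0, 15.0], [2.0, 13.0]])

pts = interval_to_quantile_points(L, U)             # shape (3, 2, 5)
flat = pts.transpose(0, 2, 1).reshape(-1, 2)        # stack quantile points
print(PCA(n_components=2).fit_transform(flat)[:5])
```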

Initialization by using truncated distributions in artificial neural network (절단된 분포를 이용한 인공신경망에서의 초기값 설정방법)

  • Kim, MinJong;Cho, Sungchul;Jeong, Hyerin;Lee, YungSeop;Lim, Changwon
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.5
    • /
    • pp.693-702
    • /
    • 2019
  • Deep learning has gained popularity for classification and prediction tasks, and neural networks become deeper as more data become available. Saturation is the phenomenon in which the gradient of an activation function approaches 0; it can occur when weight values are too large, and it limits the ability of the weights to learn. To resolve this problem, Glorot and Bengio (Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256, 2010) argued that efficient neural network training is possible when data flow evenly between layers, and proposed an initialization method that makes the variance of each layer's output equal to the variance of its input. In this paper, we propose a new initialization method based on the truncated normal and truncated Cauchy distributions. We adapt the initialization of Glorot and Bengio (2010) while deciding where to truncate each distribution: the input and output variances are made equal by setting them equal to the variance of the truncated distribution. This keeps the initial weights from growing too large without letting them shrink too close to zero. To compare the performance of the proposed method with existing methods, we conducted experiments on the MNIST and CIFAR-10 data sets using a DNN and a CNN. The proposed method outperformed the existing methods in terms of accuracy.
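
A minimal sketch of the truncated-normal variant of this idea: sample weights from a normal truncated at ±`cut` standard deviations and rescale so that the variance of the truncated distribution equals the Glorot target 2/(fan_in + fan_out). The truncation point `cut = 2.0` is an illustrative default; the paper derives its own choice:

```python
import numpy as np
from scipy.stats import truncnorm

def glorot_truncated_normal(fan_in, fan_out, cut=2.0, rng=None):
    """Weights from a truncated normal whose variance matches the
    Glorot/Xavier target 2 / (fan_in + fan_out)."""
    if rng is None:
        rng = np.random.default_rng()
    target_var = 2.0 / (fan_in + fan_out)   # Glorot & Bengio (2010)
    base_var = truncnorm.var(-cut, cut)     # variance at scale = 1
    scale = np.sqrt(target_var / base_var)  # rescale so variance == target
    return truncnorm.rvs(-cut, cut, scale=scale,
                         size=(fan_in, fan_out), random_state=rng)

W = glorot_truncated_normal(256, 128)
print(W.var(), 2.0 / (256 + 128))  # empirical vs. target variance
```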

A Study on the Optimal Cut-off Point in the Cut-off Sampling Method (절사표본에서 최적 절사점에 관한 연구)

  • Lee, Sang Eun;Cho, Min Ji;Shin, Key-Il
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.3
    • /
    • pp.501-512
    • /
    • 2014
  • Modified cut-off sampling is widely used for highly skewed data. A serious drawback of modified cut-off sampling is the difficulty of adjusting for non-response in the take-all stratum, so solutions to the non-response problem in the take-all stratum have been studied in various ways, such as sample substitution, imputation, and re-weighting. In this paper, a new cut-off point that minimizes the MSE under exponential and power functions is suggested, which can reduce the size of the take-all stratum. We also investigate another cut-off point determination method based on underlying distributions such as the truncated log-normal and truncated gamma distributions. Finally, we suggest as optimal the cut-off point that yields the smallest take-all stratum among the suggested methods. Simulation studies are performed, and Labor Survey data and simulated data are used for the case study.
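
A grid-search sketch of the cut-off decision, under simplifying assumptions: the MSE is reduced to the SRS variance of the take-some stratum estimator (the paper's exponential and power-function MSE models are not reproduced), and the take-all stratum is fully enumerated out of a fixed sample budget:

```python
import numpy as np

def takeall_size_by_cutoff(y, cutoffs, n_sample=100):
    """For each candidate cut-off, split a skewed population into a take-all
    stratum (y >= cutoff, fully enumerated) and a take-some stratum sampled
    by SRS with the remaining budget; report the variance of the estimated
    total. A simplified stand-in for the paper's MSE models.
    """
    results = []
    for c in cutoffs:
        take_all = y[y >= c]
        take_some = y[y < c]
        n_ts = max(n_sample - len(take_all), 2)  # remaining sample budget
        N_ts = len(take_some)
        if N_ts < 2 or n_ts >= N_ts:
            var = 0.0                            # take-some stratum is a census
        else:
            var = N_ts**2 * (1 - n_ts / N_ts) * take_some.var(ddof=1) / n_ts
        results.append((c, len(take_all), var))
    return results

rng = np.random.default_rng(1)
y = rng.lognormal(mean=2.0, sigma=1.2, size=5000)  # skewed toy population
for c, n_ta, var in takeall_size_by_cutoff(y, np.quantile(y, [0.90, 0.95, 0.98, 0.99])):
    print(f"cutoff={c:8.1f}  take-all size={n_ta:4d}  est. variance={var:12.1f}")
```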

AUC and VUS using truncated distributions (절단함수를 이용한 AUC와 VUS)

  • Hong, Chong Sun;Hong, Seong Hyuk
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.4
    • /
    • pp.593-605
    • /
    • 2019
  • Significant literature exists on the area under the ROC curve (AUC) and the volume under the ROC surface (VUS), which are statistical measures of the discriminant power of classification models. Whereas the partial AUC is restricted only in the false positive rate, the two-way partial AUC is restricted in both the false positive rate and the true positive rate, and has been suggested as more efficient and accurate than the partial AUC. The partial VUS as well as the three-way partial VUS have also been developed for the ROC surface. In this paper, an AUC is expressed with probability and integration using two truncated distribution functions restricted in both the false positive rate and the true positive rate, and this AUC is shown to be related to the two-way partial AUC. The three-way partial VUS for the ROC surface is likewise related to the VUS based on truncated distribution functions. These AUC and VUS measures are represented and estimated in terms of Mann-Whitney statistics, and their parametric and nonparametric estimation methods are explored based on normal distributions and random samples.
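
A nonparametric sketch of the two-way partial AUC: integrate the empirical ROC curve over the region with FPR ≤ fpr_max, counting only the height above tpr_min. This grid integration is a stand-in for the Mann-Whitney representation used in the paper, and the bounds are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_curve

def two_way_partial_auc(y_true, scores, fpr_max=0.3, tpr_min=0.7):
    """Area under the ROC curve restricted to FPR <= fpr_max and
    TPR >= tpr_min, via trapezoidal integration on a grid."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(0.0, fpr_max, 1001)
    roc_on_grid = np.interp(grid, fpr, tpr)          # interpolated empirical ROC
    height = np.clip(roc_on_grid - tpr_min, 0.0, None)
    step = grid[1] - grid[0]
    return (height[:-1] + height[1:]).sum() / 2 * step

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.0, 1, 500),    # healthy
                         rng.normal(1.2, 1, 500)])   # diseased
labels = np.repeat([0, 1], 500)
print(two_way_partial_auc(labels, scores))
```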

A Comparative Study of the Relationship between Port Efficiency and Ownership Structure (항만 소유구조에 따른 효율성 모형 비교연구)

  • Hwang, Jin-Soo;Jorn, Hong-Suk;Kan, Sung-Chan
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.6
    • /
    • pp.1167-1176
    • /
    • 2009
  • Few studies have investigated the quantitative relationship between port ownership structure and port efficiency, and those have produced mixed results. This paper therefore contributes to the empirical literature by investigating the impact of port privatization on port efficiency using sample data drawn from the world's major ports. The Bayesian approach is applied to estimate the impact of port ownership on port efficiency: we fit the Bayesian stochastic frontier model introduced by Griffin and Steel (2007) using WinBUGS, with data on 25 of the world's main ports. Based on MCMC sampling, we estimate the parameters of the model and the efficiency index of each port, and we add estimates from the Frontier 4.1c package for comparison with the Bayesian results.
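
A simplified sketch of a Bayesian stochastic frontier in PyMC rather than WinBUGS (Griffin and Steel (2007) use a more elaborate prior structure; the exponential inefficiency term and the toy data below are assumptions):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(3)
n, k = 25, 2                                    # ports, input variables
X = rng.normal(size=(n, k))                     # toy log-inputs
log_y = 1.0 + X @ np.array([0.5, 0.3]) - rng.exponential(0.3, n) \
        + rng.normal(0, 0.1, n)                 # toy log-output

with pm.Model() as sfa:
    beta0 = pm.Normal("beta0", 0.0, 10.0)
    beta = pm.Normal("beta", 0.0, 10.0, shape=k)
    sigma_v = pm.HalfNormal("sigma_v", 1.0)     # symmetric noise scale
    lam = pm.HalfNormal("lam", 1.0)             # mean inefficiency
    u = pm.Exponential("u", 1.0 / lam, shape=n) # one-sided inefficiency
    mu = beta0 + pm.math.dot(X, beta) - u
    pm.Normal("log_output", mu=mu, sigma=sigma_v, observed=log_y)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Posterior-mean technical efficiency index per port: exp(-u)
eff = np.exp(-trace.posterior["u"]).mean(dim=("chain", "draw")).values
print(eff.round(3))
```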

Study on the Statistical Optimum Model of Simple Linear Regression to Estimate the Purchasing Price of Diamond (다이아몬드 구매가격 예측을 위한 통계적 단순 선형회기 최적화 모형에 관한 연구)

  • 이영욱
    • The Journal of Information Technology
    • /
    • v.3 no.1
    • /
    • pp.37-44
    • /
    • 2000
  • The estimated purchase price of a diamond is affected by carat, color, clarity, certificate, and cut, with price measured in $/carat. The object of this study is to obtain a linear regression model for the estimated purchase price and to test it statistically. The optimal model is the regression model $\hat{y} = 10^2 / (-1.5575 + 0.3099 \log x) + \varepsilon$, which satisfies the lack-of-fit test and has the properties of normality, constant variance, and symmetry.
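
A model of this form is linear after the transformation 100/y = a + b log x, so it can be fitted by ordinary least squares on the transformed response. A sketch with synthetic stand-in data (the paper's diamond data are not reproduced, and the range of x is chosen so the fitted denominator stays positive):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)

# Synthetic stand-in data generated from the published coefficients
x = rng.uniform(200.0, 2000.0, size=200)   # predictor (log x > 5 here)
y = 100.0 / (-1.5575 + 0.3099 * np.log(x)) + rng.normal(0, 0.5, 200)

# y = 100 / (a + b*log(x)) is linear in the transformed response 100/y
res = linregress(np.log(x), 100.0 / y)
print(f"a = {res.intercept:.4f}, b = {res.slope:.4f}")
```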


Estimation of the survival function of the legislative process in Korea: based on the experiences of the 17th, 18th, and 19th National Assembly of Korea (국회 법안 검토 기간의 생존함수 추정: 제 17, 18, 19대 국회의 사례를 바탕으로)

  • Yun, Yeonggyu;Cho, Yunsoo;Jung, Hye-Young
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.4
    • /
    • pp.503-515
    • /
    • 2019
  • In this study, we estimate the survival function of the duration of the legislative process in the 17th, 18th, and 19th National Assembly of Korea, and analyze the effects of political situation variables on the legislative process. We define the termination of the legislative process from a novel perspective to alleviate dependency between censoring and failure in the data. We also show that the proportional hazards assumption does not hold for the data, and therefore analyze the data with a log-normal accelerated failure time model. The policy areas of the bills are shown to affect the speed of the legislative process in different ways, and the legislative process tends to be prompt in times of divided government.
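
A log-normal accelerated failure time fit of this kind can be sketched with the `lifelines` package; the data and the `divided_gov` covariate below are synthetic stand-ins for the bill-level data:

```python
import numpy as np
import pandas as pd
from lifelines import LogNormalAFTFitter

rng = np.random.default_rng(5)
n = 300

# Synthetic stand-in: review duration (days), an event indicator
# (1 = bill decided, 0 = censored at term end), and a hypothetical
# divided-government covariate.
divided = rng.integers(0, 2, n)
duration = np.exp(5.0 - 0.6 * divided + rng.normal(0, 0.8, n))
event = (rng.uniform(size=n) < 0.7).astype(int)

df = pd.DataFrame({"duration": duration, "event": event,
                   "divided_gov": divided})

aft = LogNormalAFTFitter()
aft.fit(df, duration_col="duration", event_col="event")
aft.print_summary()  # negative coefficient => faster legislative process
```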