• Title/Summary/Keyword: quantile (분위수)

Search Results: 82

Base-stock Policies for N-stage Serial Inventory Systems with a Normal Distribution (정규분포를 갖는 N차 시리얼 시스템에서의 기초 재고 정책)

  • Kim, Jun-Seok;Kwon, Ick-Hyun;Kim, Sung-Shick
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2004.10a
    • /
    • pp.578-581
    • /
    • 2004
  • This study considers an N-stage serial system in which demand follows a normal distribution. The top node can receive any quantity it requests from its upstream supplier without limit and supplies these quantities to the node below it. Customer demand arises directly at the bottom node, and unmet demand is backordered to the next period. In this setting, we address the problem of determining the optimal base-stock level at each node that minimizes the sum of the holding cost and backorder cost incurred across the entire system. Using simulation and the concept of echelon stock, we propose a method for obtaining each node's base-stock level by determining an appropriate quantile within the demand distribution.
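The quantile idea behind a base-stock level can be sketched for a single node as a critical-fractile calculation (a minimal Python sketch; the paper's method extends this to all N stages via simulation and echelon stock, and the cost figures below are hypothetical):

```python
from statistics import NormalDist

def base_stock_level(mu, sigma, holding_cost, backorder_cost):
    """Base-stock level as the critical-fractile quantile of normally
    distributed demand: choose S so that P(D <= S) = b / (b + h)."""
    critical_fractile = backorder_cost / (backorder_cost + holding_cost)
    return NormalDist(mu, sigma).inv_cdf(critical_fractile)

# hypothetical single-node example: mean demand 100, std 20,
# holding cost 1, backorder cost 9 -> the 90th-percentile demand
S = base_stock_level(100, 20, holding_cost=1, backorder_cost=9)
```

With b/(b+h) = 0.9, S is about 125.6, i.e. the 90% quantile of the demand distribution.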


Power Analysis for Normality Plots (정규성 그래프의 검정력 비교)

  • Lee, Jae-Young;Rhee, Seong-Won
    • Journal of the Korean Data and Information Science Society
    • /
    • v.10 no.2
    • /
    • pp.429-436
    • /
    • 1999
  • We suggest test statistics for normality based on the Q-Q plot and P-P plot and obtain empirical quantiles of these statistics. A power comparison with Shapiro-Wilk's W statistic is also conducted via a Monte Carlo study.
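One common way to turn a Q-Q plot into a test statistic, of the kind this abstract describes, is the correlation between the ordered sample and normal plotting-position quantiles; a sketch (the Blom-type plotting positions here are an assumption, not necessarily the authors' choice):

```python
from statistics import NormalDist
import math
import random

def qq_corr(sample):
    """Correlation between the sorted sample and normal plotting-position
    quantiles; values near 1 support normality (a Q-Q plot statistic)."""
    n = len(sample)
    xs = sorted(sample)
    # Blom-type plotting positions (i - 0.375) / (n + 0.25)
    qs = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    mx, mq = sum(xs) / n, sum(qs) / n
    cov = sum((x - mx) * (q - mq) for x, q in zip(xs, qs))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sq = math.sqrt(sum((q - mq) ** 2 for q in qs))
    return cov / (sx * sq)

random.seed(0)
normal_sample = [random.gauss(0, 1) for _ in range(200)]
r = qq_corr(normal_sample)  # close to 1 for a genuinely normal sample
```

A Monte Carlo study of the kind the paper reports would compare the null distribution of such a statistic against Shapiro-Wilk's W across alternatives.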


Effect of Firm's Activities on Their Performances (혁신활동이 기업의 경영성과에 미치는 영향)

  • Kim, Kwang-Doo;Hong, Woon-Sun
    • Journal of Korea Technology Innovation Society
    • /
    • v.14 no.2
    • /
    • pp.373-404
    • /
    • 2011
  • The purpose of this research is to reveal the effect of innovation on enterprises' economic performance. Studies of this kind began in the 1960s and have progressed actively since then. While theoretical work consistently finds a positive effect of innovation on performance, empirical analyses have produced mixed results. There are several reasons why the empirical results differ from the theoretical ones; the major factors are the use of imperfect statistics and inappropriate analysis methods. This study used a population (1990~2008) of patent data provided by the Korean Intellectual Property Office (KIPO) and a population (1990~2008) of research and development data provided by Korea Investors Service (KIS). A contribution of this study is its extensive statistical analysis: principal component analysis was used to construct an innovativeness index for appropriate index construction, and quantile regression was applied to both the panel analysis and the analysis of rapidly growing companies to minimize error. Dividing the results into growth and profit effects, the effect of technological innovation on firm growth is not significant in the panel analysis but is strongly significant for the top 10% of high-growth firms. Classified by firm size, the effect is significant for the top 10% of high-growth large companies and generally significant for small and medium enterprises. The bottom 10% and bottom 25% of low-growth firms are negatively affected, while high-growth firms above the median are positively affected, especially the top 10%. The effect on profitability is stronger than the effect on growth.
The effect on profit is not significant for all enterprises taken together, but it is significant for enterprises above the bottom 25% and strongest for the top 10% of high-profit enterprises. For large companies, the effect is significant and positive for the top 10% of high-profit enterprises and for the bottom 25%, but negative for low-profit enterprises. For small and medium enterprises, the effect is negative for both the bottom 10% and the bottom 25%, but positive and significant for enterprises above the median, especially high-growth firms. Recognizing how significance differs by quantile is meaningful in itself, but the more suggestive result is that innovation is more effective for small and medium enterprises than for large companies.
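The quantile regression this abstract relies on minimizes the pinball (check) loss rather than squared error; a minimal sketch showing that minimizing this loss over a constant recovers the sample quantile (illustrative numbers only, not the study's data):

```python
def pinball_loss(y, q_hat, tau):
    """Check (pinball) loss used in quantile regression for quantile tau."""
    return sum(tau * (v - q_hat) if v >= q_hat else (tau - 1) * (v - q_hat)
               for v in y)

# minimizing the pinball loss over a constant recovers the tau-quantile,
# which is why quantile regression estimates conditional quantiles
y = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
tau = 0.75
best = min(y, key=lambda c: pinball_loss(y, c, tau))  # the 0.75-quantile
```

In a regression setting the constant is replaced by a linear predictor, giving separate coefficient estimates for each quantile (e.g. the top 10% of high-growth firms).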


Relations between Normal Serum Gamma-glutamyltransferase and Risk Factors of Coronary Heart Diseases according to Age and Gender (연령과 성별에 따른 정상 혈청 Gamma-glutamyltransferase와 관상동맥질환 위험인자와의 관계)

  • Kwon, Se Young;Na, Young Ak
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.48 no.1
    • /
    • pp.22-29
    • /
    • 2016
  • Serum gamma-glutamyltransferase (GGT) has been widely used as a marker of alcohol intake and liver failure. Recently, the relationship between GGT and various diseases has been identified with growing interest. In this study, we examined the relationship between GGT values and risk factors for coronary heart disease among subjects with normal GGT values, excluding heavy drinkers, and compared the differences by age and gender. Data from the 2011 KNHANES were used (N=3,619). When the subjects were categorized into quartiles based on serum GGT level, the ranges were 10~20, 21~27, 28~38, and 39~71 IU/L in men and 6~12, 13~16, 17~22, and 23~42 IU/L in women. The means of most variables were highest in the 4th quartile (Q4); however, age and LDL cholesterol were highest in the 2nd quartile (Q2) in men. The FRS and 10-year CHD risk were highest in the 2nd quartile in men and in the 4th quartile in women. Age increased along with GGT in women, but in men age was highest in the 2nd GGT quartile. Among subjects in their 70s, Q1 and Q2 were most common in men and Q3 and Q4 in women. Although the GGT values were within the normal range, increased GGT was correlated with various risk factors, and the FRS and 10-year CHD risk showed different patterns by age and gender as GGT increased.
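The quartile categorization step used for the serum GGT levels can be sketched as follows (illustrative only; the study's cut points come from its own sample):

```python
def quartile_bins(values):
    """Assign each value to a quartile group Q1..Q4 based on the
    empirical quartile cut points of the sample itself."""
    xs = sorted(values)
    n = len(xs)
    cuts = [xs[int(n * p)] for p in (0.25, 0.5, 0.75)]
    def group(v):
        for i, c in enumerate(cuts, start=1):
            if v < c:
                return f"Q{i}"
        return "Q4"
    return [group(v) for v in values]

# toy data: 100 evenly spread measurements fall 25 per quartile
groups = quartile_bins(list(range(1, 101)))
```

Group means (e.g. of FRS or LDL cholesterol) are then compared across Q1 through Q4, as in the study.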

A Study on the Automation Algorithm to Identify the Geological Lineament using Spatial Statistical Analysis (공간통계분석을 이용한 지질구조선 자동화 알고리즘 연구)

  • Kwon, O-Il;Kim, Woo-Seok;Kim, Jin-Hwan;Kim, Gyo-Won
    • The Journal of Engineering Geology
    • /
    • v.27 no.4
    • /
    • pp.367-376
    • /
    • 2017
  • Recently, tunneling under the seabed is becoming increasingly common in many countries. In Korea, there are proposals to tunnel from the mainland to Jeju Island. Safe construction requires geological structures such as faults to be characterized during the design and construction phases; however, unlike on land, such structures are difficult to survey on the seabed. This study aims to develop an algorithm that uses geostatistics to automatically derive large-scale geological structures on the seabed. The most important considerations in this method are the optimal size of the moving window, the optimal type of spatial statistic, and the determination of the optimal percentile standard. The analysis algorithm was developed using the R language, which comprehensibly presents variations in spatial statistics. The program allows the type and percentile standard of the spatial statistic to be specified by the user, enabling analysis of the geological structure according to variations in spatial statistics. Testing of the algorithm shows that a large, linear geological lineament is best visualized using a 3×3 moving window and an upper 10% standard based on the moving variance value and fractile. In particular, setting the fractile criterion to the upper 0.5% almost entirely eliminates error values from the contour image.
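The moving-window statistic and upper-percentile standard can be sketched as follows (in Python rather than the authors' R, on a toy grid; the window size, statistic, and percentile are the tunable choices the paper investigates):

```python
import statistics

def moving_variance(grid, win=3):
    """Variance within a win x win moving window over a 2-D grid,
    mirroring the paper's 3x3 moving-window spatial statistic."""
    h, w = len(grid), len(grid[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [grid[a][b]
                    for a in range(max(0, i - r), min(h, i + r + 1))
                    for b in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = statistics.pvariance(vals)
    return out

def upper_percentile_mask(stat, pct=10.0):
    """Keep only cells at or above the upper-pct% threshold
    (the percentile / fractile standard)."""
    flat = sorted(v for row in stat for v in row)
    cut = flat[int(len(flat) * (1 - pct / 100.0))]
    return [[v >= cut for v in row] for row in stat]

# a flat elevation grid with one sharp linear step: the lineament
# along column 2 should survive the percentile mask
grid = [[0.0] * 5 for _ in range(5)]
for i in range(5):
    grid[i][2] = 10.0
mask = upper_percentile_mask(moving_variance(grid), pct=40.0)
```

Tightening the percentile (e.g. to the upper 0.5%, as the paper reports) keeps only the strongest linear variance features and suppresses isolated error values.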

A Bayesian Extreme Value Analysis of KOSPI Data (코스피 지수 자료의 베이지안 극단값 분석)

  • Yun, Seok-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.5
    • /
    • pp.833-845
    • /
    • 2011
  • This paper presents a statistical analysis of extreme values of both daily log-returns and daily negative log-returns, computed from KOSPI data for January 3, 1998 through August 31, 2011. The Poisson-GPD model is used as the statistical model for extreme values, and the maximum likelihood method is applied to estimate the parameters and extreme quantiles. A Bayesian approach is also added to the Poisson-GPD model, assuming the usual noninformative prior distribution for the parameters, with the Markov chain Monte Carlo method applied to estimate the parameters and extreme quantiles. Both the maximum likelihood method and the Bayesian method lead to the same conclusion: the distribution of the log-returns has a shorter right tail than the normal distribution, while the distribution of the negative log-returns has a heavier right tail than the normal distribution. An advantage of the Bayesian method in extreme value analysis is that the classical asymptotic properties of the maximum likelihood estimators are not a concern even when the regularity conditions are not satisfied, and that prediction can effectively reflect the uncertainty in both the parameters and a future observation.
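Under the Poisson-GPD model, extreme quantiles take the standard return-level form; a sketch with hypothetical parameter values (the paper estimates such parameters by maximum likelihood and MCMC; these numbers are not from the paper):

```python
import math

def poisson_gpd_return_level(u, sigma, xi, rate, period):
    """T-period return level under the Poisson-GPD model:
    x_T = u + (sigma / xi) * ((rate * T)**xi - 1) for xi != 0,
    with the exponential-tail limit as xi -> 0."""
    if abs(xi) < 1e-12:
        return u + sigma * math.log(rate * period)
    return u + (sigma / xi) * ((rate * period) ** xi - 1.0)

# hypothetical fit: threshold u = 2% daily loss, sigma = 0.8, xi = 0.2,
# an average of 25 threshold exceedances per year, 10-year horizon
level_10yr = poisson_gpd_return_level(2.0, 0.8, 0.2, rate=25, period=10)
```

A positive shape parameter xi corresponds to the heavy right tail the paper finds for negative log-returns; xi below zero would give the shorter-than-normal tail found for log-returns.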

A Study on Demand Selection in Supply Chain Distribution Planning under Service Level Constraints (서비스 수준 제약하의 공급망 분배계획을 위한 수요선택 방안에 관한 연구)

  • Park, Gi-Tae;Kim, Sung-Shick;Kwon, Ick-Hyun
    • Journal of the Korea Society for Simulation
    • /
    • v.15 no.3
    • /
    • pp.39-47
    • /
    • 2006
  • In most supply chain planning practice, the demands forecasted for each period in a forecasting window are treated as deterministic. In reality, however, the forecasted demands over a given horizon are stochastically distributed. Instead of using a safety stock, this study controls the service level directly by choosing the demand used in planning from the distribution of forecasted demand values for the corresponding period. Using the demand quantile and the echelon stock concept, we propose a simple but efficient heuristic algorithm for multi-echelon serial systems under service level constraints. A comprehensive simulation study shows that the proposed algorithm is very accurate compared with the optimal solutions.
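The demand-quantile selection can be sketched for a single period under a normal forecast (hypothetical numbers; the paper's heuristic applies this idea with echelon stock across multiple stages):

```python
from statistics import NormalDist

def planning_demand(mu, sigma, service_level):
    """Choose the planned demand as the service-level quantile of the
    forecast distribution, instead of mean forecast plus safety stock."""
    return NormalDist(mu, sigma).inv_cdf(service_level)

# a 95% service level on a N(500, 60) forecast (hypothetical values)
d = planning_demand(500, 60, 0.95)
```

Raising the service-level constraint moves the planned demand further into the upper tail of the forecast distribution, which is exactly the direct control the abstract describes.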


Testing for the Statistical Interrelationship between the Real Estate and the Stock Markets (부동산시장과 주식시장의 통계적 연관성 검정)

  • Kim, Tae-Ho
    • The Korean Journal of Applied Statistics
    • /
    • v.21 no.3
    • /
    • pp.497-508
    • /
    • 2008
  • As major markets have become closely connected through opening and globalization, instability in one market is increasingly likely to spread to other markets, which calls for careful investigation. Analyzing the short- and long-run dynamics between the stock and real estate markets, the two major investment options, this study conducts statistical tests of the interrelationships between the two markets and of the possibility of a substitution effect between them. In addition, the estimation results appear consistent with a simple causal relationship among the markets in the high-interest-rate period and a relatively complex relationship in the low-interest-rate period.

Dynamic analysis of financial market contagion (금융시장 전염 동적 검정)

  • Lee, Hee Soo;Kim, Tae Yoon
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.1
    • /
    • pp.75-83
    • /
    • 2016
  • We propose a methodology for analyzing the dynamic mechanisms of financial market contagion under market integration, using a biological contagion analytical approach. We employ a U-statistic to measure market integration, and a dynamic model based on an error correction mechanism (a single-equation error correction model) and a latent factor model to examine market contagion. We also use quantile regression and the Wald-Wolfowitz runs test to test for market contagion. The methodology is designed to handle heteroscedasticity and correlated errors effectively. Our simulation results show that the single-equation error correction model fits well with a linear regression model with a stationary predictor and correlated errors.
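The Wald-Wolfowitz runs test used here can be sketched as follows (illustrative; in a contagion setting it would typically be applied to the signs of regression residuals):

```python
import math

def runs_test_z(signs):
    """Wald-Wolfowitz runs test z-statistic on a +/- sequence; a large
    |z| indicates departure from randomness (e.g. clustered regimes)."""
    n1 = sum(1 for s in signs if s > 0)
    n2 = len(signs) - n1
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / math.sqrt(var)

# a perfectly alternating sign sequence has far more runs than expected
# under randomness, giving a large positive z
z = runs_test_z([1, -1] * 10)
```

Clustered signs (fewer runs than expected) would instead give a large negative z, the pattern one would associate with contagion episodes.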

Data Cleansing Algorithm for reducing Outlier (데이터 오·결측 저감 정제 알고리즘)

  • Lee, Jongwon;Kim, Hosung;Hwang, Chulhyun;Kang, Inshik;Jung, Hoekyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.342-344
    • /
    • 2018
  • This paper shows the possibility of substituting the proposed algorithm for statistical methods such as mean imputation, correlation coefficient analysis, and graph correlation analysis, and of replacing the statistician in processing the various abnormal data measured in the water treatment process. In addition, this study models a data-filtering system based on a recent fractile pattern and a deep-learning LSTM algorithm in order to improve the reliability and validity of the algorithm, using open-source libraries such as Keras, Theano, and TensorFlow.
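A simple fractile-based cleansing rule of the kind such a pipeline builds on can be sketched as an interquartile-range filter (illustrative only; the paper's system additionally uses an LSTM stage):

```python
def iqr_filter(values, k=1.5):
    """Drop outliers outside [Q1 - k*IQR, Q3 + k*IQR], a common
    quartile (fractile) based cleansing rule for sensor data."""
    xs = sorted(values)
    n = len(xs)
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# a toy sensor series with one spike: the spike is filtered out
clean = iqr_filter([10, 11, 9, 10, 12, 11, 10, 300])
```

Values flagged this way could then be imputed or passed to a learned model rather than simply dropped, depending on the downstream process.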
