• Title/Summary/Keyword: Likelihood Principle (우도원리)

Search results: 6 (processing time: 0.02 seconds)

Design-based and model-based Inferences in Survey Sampling (표본조사에서 설계기반추론과 모형기반추론)

  • Kim Kyu-Seong
    • The Korean Journal of Applied Statistics, v.18 no.3, pp.673-687, 2005
  • We investigate both design-based and model-based inference, the usual inferential methods in survey sampling. While design-based inference rests on the randomization principle, model-based inference is based on the likelihood principle as well as the conditionality principle. The two approaches have been disputed for a long time, and the disputes have not yet been settled. In this paper we review some issues concerning the two inferences and compare their advantages and disadvantages from several viewpoints.
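The contrast drawn in this abstract can be made concrete with a small sketch: design-based inference justifies an estimator purely through the randomization, as in the classical Horvitz-Thompson estimator. The numbers below are hypothetical and the example is illustrative, not taken from the paper:

```python
# Design-based inference sketch: the Horvitz-Thompson estimator weights each
# sampled unit by the inverse of its inclusion probability, so the estimator
# of the population total is unbiased under the randomization distribution
# alone, with no model assumed for the y-values.

def horvitz_thompson_total(y_sample, inclusion_probs):
    """Estimate the population total from a sample drawn with known
    (possibly unequal) inclusion probabilities."""
    return sum(y / p for y, p in zip(y_sample, inclusion_probs))

# Hypothetical toy survey: two sampled units, each included with probability 0.5.
estimate = horvitz_thompson_total([10.0, 30.0], [0.5, 0.5])
print(estimate)  # 80.0
```

A model-based analysis of the same data would instead posit a model for the y-values and condition on the realized sample, which is exactly the divergence the paper reviews.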

A Study on Analysis of Likelihood Principle and its Educational Implications (우도원리에 대한 분석과 그에 따른 교육적 시사점에 대한 연구)

  • Park, Sun Yong;Yoon, Hyoung Seok
    • The Mathematical Education, v.55 no.2, pp.193-208, 2016
  • This study analyzes the likelihood principle and elicits its educational implications. The analysis shows that frequentists and Bayesians interpret the principle differently, assigning it different roles. The frequentist regards it as 'the principle forming a basis for statistical inference using the likelihood ratio', treating the likelihood as a direct tool for inference, while the Bayesian regards it as 'the principle providing a basis for statistical inference using the posterior probability', viewing the likelihood as a means of updating. Despite this distinction between the two methods of statistical inference, the two schools find a point of compromise in the use of a frequency-based prior probability. Accordingly, this study suggests that statistics education can help students build a critical eye by having them compare inferences based on the likelihood and on the posterior probability while learning the updating process from a frequency-based prior to a posterior probability.
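The two roles of the likelihood contrasted in this abstract can be sketched with a toy binomial example (all numbers hypothetical; this is an illustration, not the paper's analysis):

```python
from math import comb

# Frequentist role: the likelihood is used directly, e.g. via a likelihood
# ratio between two hypotheses. Bayesian role: the same likelihood only
# updates a prior into a posterior.

def binomial_likelihood(theta, k, n):
    """Likelihood of success probability theta given k successes in n trials."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

k, n = 7, 10  # hypothetical data: 7 successes in 10 trials

# Frequentist: compare theta = 0.7 against theta = 0.5 by their likelihood ratio.
lr = binomial_likelihood(0.7, k, n) / binomial_likelihood(0.5, k, n)

# Bayesian: a Beta(a, b) prior updated by the same likelihood gives a
# Beta(a + k, b + n - k) posterior; its mean is the updated point estimate.
a, b = 1, 1  # uniform prior
posterior_mean = (a + k) / (a + b + n)

print(lr > 1, posterior_mean)
```

The same likelihood function appears in both computations, which is exactly the shared ground on which the abstract says the two schools can compare their inferences.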

A Geometric Test Statistic for Internal Independence (내부적 독립성에 대한 기하적 검정통계량)

  • 김기영;전명식;이광진
    • Communications for Statistical Applications and Methods, v.2 no.1, pp.166-175, 1995
  • For the hypothesis of internal independence, we propose a heuristic test statistic derived from a geometric viewpoint as a data-analytic alternative to the test statistics obtained under the classical likelihood ratio principle and the union-intersection principle. We also examine the geometric meaning of the existing test statistics. Furthermore, we derive the properties and asymptotic distribution of the proposed statistic and compare its power with that of the existing statistics through simulation.


Understanding Bayesian Experimental Design with Its Applications (베이지안 실험계획법의 이해와 응용)

  • Lee, Gunhee
    • The Korean Journal of Applied Statistics, v.27 no.6, pp.1029-1038, 2014
  • Bayesian experimental design is a useful concept in applied statistics for designing efficient experiments, especially when prior knowledge about the experiment is available. However, neither the theoretical nor the numerical approach is simple to implement. We review the Bayesian design approach for linear and nonlinear statistical models, and investigate the relationship between prior knowledge and the optimal design to identify characteristics of the Bayesian experimental design process. A balanced design is important when no prior knowledge is available; when it is, the prior matters to the design and expert opinion should be reflected for an efficient analysis. Care should be taken with a small sample size and a vague improper prior, since both Bayesian and non-Bayesian designs can then give incorrect solutions.
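One standard way to picture how prior knowledge enters the design, for a linear model, is the Bayesian D-optimality criterion det(X'X + R), where R is the prior precision matrix; a vague prior (R near zero) recovers the classical criterion det(X'X), which favours balanced designs. A minimal sketch under that assumption (all numbers hypothetical, unit error variance):

```python
# Bayesian D-optimality sketch for the simple linear model y = b0 + b1*x:
# score a candidate design by det(X'X + R), with R the prior precision.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def xtx(xs):
    """X'X for a design with an intercept column of ones and regressor xs."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return [[n, sx], [sx, sxx]]

def bayes_d_criterion(xs, prior_precision):
    m = xtx(xs)
    return det2([[m[i][j] + prior_precision[i][j] for j in range(2)]
                 for i in range(2)])

balanced = [-1.0, -1.0, 1.0, 1.0]   # two runs at each end of the range
lopsided = [1.0, 1.0, 1.0, 1.0]     # all runs at one point
vague = [[0.0, 0.0], [0.0, 0.0]]    # no prior information

# With a vague prior the balanced design scores strictly higher, matching
# the abstract's point that balance matters when prior knowledge is absent.
print(bayes_d_criterion(balanced, vague) > bayes_d_criterion(lopsided, vague))  # True
```

With a more informative R, the gap between the two designs narrows, which is the prior-versus-design trade-off the paper investigates.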

Vocabulary Recognition Model Using a Convergence of the Likelihood Principle Bayesian Method and Bhattacharyya Distance Measurement Based on a Vector Model (벡터모델 기반 바타챠랴 거리 측정 기법과 우도 원리 베이시안을 융합한 어휘 인식 모델)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence, v.13 no.11, pp.165-170, 2015
  • A vocabulary recognition system built from standard vocabulary shows a decline in recognition accuracy for words outside the standard set or similar to it. Existing systems recognize vocabulary using vector values from models configured in a database, so a model formed during the search for a word cannot be recognized when it is not in the database. In this paper, vector models formed during the search are made recognizable by converging a likelihood-principle Bayesian model with a Bhattacharyya distance measurement based on the vector model, and a Wiener filter is applied to improve the recognition rate. The convergence of the two methods improved the reliability of the distance-measurement experiments, and the proposed measurement achieved a performance of 98.2% compared with the conventional method.
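The distance measure named in the title has a standard closed form for univariate Gaussian models; the sketch below shows that form (not the paper's implementation, and the feature distributions are hypothetical):

```python
from math import log, sqrt

# Bhattacharyya distance between two univariate Gaussians N(mu1, var1) and
# N(mu2, var2): a mean-separation term plus a variance-mismatch term. It is
# commonly used to score how separable two feature distributions are.

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * log((var1 + var2) / (2.0 * sqrt(var1 * var2)))
    return term_mean + term_var

# Identical distributions are at distance 0; the distance grows as the
# means separate, which is what makes it useful as a recognition score.
print(bhattacharyya_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0
```

In a recognition pipeline such as the one described, a smaller distance between a candidate vector model and a stored model indicates a better match.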

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.107-122, 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since the 1987 Black Monday crash, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days served as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results remain meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% and SVR-based E-GARCH shows +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based GJR-GARCH shows +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best performance of SVR-based IVTS is +526.4%, versus +150.2% for MLE-based IVTS, and SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We also do not consider costs incurred in trading, including brokerage commissions and slippage.
IVTS trading performance is unrealistic insofar as we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
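The GARCH(1,1) variance recursion underlying both estimation approaches in this abstract can be sketched as follows; the parameter values are hypothetical, and the paper's SVR-based estimation is not reproduced here:

```python
# GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
# MLE would choose (omega, alpha, beta) to maximize the normal log-likelihood
# of the returns; the paper instead fits these parameters with SVR.

def garch11_variances(returns, omega, alpha, beta):
    """Filter a return series into its conditional variance path."""
    # Start the recursion at the unconditional variance omega / (1 - alpha - beta),
    # which exists when alpha + beta < 1 (covariance stationarity).
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Hypothetical daily returns and typical-magnitude parameters.
returns = [0.01, -0.02, 0.03, -0.01]
path = garch11_variances(returns, omega=1e-5, alpha=0.1, beta=0.85)
print(len(path))  # 4
```

The last element of the path is the one-step-ahead variance forecast, which is the quantity the IVTS trading rules above compare against today's value to decide whether to buy or sell volatility.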