• Title/Summary/Keyword: 점근분포 (asymptotic distribution)


Analysis of Interfacial Surface Crack Perpendicular to the Surface (표면에 수직한 계면방향 표면균열의 해석)

  • 최성렬
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.17 no.2
    • /
    • pp.277-284
    • /
    • 1993
  • An interfacial surface crack perpendicular to the surface, embedded in bonded quarter planes under a single anti-plane shear load, is analyzed. The problem is formulated using the Mellin transform, from which a single Wiener-Hopf equation is derived. Solving this equation yields the stress intensity factor in closed form. The solution can be used as a Green's function to generate solutions of other problems with the same geometry but different loading conditions.

A Test for Nonlinear Causality and Its Application to Money, Production and Prices (통화(通貨)·생산(生産)·물가(物價)의 비선형인과관계(非線型因果關係) 검정(檢定))

  • Baek, Ehung-gi
    • KDI Journal of Economic Policy
    • /
    • v.13 no.4
    • /
    • pp.117-140
    • /
    • 1991
  • The purpose of this paper is primarily to introduce a nonparametric statistical tool developed by Baek and Brock to detect a unidirectional causal ordering between two economic variables, and to apply it to interesting macroeconomic relationships among money, production and prices. The tool can be applied to any other causal structure, for instance, defense spending and economic performance, or a stock market index and market interest rates. A key building block of the test for nonlinear Granger causality used in this paper is the correlation integral. The main emphasis is put on nonlinear causal structure rather than a linear one, because the conventional F-test already provides high power against linear causal relationships. Based on the asymptotic normality of our test statistic, the nonlinear causality test is finally derived. The size of the test is reported for some parameters. When the test is applied to a money, production and prices model, some evidence of nonlinear causality is found under the corrected size of the test. For instance, nonlinear causal relationships between production and prices are demonstrated in both directions, although these relationships are missed by the conventional F-test. Similar results between money and prices are obtained at higher lags.

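A minimal sketch (not the authors' code) of the correlation integral that the abstract names as the key building block of the Baek-Brock test: the fraction of pairs of m-histories of a series that lie within sup-norm distance eps of each other. The series, embedding dimension m, and radius eps below are illustrative choices.

```python
from itertools import combinations

def embed(series, m):
    """Return the m-history vectors of a scalar time series."""
    return [tuple(series[i:i + m]) for i in range(len(series) - m + 1)]

def correlation_integral(series, m, eps):
    """Fraction of pairs of m-histories within sup-norm distance eps."""
    vecs = embed(series, m)
    pairs = list(combinations(vecs, 2))
    close = sum(1 for u, v in pairs
                if max(abs(a - b) for a, b in zip(u, v)) < eps)
    return close / len(pairs)

x = [0.0, 0.1, 0.05, 0.2, 0.15, 0.1, 0.0, 0.05]
print(correlation_integral(x, m=2, eps=0.1))
```

The Baek-Brock statistic compares ratios of such correlation integrals computed with and without lagged histories of the candidate causing variable; its asymptotic normality gives the test mentioned in the abstract.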

Derivation of Asymptotic Formulas for the Signal-to-Noise Ratio of Mismatched Optimal Laplacian Quantizers (불일치된 최적 라플라스 양자기의 신호대잡음비 점근식의 유도)

  • Na, Sang-Sin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.5C
    • /
    • pp.413-421
    • /
    • 2008
  • The paper derives asymptotic formulas for the MSE distortion and the signal-to-noise ratio of a mismatched fixed-rate minimum-MSE Laplacian quantizer. These closed-form formulas are expressed in terms of the number $N$ of quantization points, the mean displacement $\mu$, and the ratio $\rho$ of the standard deviation of the source to that for which the quantizer is optimally designed. Numerical results show that the principal formula is accurate in that, for rate $R = \log_2 N \geq 6$, it predicts signal-to-noise ratios within 1% of the true values for a wide range of $\mu$ and $\rho$. The new findings herein include the fact that, for heavy variance mismatch of $\rho > 3/2$, the signal-to-noise ratio increases at the rate of $9/\rho$ dB/bit, which is slower than the usual 6 dB/bit, and the fact that an optimal uniform quantizer, though optimally designed, is slightly more than critically mismatched to the source. It is also found that the signal-to-noise ratio loss due to $\mu$ is moderate. The derived formulas can be useful in the quantization of speech or music signals, which are modeled well as Laplacian sources and have changing short-term variances.
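A rough Monte Carlo sketch (not the paper's derivation) of the "usual 6 dB/bit" baseline that the paper's mismatch formulas refine: the SNR of a uniform quantizer applied to a Laplacian source grows by roughly 6 dB per added bit. The mid-rise uniform quantizer, its loading factor of 6 standard deviations, and the sample count are illustrative assumptions.

```python
import math, random

def laplacian_sample(sigma, rng):
    """One zero-mean Laplacian variate with standard deviation sigma."""
    b = sigma / math.sqrt(2.0)                   # Laplace scale parameter
    u = rng.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def uniform_quantizer_snr(rate, sigma=1.0, loading=6.0, n=100_000, seed=0):
    """Monte Carlo SNR (dB) of a 2^rate level mid-rise uniform quantizer
    with granular support [-loading*sigma, loading*sigma]."""
    rng = random.Random(seed)
    levels = 2 ** rate
    v = loading * sigma
    step = 2.0 * v / levels
    mse = 0.0
    for _ in range(n):
        x = laplacian_sample(sigma, rng)
        idx = int((x + v) // step)               # cell index of the sample
        idx = max(0, min(levels - 1, idx))       # clamp overloaded samples
        q = -v + (idx + 0.5) * step              # reconstruct at cell midpoint
        mse += (x - q) ** 2
    mse /= n
    return 10.0 * math.log10(sigma ** 2 / mse)

print(uniform_quantizer_snr(5), uniform_quantizer_snr(6))
```

Going from 5 to 6 bits raises the simulated SNR by close to 6 dB; the paper's point is that under heavy variance mismatch the growth slows to about $9/\rho$ dB/bit.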

Extreme Quantile Estimation of Losses in KRW/USD Exchange Rate (원/달러 환율 투자 손실률에 대한 극단분위수 추정)

  • Yun, Seok-Hoon
    • Communications for Statistical Applications and Methods
    • /
    • v.16 no.5
    • /
    • pp.803-812
    • /
    • 2009
  • The application of extreme value theory to financial data is a fairly recent innovation. The classical annual-maximum method fits the generalized extreme value distribution to the annual maxima of a data series. An alternative modern method, the so-called threshold method, fits the generalized Pareto distribution to the excesses over a high threshold from the data series. A more substantial variant is to take the point-process viewpoint of high-level exceedances: the exceedance times and excess values over a high threshold are viewed as a two-dimensional point process whose limiting form is a non-homogeneous Poisson process. In this paper, we apply the two-dimensional non-homogeneous Poisson process model to daily losses, i.e., daily negative log-returns, in the KRW/USD exchange rate series collected from January 4th, 1982 until December 31st, 2008. The main question is how to estimate extreme quantiles of losses such as the 10-year or 50-year return level.
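A small peaks-over-threshold sketch (not the paper's point-process fit) of the threshold method the abstract describes: excesses over a high threshold are fitted to a generalized Pareto distribution, here by the simple method of moments, and an m-observation return level is read off. The synthetic loss series, threshold, and horizon are illustrative assumptions.

```python
import math, random

def gpd_return_level(losses, u, m):
    """Loss level exceeded on average once every m observations,
    from a method-of-moments GPD fit to the excesses over threshold u."""
    exc = [x - u for x in losses if x > u]
    zeta = len(exc) / len(losses)                # exceedance rate of u
    mean = sum(exc) / len(exc)
    var = sum((e - mean) ** 2 for e in exc) / (len(exc) - 1)
    xi = 0.5 * (1.0 - mean * mean / var)         # method-of-moments shape
    sigma = mean * (1.0 - xi)                    # method-of-moments scale
    if abs(xi) < 1e-6:                           # exponential-tail limit
        return u + sigma * math.log(m * zeta)
    return u + (sigma / xi) * ((m * zeta) ** xi - 1.0)

rng = random.Random(1)
losses = [rng.expovariate(1.0) for _ in range(10_000)]  # synthetic daily losses
print(gpd_return_level(losses, u=2.0, m=250 * 10))      # ~"10-year" level
```

For the exponential toy data the fitted shape is near zero and the estimated 10-year level lands near the true $\ln 2500 \approx 7.8$; on real exchange-rate losses the fitted heavy tail pushes it higher.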

A Novel Methodology of Improving Stress Prediction via Saint-Venant's Principle (생브낭의 원리를 이용한 응력해석 개선)

  • Kim, Jun-Sik;Cho, Maeng-Hyo
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.24 no.2
    • /
    • pp.149-156
    • /
    • 2011
  • In this paper, a methodology is proposed to improve the stress prediction of plates via Saint-Venant's principle. According to Saint-Venant's principle, the stress resultants can be used to describe linear elastic problems. Many engineering problems have been analyzed by Euler-Bernoulli (E-B) beam and/or Kirchhoff-Love (K-L) plate models. These models are asymptotically correct, and therefore their accuracy is mathematically guaranteed for thin plates or slender beams. By post-processing their solutions, one can improve the stresses and displacements via Saint-Venant's principle. The improved in-plane and out-of-plane displacements are obtained by adding a perturbed deflection and integrating the transverse shear strains. The perturbed deflection is calculated by applying the equivalence of stress resultants before and after post-processing (i.e., Saint-Venant's principle). The accuracy and efficiency of the proposed methodology are verified by comparing its solutions with elasticity solutions for orthotropic beams.
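A textbook sketch of the stress-recovery idea behind such post-processing (not the authors' plate formulation): an Euler-Bernoulli solution gives the axial bending stress, and the transverse shear stress is recovered afterwards by integrating the 2D equilibrium equation through the thickness. A rectangular cross-section of width b and height h, with illustrative numbers, is assumed.

```python
def bending_stress(M, z, b, h):
    """Axial stress sigma_xx = M*z/I for a rectangular section."""
    I = b * h ** 3 / 12.0
    return M * z / I

def recovered_shear_stress(V, z, b, h):
    """sigma_xz from integrating d(sigma_xx)/dx + d(sigma_xz)/dz = 0
    with dM/dx = V and traction-free faces at z = +/- h/2."""
    I = b * h ** 3 / 12.0
    return V / (2.0 * I) * (h ** 2 / 4.0 - z ** 2)

b, h, V = 0.02, 0.1, 1000.0                      # illustrative section, shear force
print(recovered_shear_stress(V, 0.0, b, h))      # parabolic maximum at mid-plane
print(recovered_shear_stress(V, h / 2, b, h))    # zero on the free surface
```

The beam model itself predicts zero transverse shear stress; the equilibrium-based post-processing recovers the familiar parabolic profile with peak 1.5V/(bh), which is the one-dimensional analogue of the improvement described in the abstract.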

Calculation of Diffraction Patterns for Incidence of Planewave on Both Sides of a Dielectric Wedge by Using Multipole Expansion (쇄기형 유전체의 양면에 평면파 입사시 다극전개를 이용한 회절패턴 계산)

  • Kim, Se-Yun;Ra, Jung-Woong;Shin Sang-Yung
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.4
    • /
    • pp.16-26
    • /
    • 1989
  • Diffraction patterns of electromagnetic fields for the incidence of an E-polarized plane wave on both interfaces of an arbitrary-angle dielectric wedge are obtained as the sum of the geometric-optics term and the edge-diffracted fields. The diffraction coefficients of the edge-diffracted fields are evaluated by employing the physical-optics approximation and then correcting its error with a multipole line source at the dielectric edge. For a wedge angle of $120^{\circ}$, an incident angle of $60^{\circ}$, relative dielectric constants of 2, 5, and 10, and observation distances of 5 and 10 wavelengths from the tip of the wedge, the diffraction coefficients and the diffraction patterns corresponding to geometric optics, physical optics, and the solution corrected by the multipole line source are plotted, respectively. While the corrected solutions presented in this paper are valid only in the far-field region, these asymptotic solutions are shown to satisfy the boundary condition on the dielectric interfaces.


Outage Performance Analysis of Partial Relay Selection Based Opportunistic Cooperation in Decode-and-Forward Relaying Systems (디코딩 후 전달 중계 시스템에서 부분 중계 노드 선택 기법 기반 기회적 협력 방식의 아웃티지 성능 분석)

  • Lee, Sangjun;Lee, In-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.8
    • /
    • pp.1804-1810
    • /
    • 2013
  • In this paper, we study an opportunistic cooperation scheme that improves the outage performance through efficient selection between a cooperative mode and a non-cooperative mode. In particular, for decode-and-forward relaying systems, we analyze the outage performance of opportunistic cooperation with partial relay selection, deriving closed-form expressions for the exact and asymptotic outage probabilities under the assumption of independent and identically distributed Rayleigh fading channels. In the numerical results, we verify the derived expressions and investigate the outage performance for various target data rates and numbers of relays. We also compare the outage performance of the conventional cooperation scheme and the opportunistic cooperation scheme.
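A minimal sketch (not the paper's decode-and-forward analysis) of the kind of exact-versus-asymptotic outage comparison the abstract describes, reduced to a single Rayleigh-faded link: the outage probability of target rate R is exactly $1 - e^{-g/\bar\gamma}$ with $g = 2^R - 1$, and behaves like $g/\bar\gamma$ at high average SNR. The rates and SNR points are illustrative.

```python
import math

def outage_exact(rate_bps_hz, avg_snr):
    """Exact Rayleigh-fading outage probability for the target rate."""
    g = 2.0 ** rate_bps_hz - 1.0          # SNR threshold for the target rate
    return 1.0 - math.exp(-g / avg_snr)

def outage_asymptotic(rate_bps_hz, avg_snr):
    """First-order high-SNR expansion of the exact expression."""
    g = 2.0 ** rate_bps_hz - 1.0
    return g / avg_snr

for snr_db in (10, 20, 30):
    snr = 10.0 ** (snr_db / 10.0)
    print(snr_db, outage_exact(1.0, snr), outage_asymptotic(1.0, snr))
```

The asymptotic curve upper-bounds the exact one and the two converge as SNR grows; with relay selection among K relays the same comparison is made with the diversity-order-K expressions derived in the paper.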

Permutation test for a post selection inference of the FLSA (순열검정을 이용한 FLSA의 사후추론)

  • Choi, Jieun;Son, Won
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.6
    • /
    • pp.863-874
    • /
    • 2021
  • In this paper, we propose a post-selection inference procedure for the fused lasso signal approximator (FLSA). The FLSA finds an underlying sparse, piecewise-constant mean structure by applying the total variation (TV) semi-norm as a penalty term. However, it is widely known that this convex relaxation can cause asymptotic inconsistency in change-point detection. As a result, false change points can remain even when the best subset of change points is sought via a tuning procedure. To remove these false change points, we propose a post-selection inference for the FLSA. The proposed procedure applies a permutation test based on the CUSUM statistic; it extends the permutation test of Antoch and Hušková (2001), which deals with single-change-point problems, to multiple-change-point detection in combination with the FLSA. Numerical results show that the proposed procedure outperforms naïve z-tests and tests based on the limiting distribution of the CUSUM statistic.
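A single-change-point sketch in the spirit of the Antoch and Hušková permutation test (not the paper's FLSA post-selection procedure): the CUSUM statistic of the observed series is compared with its distribution under random permutations, which destroy any change point. The Gaussian toy data, segment sizes, and permutation count are illustrative.

```python
import random

def cusum_stat(x):
    """max_k |S_k - (k/n) * S_n|; large when the mean shifts."""
    n, total = len(x), sum(x)
    s, best = 0.0, 0.0
    for k, v in enumerate(x, start=1):
        s += v
        best = max(best, abs(s - k * total / n))
    return best

def permutation_pvalue(x, n_perm=199, seed=0):
    """Permutation p-value of the CUSUM statistic under 'no change point'."""
    rng = random.Random(seed)
    obs = cusum_stat(x)
    y = list(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)                   # permuting destroys any change point
        if cusum_stat(y) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

data_rng = random.Random(42)
shifted = [data_rng.gauss(0, 1) for _ in range(50)] + \
          [data_rng.gauss(2, 1) for _ in range(50)]
flat = [data_rng.gauss(0, 1) for _ in range(100)]
print(permutation_pvalue(shifted), permutation_pvalue(flat))
```

The shifted series yields a tiny p-value while the flat series does not; the paper applies this idea segment-by-segment to the candidate change points returned by the FLSA.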

Performance of a Bayesian Design Compared to Some Optimal Designs for Linear Calibration (선형 캘리브레이션에서 베이지안 실험계획과 기존의 최적실험계획과의 효과비교)

  • 김성철
    • The Korean Journal of Applied Statistics
    • /
    • v.10 no.1
    • /
    • pp.69-84
    • /
    • 1997
  • We consider a linear calibration problem, $y_i = \alpha + \beta(x_i - x_0) + \epsilon_i$, $i = 1, 2, \ldots, n$; $y_f = \alpha + \beta(x_f - x_0) + \epsilon_f$, where we observe the $(x_i, y_i)$'s in controlled calibration experiments and later make inference about $x_f$ from a new observation $y_f$. The objective of the calibration design problem is to find the optimal design $x = (x_1, \ldots, x_n)$ that gives the best estimates of $x_f$. We compare Kim (1989)'s Bayesian design, which minimizes the expected value of the posterior variance of $x_f$, with some optimal designs from the literature. Kim suggested the Bayesian optimal design based on an analysis of the characteristics of the expected loss function and on numerical computation: the average of the design points must be equal to the prior mean, and the sum of squares should be as large as possible. The designs to be compared are (1) Buonaccorsi (1986)'s AV optimal design, which minimizes the average asymptotic variance of the classical estimators, (2) the D-optimal and A-optimal designs for the linear regression model, which optimize functions of $M(x) = \sum x_i x_i'$, and (3) Hunter and Lamboy (1981)'s reference design from their paper. To compare these designs, each optimal in some sense, we consider two criteria: first, the expected posterior variance; second, a Monte Carlo simulation from which we obtain HPD intervals and compare their lengths. If the prior mean of $x_f$ is at the center of the finite design interval, then the Bayesian, AV optimal, D-optimal and A-optimal designs are identical, all being the equally weighted end-point design. If the prior mean is not at the center, however, they are not expected to be identical. In this case, we demonstrate that the almost-Bayesian-optimal design is slightly better than the approximate AV optimal design. We also investigate the effects of the prior variance of the parameters, and the solution for the case when the number of experiments is odd.

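A small sketch (not Kim's Bayesian computation) of why the compared classical designs concentrate on the interval end points: for the straight-line model, the D-criterion $\det M(x)$ with $M(x) = \sum x_i x_i'$ and $x_i' = (1, x_i)$ is maximized on $[-1, 1]$ by the equally weighted end-point design. The candidate designs below are illustrative.

```python
def d_criterion(xs):
    """det of the 2x2 information matrix for the model y = a + b*x,
    i.e. det([[n, sum(x)], [sum(x), sum(x^2)]])."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx

endpoints = [-1.0, -1.0, 1.0, 1.0]       # equally weighted end-point design
spread    = [-1.0, -1/3, 1/3, 1.0]       # equally spaced alternative
print(d_criterion(endpoints), d_criterion(spread))
```

Pushing points to the ends simultaneously centers the design and maximizes the sum of squares, which is exactly the pair of conditions the abstract reports for the Bayesian design when the prior mean sits at the center of the interval.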

The Application of Species Richness Estimators and Species Accumulation Curves to Traditional Ethnobotanical Knowledges in South Korea (남한지역 전통민속식물지식 자료를 활용한 종누적곡선 분석 및 종풍부도 추정 연구)

  • Park, Yuchul;Chang, Kae Sun;Kim, Hui
    • Korean Journal of Plant Resources
    • /
    • v.30 no.5
    • /
    • pp.481-488
    • /
    • 2017
  • Under circumstances of rapidly disappearing traditional ethnobotanical knowledge (TEK), TEK surveys are the major step in documenting useful species with a conservation priority. In ethnobotanical research, the relationship among survey intensity, ethnobotanical information, and plant species richness is a central research theme. We built a TEK database for South Korea using metadata published by the Korea National Arboretum, and calculated species richness using estimators such as ACE, Chao1, Chao2, ICE, Jack 1, Jack 2, and Bootstrap. Species accumulation curves showed that sampling effort varied widely among provinces: Gangwon province needs more sampling effort, whereas Chungnam province approached a horizontal asymptote earlier. We found heterogeneous patterns in the rarefaction curves of TEK species between genders for each category of use (medicinal, food and handicrafts). Comparison with regional floral diversity suggests that additional surveys would uncover more species in some provinces.
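A minimal sketch of one of the richness estimators named in the abstract: the bias-corrected Chao1, which extrapolates total species richness from the numbers of species recorded exactly once (F1) and exactly twice (F2). The abundance records below are illustrative, not the paper's TEK database.

```python
from collections import Counter

def chao1(abundances):
    """Bias-corrected Chao1 richness estimate from per-species counts:
    S_obs + F1*(F1 - 1) / (2*(F2 + 1))."""
    s_obs = sum(1 for a in abundances if a > 0)
    f1 = sum(1 for a in abundances if a == 1)    # singleton species
    f2 = sum(1 for a in abundances if a == 2)    # doubleton species
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

records = ["mugwort", "mugwort", "bellflower", "plantain", "plantain",
           "mugwort", "aster", "bracken", "bracken", "dandelion"]
counts = Counter(records).values()
print(chao1(counts))   # 6 observed species, estimate 7.0
```

Many singletons relative to doubletons push the estimate well above the observed count, which is how these estimators flag under-sampled provinces in the study.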