• Title/Summary/Keyword: stationary random sequence

Search results: 37

Monte-Carlo simulation of earthquake sequence in the time and magnitude space (시간 및 규모 영역에서 지진 발생의 몬테-카를로 가상 수치 계산)

  • 박창업;신진수
    • The Journal of Engineering Geology / v.2 no.2 / pp.147-154 / 1992
  • A computer simulation of earthquake sequences in the time and magnitude space was performed using random number generation. The simulation is based on two statistical models of earthquake events: a Stationary Poisson Process for independent earthquakes and a Branching Markov Process for aftershocks. The generated earthquake sequences resemble actual earthquake catalogs. (A minimal simulation sketch in this spirit is given below.)

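
A minimal Python sketch in the spirit of the simulation described above: independent events are generated from a stationary Poisson process in time, and each event may trigger aftershocks through a simple subcritical branching step. The exponential (Gutenberg-Richter-type) magnitude law and all parameter values are illustrative assumptions, not the paper's model details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper).
T = 365.0          # observation window [days]
rate = 0.05        # rate of the stationary Poisson process [events/day]
b_gr = 1.0         # Gutenberg-Richter-type b-value for magnitudes (assumed)
m_min = 4.0        # minimum magnitude of simulated events
branch_mu = 0.5    # mean number of direct aftershocks per event (subcritical)
tau = 5.0          # mean parent-to-aftershock delay [days]

def poisson_times(rate, T):
    """Event times of a homogeneous (stationary) Poisson process on [0, T]."""
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, n))

def magnitudes(n):
    """Exponential (Gutenberg-Richter-like) magnitudes above m_min."""
    return m_min + rng.exponential(1.0 / (b_gr * np.log(10.0)), n)

# Independent earthquakes: stationary Poisson process in time.
times = list(poisson_times(rate, T))
mags = list(magnitudes(len(times)))

# Aftershocks: a simple branching step; each event triggers a Poisson number
# of offspring with exponentially distributed delays.
queue = list(zip(times, mags))
while queue:
    t0, m0 = queue.pop()
    for _ in range(rng.poisson(branch_mu)):
        t1 = t0 + rng.exponential(tau)
        if t1 > T:
            continue
        m1 = magnitudes(1)[0]
        times.append(t1); mags.append(m1)
        queue.append((t1, m1))

order = np.argsort(times)
catalog = np.column_stack([np.asarray(times)[order], np.asarray(mags)[order]])
print(catalog[:10])  # first events of the synthetic catalog (time, magnitude)
```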

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

  • Fu, Ke-Ang;Hu, Li-Hua
    • Journal of the Korean Mathematical Society / v.47 no.2 / pp.263-275 / 2010
  • Let $\{X_n; n \geq 1\}$ be a strictly stationary sequence of negatively associated random variables with mean zero and finite variance. Set $S_n = \sum_{k=1}^{n} X_k$ and $M_n = \max_{k \leq n}|S_k|$, $n \geq 1$. Suppose $\sigma^2 = EX_1^2 + 2\sum_{k=2}^{\infty} EX_1X_k$ with $0 < \sigma < \infty$. We prove that for any $b > -1/2$, if $E|X_1|^{2+\delta} < \infty$ for some $0 < \delta \leq 1$, then $$\lim_{\varepsilon \searrow 0}\varepsilon^{2b+1}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b-1/2}}{n^{3/2}\log n}E\{M_n-\sigma\varepsilon\sqrt{2n\log\log n}\}_+=\frac{2^{-1/2-b}\sigma E|N|^{2(b+1)}}{(b+1)(2b+1)}\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)^{2(b+1)}},$$ and for any $b > -1/2$, $$\lim_{\varepsilon \nearrow \infty}\varepsilon^{-2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n^{3/2}\log n}E\Big\{\sigma\varepsilon\sqrt{\frac{\pi^2 n}{8\log\log n}}-M_n\Big\}_+=\frac{\Gamma(b+1/2)}{\sqrt{2}(b+1)}\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)^{2b+2}},$$ where $\Gamma(\cdot)$ is the Gamma function and $N$ stands for a standard normal random variable.
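
A small Python sketch that numerically evaluates the two limiting constants on the right-hand sides above for given $b$ and $\sigma$; it uses the standard closed form $E|N|^{p}=2^{p/2}\Gamma((p+1)/2)/\sqrt{\pi}$ and a truncated alternating series for $\sum_{k\geq 0}(-1)^k/(2k+1)^s$ (both are textbook facts, not taken from the paper).

```python
import numpy as np
from scipy.special import gamma

def dirichlet_beta(s, terms=200_000):
    """Partial sum of sum_{k>=0} (-1)^k / (2k+1)^s (alternating series, so the
    truncation error is below the first omitted term)."""
    k = np.arange(terms)
    return np.sum((-1.0) ** k / (2.0 * k + 1.0) ** s)

def abs_normal_moment(p):
    """E|N|^p for a standard normal N: 2^{p/2} Gamma((p+1)/2) / sqrt(pi)."""
    return 2.0 ** (p / 2.0) * gamma((p + 1.0) / 2.0) / np.sqrt(np.pi)

def rhs_small_eps(b, sigma):
    """Limiting constant of the first result (epsilon -> 0)."""
    return (2.0 ** (-0.5 - b) * sigma * abs_normal_moment(2 * (b + 1))
            / ((b + 1) * (2 * b + 1)) * dirichlet_beta(2 * (b + 1)))

def rhs_large_eps(b):
    """Limiting constant of the second result (epsilon -> infinity)."""
    return gamma(b + 0.5) / (np.sqrt(2.0) * (b + 1)) * dirichlet_beta(2 * b + 2)

print(rhs_small_eps(b=0.0, sigma=1.0), rhs_large_eps(b=0.0))
```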

PRECISE ASYMPTOTICS FOR THE MOMENT CONVERGENCE OF MOVING-AVERAGE PROCESS UNDER DEPENDENCE

  • Zang, Qing-Pei;Fu, Ke-Ang
    • Bulletin of the Korean Mathematical Society / v.47 no.3 / pp.585-592 / 2010
  • Let $\{\varepsilon_i : -\infty < i < \infty\}$ be a strictly stationary sequence of linearly positive quadrant dependent random variables and suppose $\sum_{i=-\infty}^{\infty}|a_i| < \infty$. In this paper, we prove the precise asymptotics in the law of the iterated logarithm for the moment convergence of the moving-average process of the form $X_k=\sum_{i=-\infty}^{\infty}a_{i+k}\varepsilon_i$, $k\geq 1$.
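
A minimal numpy sketch of such a moving-average process. The coefficient sequence, the AR(1)-type positively correlated Gaussian innovations (a stand-in for a positively dependent stationary sequence), and all parameter values are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Absolutely summable coefficients (illustrative): a_i = 0.5^|i| for |i| <= L.
L = 20
idx = np.arange(-L, L + 1)
a = 0.5 ** np.abs(idx)

# Stationary innovations: an AR(1)-type Gaussian sequence with positive
# autocorrelation, used here only as a positively dependent example.
phi, n_eps = 0.3, 2000
eps = np.empty(n_eps)
eps[0] = rng.standard_normal()
for t in range(1, n_eps):
    eps[t] = phi * eps[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

# Up to an index shift, X[k] below is the moving-average form
# X_k = sum_i a_{i+k} eps_i with the coefficient window truncated to |i| <= L.
n = n_eps - 2 * L
X = np.array([np.dot(a, eps[k:k + 2 * L + 1]) for k in range(n)])
print(X[:5])
```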

BERRY-ESSEEN BOUNDS OF RECURSIVE KERNEL ESTIMATOR OF DENSITY UNDER STRONG MIXING ASSUMPTIONS

  • Liu, Yu-Xiao;Niu, Si-Li
    • Bulletin of the Korean Mathematical Society / v.54 no.1 / pp.343-358 / 2017
  • Let $\{X_i\}$ be a sequence of stationary $\alpha$-mixing random variables with probability density function $f(x)$. The recursive kernel estimators of $f(x)$ are defined by $$\hat{f}_n(x)=\frac{1}{n\sqrt{b_n}}\sum_{j=1}^{n}b_j^{-1/2}K\left(\frac{x-X_j}{b_j}\right)\quad\text{and}\quad\tilde{f}_n(x)=\frac{1}{n}\sum_{j=1}^{n}\frac{1}{b_j}K\left(\frac{x-X_j}{b_j}\right),$$ where $b_n>0$ is a bandwidth sequence with $b_n\rightarrow 0$ and $K$ is a kernel function. Under appropriate conditions, we establish Berry-Esseen bounds for these estimators of $f(x)$, which give the convergence rates of their asymptotic normality.
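
A short Python sketch that evaluates the two recursive estimators defined above at a point. The Gaussian kernel, the bandwidth choice $b_j = j^{-1/5}$, and the AR(1) sample used for illustration are assumptions, not the paper's conditions.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def recursive_kde(x, X, bandwidths, kernel=gaussian_kernel):
    """The two recursive estimators from the abstract, evaluated at point x:
    f_hat(x)   = (1 / (n * sqrt(b_n))) * sum_j b_j^{-1/2} K((x - X_j) / b_j)
    f_tilde(x) = (1 / n) * sum_j (1 / b_j) K((x - X_j) / b_j)
    """
    X = np.asarray(X)
    b = np.asarray(bandwidths)
    n = len(X)
    K_vals = kernel((x - X) / b)
    f_hat = np.sum(K_vals / np.sqrt(b)) / (n * np.sqrt(b[-1]))
    f_tilde = np.mean(K_vals / b)
    return f_hat, f_tilde

# Illustrative use on an AR(1) (alpha-mixing) Gaussian sample with b_j = j^{-1/5}.
rng = np.random.default_rng(2)
n, phi = 1000, 0.5
X = np.empty(n)
X[0] = rng.standard_normal()
for t in range(1, n):
    X[t] = phi * X[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
b = np.arange(1, n + 1) ** (-1.0 / 5.0)
print(recursive_kde(0.0, X, b))
```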

On the Estimation of the Empirical Distribution Function for Negatively Associated Processes

  • Kim, Tae-Sung;Lee, Seung-Woo;Ko, Mi-Hwa
    • Communications for Statistical Applications and Methods / v.8 no.1 / pp.229-235 / 2001
  • Let $\{X_n, n\geq 1\}$ be a stationary sequence of negatively associated random variables with distribution function $F(x)=P(X_1\leq x)$. The empirical distribution function $F_n(x)$ based on $X_1, X_2, \ldots, X_n$ is proposed as an estimator for $F(x)$. Strong consistency and asymptotic normality of $F_n(x)$ are studied. We also apply these ideas to estimation of the survival function. (A brief sketch of the estimator follows below.)

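
A brief Python sketch of the empirical distribution function and the corresponding survival-function estimator. The i.i.d. standard normal sample is only a stand-in to exercise the estimator; it does not reproduce the negatively associated dependence structure the paper studies.

```python
import numpy as np

def empirical_distribution(sample):
    """Return F_n as a function: F_n(x) = (1/n) * #{i : X_i <= x}."""
    data = np.sort(np.asarray(sample))
    n = len(data)
    def F_n(x):
        return np.searchsorted(data, x, side="right") / n
    return F_n

def survival_estimator(sample):
    """Estimated survival function S_n(x) = 1 - F_n(x)."""
    F_n = empirical_distribution(sample)
    return lambda x: 1.0 - F_n(x)

# Illustrative use on an i.i.d. stand-in sample.
rng = np.random.default_rng(3)
X = rng.standard_normal(500)
print(empirical_distribution(X)(0.0), survival_estimator(X)(0.0))
```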

Joint Tx-Rx Optimization in Additive Cyclostationary Noise with Zero Forcing Criterion (가산성 주기정상성 잡음이 있을 때 Zero Forcing 기반에서의 송수신단 동시 최적화)

  • Yun, Yeo-Hun;Cho, Joon-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.7A / pp.724-729 / 2007
  • In this paper, we consider a joint optimization of the transmitter and the receiver in additive cyclostationary noise under a zero-forcing criterion. We assume that the period of the cyclostationary noise equals the inverse of the symbol transmission rate and that the noise has a positive-definite autocorrelation function. The data sequence is modeled as a wide-sense stationary colored random process, and the channel is modeled as a linear time-invariant system with a frequency-selective response. Under these assumptions and a constraint on the average power of the transmitted signal, we derive the optimum transmitter and receiver waveforms that jointly minimize the mean square error with no intersymbol interference. Simulation results show that the proposed system achieves better BER performance than systems with receiver-only optimization and systems with neither transmitter nor receiver optimization.
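
A heavily simplified discrete-frequency numpy sketch of the kind of trade-off involved: it treats the noise as stationary within each frequency bin (ignoring the spectral correlation introduced by cyclostationarity, which is central to the paper) and solves a per-bin zero-forcing power allocation in closed form via a Lagrange condition. The channel, spectra, and power budget are illustrative assumptions, not the paper's continuous-time derivation.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64  # number of frequency bins (toy discrete-frequency model)

# Toy spectra: frequency-selective channel, colored WSS data, colored
# (stationary-approximation) noise. All values are illustrative assumptions.
H   = rng.normal(size=N) + 1j * rng.normal(size=N)            # channel response
S_x = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(N) / N)        # data PSD
S_n = 0.1 + 0.05 * np.sin(2 * np.pi * np.arange(N) / N) ** 2  # noise PSD
P_total = 10.0                                                # Tx power budget

# Per-bin zero-forcing constraint: P_k * H_k * R_k = 1, so R_k = 1 / (P_k H_k).
# Sampled noise power: sum_k S_n,k / (|P_k|^2 |H_k|^2).
# Minimizing it subject to sum_k S_x,k |P_k|^2 = P_total (Lagrange condition)
# gives |P_k|^2 proportional to sqrt(S_n,k / S_x,k) / |H_k|.
a = np.sqrt(S_n / S_x) / np.abs(H)
a *= P_total / np.sum(S_x * a)            # scale to meet the power budget

P_tx = np.sqrt(a)                          # transmit filter magnitude per bin
R_rx = 1.0 / (P_tx * H)                    # zero-forcing receive filter per bin
noise_out = np.sum(S_n * np.abs(R_rx) ** 2)
print("output noise power:", noise_out)
```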

Implementation of the Agent using Universal On-line Q-learning by Balancing Exploration and Exploitation in Reinforcement Learning (강화 학습에서의 탐색과 이용의 균형을 통한 범용적 온라인 Q-학습이 적용된 에이전트의 구현)

  • 박찬건;양성봉
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.672-680 / 2003
  • A shopbot is a software agent whose goal is to maximize the buyer's satisfaction by automatically gathering the price and quality information of goods, as well as services, from on-line sellers. In response to shopbots' activities, sellers on the Internet need agents called pricebots that can help them maximize their own profits. In this paper we adopt Q-learning, one of the model-free reinforcement learning methods, as a price-setting algorithm for pricebots. A Q-learned agent increases profitability and eliminates cyclic price wars when compared with agents using the myoptimal (myopically optimal) pricing strategy. Q-learning needs to select a sequence of state-action pairs for its convergence. When the uniform random method is used to select state-action pairs, the number of accesses to the Q-table needed to obtain the optimal Q-values is quite large, so it is not appropriate for universal on-line learning in a real-world environment. This occurs because uniform random selection reflects the uncertainty of exploitation with respect to the optimal policy. In this paper, we propose a Mixed Nonstationary Policy (MNP), which consists of both an auxiliary Markov process and the original Markov process. MNP tries to keep a balance between exploration and exploitation in reinforcement learning. Our experimental results show that the Q-learning agent using MNP converges to the optimal Q-values about 2.6 times faster on average than with uniform random selection.
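
Since the abstract does not spell out the MNP construction beyond combining an auxiliary Markov process with the original one, the sketch below shows only the standard tabular Q-learning update it builds on, with a plain epsilon-greedy rule as a placeholder exploration scheme; the environment interface env_step and all hyperparameter values are hypothetical.

```python
import numpy as np

def q_learning(env_step, n_states, n_actions, episodes=500, steps=100,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Generic tabular Q-learning with an epsilon-greedy behaviour policy.

    This is the standard baseline update, not the paper's Mixed Nonstationary
    Policy (MNP); env_step(state, action) -> (next_state, reward) is a
    user-supplied, hypothetical environment interface.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = rng.integers(n_states)
        for _ in range(steps):
            # Exploration vs. exploitation: random action with prob. epsilon.
            a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next, r = env_step(s, a)
            # One-step Q-learning update toward the greedy bootstrap target.
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q
```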