• Title/Summary/Keyword: Stationary process

Search Results: 5

Existence Condition for the Stationary Ergodic New Laplace Autoregressive Model of order p-NLAR(p)

  • Kim, Won-Kyung; Lynne Billard
    • Journal of the Korean Statistical Society / v.26 no.4 / pp.521-530 / 1997
  • The new Laplace autoregressive model of order 2, NLAR(2), studied by Dewald and Lewis (1985), is extended to the p-th order model, NLAR(p). A necessary and sufficient condition for the existence of an innovation sequence and a stationary ergodic NLAR(p) model is obtained. It is shown that the distribution of the innovation sequence is given by a probabilistic mixture of independent Laplace distributions and a degenerate distribution.

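The innovation structure that the abstract describes, a probabilistic mixture of a Laplace component and a degenerate (point-mass) component, can be illustrated with a short simulation. The sketch below is not the paper's NLAR(p) construction; it simulates the classical first-order Laplace autoregression, where taking the innovation to be zero with probability rho**2 and Laplace otherwise yields a Laplace marginal distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_laplace_ar1(n, rho=0.5, burn_in=500):
    """Illustrative first-order autoregression with Laplace marginals:
    X_t = rho * X_{t-1} + eps_t, where eps_t is a mixture of a point
    mass at zero (the degenerate component) and a Laplace variate,
    echoing the innovation mixture the abstract describes. Parameters
    are placeholders, not the paper's."""
    x = np.empty(n + burn_in)
    x[0] = rng.laplace()
    for t in range(1, n + burn_in):
        # innovation: zero w.p. rho**2, Laplace w.p. 1 - rho**2
        eps = 0.0 if rng.random() < rho**2 else rng.laplace()
        x[t] = rho * x[t - 1] + eps
    return x[burn_in:]

path = simulate_laplace_ar1(20_000)
print(len(path), round(float(np.mean(path)), 1))
```

Raising rho increases the weight of the degenerate component, so sample paths show runs of geometric decay interrupted by occasional Laplace shocks.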

Generation of Artificial Earthquake Ground Motions using Nonstationary Random Process: Modification of Power Spectrum Compatible with Design Response Spectrum

  • 김승훈
    • Proceedings of the Earthquake Engineering Society of Korea Conference / 1999.04a / pp.61-68 / 1999
  • In nonlinear dynamic structural analysis, the ground excitation given as input should be well defined. Because of the lack of recorded accelerograms in Korea, it is necessary to generate artificial earthquakes from a stochastic model of ground excitation with various dynamic properties rather than from recorded accelerograms. It is well known that earthquake motions are generally non-stationary, with time-varying intensity and frequency content, and many researchers have proposed non-stationary random process models. Yeh and Wen (1990) proposed a non-stationary modulation function and a power spectral density function to describe such non-stationary characteristics. Saito and Wen (1994) proposed a non-stationary stochastic process model to generate earthquake ground motions compatible with the design response spectrum at sites in Japan. This paper presents a procedure for modifying the power spectrum to be compatible with a target design response spectrum when generating nonstationary artificial earthquake ground motions. The target response spectrum is chosen from ATC-14 to calibrate the response spectrum according to a given recurrence period.

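The modulation-plus-power-spectrum idea in the abstract can be sketched with the standard spectral representation method. The Kanai-Tajimi PSD, the envelope shape, and all parameter values below are illustrative placeholders, not the paper's site-specific or ATC-14-calibrated quantities:

```python
import numpy as np

rng = np.random.default_rng(1)

def kanai_tajimi_psd(omega, wg=15.0, zg=0.6, s0=1.0):
    """Illustrative Kanai-Tajimi power spectral density; the parameter
    values are placeholders, not the paper's."""
    r = (omega / wg) ** 2
    return s0 * (1 + 4 * zg**2 * r) / ((1 - r) ** 2 + 4 * zg**2 * r)

def artificial_motion(duration=20.0, dt=0.01, n_freq=200, w_max=100.0):
    """Spectral-representation sketch: superpose cosines with random
    phases, amplitudes sqrt(2 * S(w) * dw) from the PSD, then apply a
    simple modulation envelope so the intensity becomes nonstationary."""
    t = np.arange(0.0, duration, dt)
    dw = w_max / n_freq
    omegas = (np.arange(n_freq) + 0.5) * dw
    phases = rng.uniform(0.0, 2 * np.pi, n_freq)
    amps = np.sqrt(2 * kanai_tajimi_psd(omegas) * dw)
    stationary = (amps[:, None]
                  * np.cos(omegas[:, None] * t + phases[:, None])).sum(axis=0)
    envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)  # peaks at t = 2 s
    envelope = np.clip(envelope, 0.0, 1.0)
    return t, envelope * stationary

t, accel = artificial_motion()
print(accel.shape)  # (2000,)
```

The envelope makes the intensity time-varying; matching a target design response spectrum would additionally require iteratively rescaling the PSD ordinates, which is the adjustment step this paper addresses.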

On Stationarity of TARMA(p,q) Process

  • Lee, Oesook; Lee, Mihyun
    • Journal of the Korean Statistical Society / v.30 no.1 / pp.115-125 / 2001
  • We consider the threshold autoregressive moving average (TARMA) process and find a sufficient condition for strict stationarity of the process. The stationarity region obtained for the TARMA(p,q) model is the same as that for the TAR(p) model given by Chan and Tong (1985), which shows that the moving average part of the TARMA(p,q) process does not affect the stationarity of the process. We also find a sufficient condition for the existence of the k-th moments (k$\geq$1) of the process with respect to the stationary distribution.

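A two-regime threshold autoregression makes the stationarity discussion concrete. The sketch below simulates a TAR(1) process with no MA part and illustrative coefficients; one simple sufficient condition for strict stationarity in this two-regime case is that both autoregressive coefficients have modulus less than one, and the abstract's result says that adding an MA part leaves the stationarity region unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_tar1(n, a_low=0.4, a_high=-0.7, threshold=0.0, burn_in=500):
    """Two-regime TAR(1) sketch with i.i.d. N(0,1) innovations:
        X_t = a_low  * X_{t-1} + e_t   if X_{t-1} <= threshold
        X_t = a_high * X_{t-1} + e_t   otherwise.
    Coefficients are illustrative; max(|a_low|, |a_high|) < 1 is a
    simple sufficient condition for stationarity here."""
    x = np.empty(n + burn_in)
    x[0] = 0.0
    e = rng.standard_normal(n + burn_in)
    for t in range(1, n + burn_in):
        a = a_low if x[t - 1] <= threshold else a_high
        x[t] = a * x[t - 1] + e[t]
    return x[burn_in:]

path = simulate_tar1(20_000)
print(len(path))  # 20000
```

With coefficients of modulus less than one the sample path keeps a stable, finite spread; pushing either coefficient past one in absolute value makes excursions grow without bound.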

A Single Server Queue Operating under N-Policy with a Renewal Breakdown Process

  • Chang-Ouk Kim; Kyung-Sik Kang
    • Journal of Korean Society of Industrial and Systems Engineering / v.19 no.39 / pp.205-218 / 1996
  • This study presents a stochastic model of a single-server queueing system that allows server breakdowns. The server operates under an N-control policy, arrivals follow a stationary compound Poisson process, service times follow an Erlang distribution, and repair times follow a distribution with a constant mean. The times between breakdowns are assumed to follow a renewal process with an arbitrary distribution of constant mean. It is shown that the completion-time concept can be derived by applying renewal-process methods, and that the probability generating function of the system size can be obtained.

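A simplified simulation conveys how the N-policy operates. The sketch below uses Poisson arrivals and exponential service in place of the paper's compound Poisson arrivals, Erlang service, and renewal breakdown process, so it illustrates only the switch-on/switch-off control:

```python
import random

random.seed(3)

def n_policy_mm1(n_threshold=5, lam=0.5, mu=1.0, horizon=200_000.0):
    """Toy discrete-event sketch of a single-server queue under an
    N-policy: the idle server stays off until n_threshold customers
    have accumulated, then serves until the system empties. Returns
    the time-average number in system."""
    t, q, serving = 0.0, 0, False
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    area = 0.0  # time integral of the system size
    while t < horizon:
        t_next = min(next_arrival, next_departure)
        area += q * (t_next - t)
        t = t_next
        if t == next_arrival:                     # arrival event
            q += 1
            next_arrival = t + random.expovariate(lam)
            if not serving and q >= n_threshold:  # N-policy switch-on
                serving = True
                next_departure = t + random.expovariate(mu)
        else:                                     # departure event
            q -= 1
            if q == 0:                            # system empty: switch off
                serving = False
                next_departure = float("inf")
            else:
                next_departure = t + random.expovariate(mu)
    return area / t

mean_size = n_policy_mm1()
print(round(mean_size, 1))
```

For these parameters the classical M/M/1 N-policy decomposition, L = rho/(1-rho) + (N-1)/2 with rho = 0.5 and N = 5, gives a mean system size of 3, so a long run should land near that value.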

Implementation of the Agent using Universal On-line Q-learning by Balancing Exploration and Exploitation in Reinforcement Learning

  • 박찬건; 양성봉
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.672-680 / 2003
  • A shopbot is a software agent whose goal is to maximize the buyer's satisfaction by automatically gathering price and quality information on goods, as well as services, from on-line sellers. In response to shopbots' activities, sellers on the Internet need agents called pricebots that can help them maximize their own profits. In this paper we adopt Q-learning, one of the model-free reinforcement learning methods, as a price-setting algorithm for pricebots. A Q-learned agent increases profitability and eliminates cyclic price wars when compared with agents using the myoptimal (myopically optimal) pricing strategy. Q-learning needs to select a sequence of state-action pairs for convergence. When the uniform random method of selecting state-action pairs is used, the number of accesses to the Q-tables needed to obtain the optimal Q-values is quite large, so it is not appropriate for universal on-line learning in a real-world environment. This phenomenon occurs because uniform random selection reflects the uncertainty of exploitation for the optimal policy. In this paper, we propose a Mixed Nonstationary Policy (MNP), which consists of both an auxiliary Markov process and the original Markov process. MNP tries to keep the balance between exploration and exploitation in reinforcement learning. Our experimental results show that the Q-learning agent using MNP converges to the optimal Q-values about 2.6 times faster than uniform random selection on average.
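The exploration/exploitation balance discussed in the abstract can be demonstrated with a generic epsilon-greedy rule. Note that this is not the paper's Mixed Nonstationary Policy (MNP); the toy chain environment, rewards, and parameters below are purely illustrative:

```python
import random

random.seed(4)

# Minimal epsilon-greedy Q-learning on a toy 5-state chain, a generic
# way to balance exploration and exploitation; this is NOT the paper's
# MNP, and all states, rewards, and parameters here are illustrative.
N_STATES = 5
ACTIONS = (0, 1)                  # 0 = move left, 1 = move right
GOAL = N_STATES - 1               # reaching the last state pays 1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

alpha, gamma, eps = 0.1, 0.9, 0.1
for episode in range(500):
    s = 0
    while s != GOAL:
        if random.random() < eps:  # explore
            a = random.choice(ACTIONS)
        else:                      # exploit, breaking ties at random
            a = max(ACTIONS, key=lambda a2: (Q[(s, a2)], random.random()))
        s2, r = step(s, a)
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a2: Q[(s, a2)]) for s in range(GOAL)]
print(greedy)  # the learned greedy policy should move right everywhere
```

Epsilon-greedy spends a fixed fraction of steps exploring; MNP instead combines an auxiliary Markov process with the original one to keep exploration and exploitation balanced, which the paper reports converges about 2.6 times faster than uniform random selection.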