• Title/Summary/Keyword: Monte Carlo learning (몬테 칼로 학습)

Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity

  • Kim, Chayoung; Park, Seohee; Lee, Woosik
    • Journal of Internet Computing and Services / v.21 no.5 / pp.1-7 / 2020
  • Deep neural networks (DNNs), used as approximation functions in reinforcement learning (RL), can in theory achieve realistic results. In empirical benchmark work, temporal-difference learning (TD) shows better results than Monte Carlo learning (MC). However, some previous works show that MC is better than TD when rewards are very sparse or delayed, and another recent study shows that when the agent observes only partial information from the environment in complex control tasks, MC prediction is superior to TD-based methods. Most of these environments can be regarded as 5-step or 20-step Q-learning, where the experiment proceeds without long roll-outs to alleviate performance degradation. In other words, for networks with noisy rewards, regardless of the controlled roll-outs, it is better to learn with MC, which is more robust to noisy rewards than TD, or with something almost identical to MC. These studies break with the assumption that TD is always better than MC, and recent results show that combining MC and TD can work better than either alone. Therefore, in this study, building on the results of previous studies, we exploit a random balance of TD and MC in RL, without the complicated reward formulas those studies use. Comparing a DQN that uses a random mixture of MC and TD against the well-known DQN that uses only TD-based learning, we demonstrate through experiments in OpenAI Gym that even well-performing TD learning benefits from the mixture of TD and MC.
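
A minimal sketch of the idea in this abstract: randomly balancing the one-step TD target against the full Monte Carlo return when computing a DQN training target. The uniform mixing weight and the function names here are illustrative assumptions, not the paper's exact scheme.

```python
import random

def mc_return(rewards, gamma):
    """Discounted Monte Carlo return G_t over the rest of the episode."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def mixed_target(rewards, next_q_max, gamma, done):
    """Randomly balance the one-step TD target and the MC return.

    rewards    -- rewards observed from time t to the episode end
    next_q_max -- max_a Q(s_{t+1}, a) from the target network
    done       -- True if s_{t+1} is terminal
    """
    td_target = rewards[0] + (0.0 if done else gamma * next_q_max)
    beta = random.random()  # fresh uniform weight per update (an assumption)
    return beta * td_target + (1.0 - beta) * mc_return(rewards, gamma)

# Example: a 4-step tail of an episode with a delayed reward of 5.
print(mixed_target([1.0, 0.0, 0.0, 5.0], next_q_max=2.3, gamma=0.99, done=False))
```

Because the weight is redrawn for every update, the learner stochastically interpolates between TD's low-variance bootstrapped target and MC's unbiased but noisier return, which matches the paper's claim of balancing the two without extra reward formulas.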

Pattern Classification Using Hybrid Monte Carlo Neural Networks

  • Jeon, Seong-Hae; Choe, Seong-Yong; O, Im-Geol; Lee, Sang-Ho; Jeon, Hong-Seok
    • The KIPS Transactions: Part B / v.8B no.3 / pp.231-236 / 2001
  • In a typical multilayer neural network, the error-backpropagation method used as the weight-update algorithm fixes each weight update to a single value. Because it collapses the many possible updates into one fixed value, it cannot accommodate the full range of possibilities. This problem is resolved by introducing an update algorithm that represents all possibilities as a probability distribution. Bayesian neural network models, which use such an algorithm, compute an output for a given input by passing it through each layer of a black-box-like network structure. The prediction for a given input is then the mean (expectation) of the posterior distribution. Because the posterior, obtained from the given prior distribution and the likelihood function of the training data, has a very complicated form, the integral required for this expectation is difficult to evaluate. Therefore, rather than numerical integration, Monte Carlo simulation, an approximation method based on stochastic estimation, can be used. The Hybrid Monte Carlo algorithm provides good results for this purpose (Neal 1996). In this paper we show that a neural network trained with the Hybrid Monte Carlo algorithm yields better results than existing classification algorithms such as CHAID, CART, and QUEST.
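
A minimal sketch of the Hybrid (Hamiltonian) Monte Carlo update described above, following Neal (1996): leapfrog integration of Hamiltonian dynamics plus a Metropolis accept/reject step. The step size, trajectory length, and the toy Gaussian log-posterior are assumptions for illustration; the paper applies the sampler to a neural network's posterior over weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_sample(log_post, grad_log_post, x0, n_samples=1000,
               step_size=0.05, n_leapfrog=20):
    """Draw samples from exp(log_post) with Hybrid Monte Carlo."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)      # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step_size * grad_log_post(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new += step_size * grad_log_post(x_new)
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_post(x_new)
        # Metropolis accept/reject on the total energy
        h_old = -log_post(x) + 0.5 * (p @ p)
        h_new = -log_post(x_new) + 0.5 * (p_new @ p_new)
        if rng.random() < np.exp(h_old - h_new):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy 2-D Gaussian posterior; the prediction of a Bayesian network is the
# posterior expectation, approximated here by the sample mean.
log_post = lambda x: -0.5 * np.sum(x ** 2)
grad_log_post = lambda x: -x
samples = hmc_sample(log_post, grad_log_post, np.zeros(2))
print(samples.mean(axis=0))  # close to the true posterior mean [0, 0]
```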

MCMC Algorithm for Dirichlet Distribution over Gridded Simplex

  • Sin, Bong-Kee
    • KIISE Transactions on Computing Practices / v.21 no.1 / pp.94-99 / 2015
  • With the recent machine learning paradigm of nonparametric Bayesian statistics and statistical inference based on random sampling, the Dirichlet distribution finds many uses in a variety of graphical models. It is a multivariate generalization of the beta distribution and is defined on the continuous (K-1)-simplex. This paper presents a method for sampling from a Dirichlet distribution for the problem of dividing an integer X into a sequence of K integers that sum to X. The target samples in our problem are the simplex points that become positive integer vectors when multiplied by the given X, so they must be drawn from the correspondingly gridded simplex. We develop a Markov chain Monte Carlo (MCMC) proposal distribution over the neighboring grid points on the simplex and then present the complete algorithm based on the Metropolis-Hastings algorithm. The proposed algorithm can be used with the Markov model, the HMM, and the semi-Markov model for accurate state-duration modeling, as well as with the Gamma-Dirichlet HMM to model global-local duration distributions.
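
A minimal sketch of Metropolis-Hastings sampling on the gridded simplex described above. The one-unit-transfer neighborhood move and the uniform choice of coordinate pairs are assumptions standing in for the paper's proposal distribution; since the move is symmetric, the acceptance test reduces to the Dirichlet density ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_log_pdf(p, alpha):
    """Unnormalized Dirichlet(alpha) log-density at a simplex point p."""
    return np.sum((alpha - 1.0) * np.log(p))

def sample_gridded_dirichlet(alpha, X, n_steps=10000):
    """Walk over {n / X : n a positive integer vector, sum(n) = X}."""
    K = len(alpha)
    n = np.full(K, X // K)        # start near the center of the simplex
    n[: X - n.sum()] += 1         # spread the remainder so sum(n) == X
    samples = []
    for _ in range(n_steps):
        i, j = rng.choice(K, size=2, replace=False)
        if n[i] > 1:              # keep every coordinate strictly positive
            cand = n.copy()
            cand[i] -= 1          # move one grid unit from coordinate i...
            cand[j] += 1          # ...to coordinate j (a symmetric proposal)
            log_ratio = (dirichlet_log_pdf(cand / X, alpha)
                         - dirichlet_log_pdf(n / X, alpha))
            if np.log(rng.random()) < log_ratio:
                n = cand
        samples.append(n.copy())
    return np.array(samples)

# Example: divide X = 20 into K = 3 positive integers under Dirichlet([2, 3, 5]).
draws = sample_gridded_dirichlet(np.array([2.0, 3.0, 5.0]), X=20)
print(draws[-5:])
```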